Enhancing Safety of Large Language Models via Embedding Space Separation

arXiv cs.AI / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper addresses LLM safety by leveraging findings that embeddings of harmful vs. safe queries are often linearly separable, which has enabled attacks that move harmful representations toward safe ones.
  • It introduces a fine-tuning method called Embedding Space Separation (ES2) that improves safety by explicitly increasing the distance between harmful and safe representations in the embedding space.
  • To avoid harming the model’s overall abilities, the method adds a KL-divergence regularization term that keeps the fine-tuned model’s logits aligned with the base model on harmless inputs.
  • Experiments on multiple open-source LLMs using standard safety benchmarks show substantial safety improvements while preserving general capabilities.

Abstract

Large language models (LLMs) have achieved impressive capabilities, yet ensuring their safety against harmful prompts remains a critical challenge. Recent work has revealed that the latent representations (embeddings) of harmful and safe queries in LLMs typically exhibit linear separability, a property that has been exploited to construct attacks by perturbing the embeddings of harmful queries towards the safe subspace. Motivated by this observation, we propose a representation-level fine-tuning approach, named Embedding Space Separation (ES2), which improves LLM safety by explicitly enlarging the distance between harmful and safe representations in the embedding space. To prevent degradation of the model's general capabilities, we introduce a Kullback-Leibler (KL) divergence regularization term into the loss function, which constrains the logits of the fine-tuned model to align with those of the original base model on harmless inputs. We evaluate our method on several open-source LLMs using standard safety benchmarks. Extensive experimental results demonstrate that our approach substantially improves model safety while maintaining comparable general capabilities.
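The paper does not include an implementation, but the shape of the objective can be sketched from the abstract: a separation term that pushes harmful and safe embedding centroids apart, plus a KL term tying the fine-tuned model's logits to the base model's on harmless inputs. The sketch below is illustrative only; the function names (`es2_loss`, `separation_loss`) and the weighting hyperparameter `beta` are assumptions, not the authors' notation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p_logits, q_logits):
    # KL(P || Q) between two categorical distributions given as logits.
    p = softmax(p_logits)
    q = softmax(q_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def mean_embedding(embs):
    # Centroid of a list of embedding vectors (lists of floats).
    dim = len(embs[0])
    return [sum(e[i] for e in embs) / len(embs) for i in range(dim)]

def separation_loss(harmful_embs, safe_embs):
    # Negative squared distance between the harmful and safe centroids:
    # minimizing this term pushes the two classes apart in embedding space.
    h = mean_embedding(harmful_embs)
    s = mean_embedding(safe_embs)
    return -sum((hi - si) ** 2 for hi, si in zip(h, s))

def es2_loss(harmful_embs, safe_embs, ft_logits, base_logits, beta=0.1):
    # Hypothetical combined objective: separation term plus KL regularization
    # keeping fine-tuned logits close to the base model on harmless inputs.
    # `beta` is an assumed trade-off weight, not taken from the paper.
    sep = separation_loss(harmful_embs, safe_embs)
    kl = sum(kl_divergence(f, b)
             for f, b in zip(ft_logits, base_logits)) / len(ft_logits)
    return sep + beta * kl
```

Under this reading, safety fine-tuning trades off two pressures: the separation term only sees harmful-vs-safe geometry, while the KL term anchors behavior on benign queries, which is what preserves general capability.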