LASA: Language-Agnostic Semantic Alignment at the Semantic Bottleneck for LLM Safety

arXiv cs.LG / 4/15/2026


Key Points

  • The paper argues that LLM safety gaps between high- and low-resource languages come from language-dominant safety alignment that does not match the model’s language-agnostic semantic understanding.
  • It identifies a “semantic bottleneck” layer where representation geometry is driven more by shared semantics than by language identity.
  • Building on this, LASA (Language-Agnostic Semantic Alignment) anchors safety alignment at the semantic bottleneck rather than relying on surface-text cues.
  • Experiments report a major reduction in average attack success rate (ASR): from 24.7% to 2.8% on LLaMA-3.1-8B-Instruct, while staying at roughly 3–4% across Qwen2.5/Qwen3 Instruct models (7B–32B).
  • The work reframes LLM safety alignment as a representation-level problem, emphasizing alignment in the model’s language-agnostic semantic space for robust multilingual safety.

Abstract

Large language models (LLMs) often demonstrate strong safety performance in high-resource languages, yet exhibit severe vulnerabilities when queried in low-resource languages. We attribute this gap to a mismatch between language-agnostic semantic understanding ability and language-dominant safety alignment biased toward high-resource languages. Consistent with this hypothesis, we empirically identify the semantic bottleneck in LLMs, an intermediate layer in which the geometry of model representations is governed primarily by shared semantic content rather than language identity. Building on this observation, we propose Language-Agnostic Semantic Alignment (LASA), which anchors safety alignment directly at the semantic bottleneck. Experiments show that LASA substantially improves safety across all languages: average attack success rate (ASR) drops from 24.7% to 2.8% on LLaMA-3.1-8B-Instruct and remains around 3–4% across Qwen2.5 and Qwen3 Instruct models (7B–32B). Together, our analysis and method offer a representation-level perspective on LLM safety, suggesting that safety alignment requires anchoring safety understanding not in surface text, but in the model's language-agnostic semantic space.
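The paper's empirical step of locating a "semantic bottleneck" layer can be illustrated with a minimal sketch. The exact probing method LASA uses is not specified in this summary; the sketch below assumes one common approach: given per-layer sentence representations for parallel sentences in two languages, score each layer by the average cosine similarity between translation pairs, and pick the layer where that cross-lingual similarity peaks. The function name `find_semantic_bottleneck` and the synthetic hidden states are illustrative, not from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_semantic_bottleneck(states_a, states_b):
    """Given per-layer sentence representations for the same sentences in two
    languages (each list entry: array of shape (n_sentences, dim)), return the
    layer index where translation pairs align best, plus per-layer scores."""
    scores = []
    for ha, hb in zip(states_a, states_b):
        pair_sims = [cosine(x, y) for x, y in zip(ha, hb)]
        scores.append(sum(pair_sims) / len(pair_sims))
    return int(np.argmax(scores)), scores

# Toy demonstration: synthetic "hidden states" for 6 layers, where layer 3 is
# dominated by a shared semantic component and the other layers by
# language-specific noise.
rng = np.random.default_rng(0)
n_sents, dim, n_layers = 8, 16, 6
semantic = rng.normal(size=(n_sents, dim))  # shared meaning across languages
states_a, states_b = [], []
for layer in range(n_layers):
    weight = 2.0 if layer == 3 else 0.2
    states_a.append(weight * semantic + rng.normal(size=(n_sents, dim)))
    states_b.append(weight * semantic + rng.normal(size=(n_sents, dim)))

bottleneck, scores = find_semantic_bottleneck(states_a, states_b)
print(bottleneck)  # the semantics-dominated layer (3 in this toy setup)
```

In a real analysis, `states_a` and `states_b` would come from the LLM's hidden states on a parallel corpus (e.g., mean-pooled over tokens per layer); the layer where cross-lingual similarity peaks is a candidate semantic bottleneck at which to anchor safety alignment.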