Attention Is Where You Attack

arXiv cs.AI / 5/4/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces the Attention Redistribution Attack (ARA), a white-box method that bypasses the safety alignment of LLMs by redirecting attention away from safety-critical positions using nonsemantic adversarial tokens.
  • Instead of attacking model outputs or logits like many prior jailbreak approaches, ARA manipulates the geometry of softmax attention on the probability simplex by optimizing token choices over targeted attention heads via Gumbel-softmax (a minimal sketch of this optimization appears after this list).
  • Experiments on LLaMA-3-8B-Instruct, Mistral-7B-Instruct-v0.1, and Gemma-2-9B-it show ARA can achieve substantial attack success rates (e.g., up to 36% ASR on Mistral-7B and 30% on LLaMA-3) with as few as 5 adversarial tokens and 500 optimization steps.
  • The study finds a key mechanistic dissociation: ablating (zeroing) the most important safety heads flips almost no refusals, while redistributing attention in the same safety-heavy layers overturns far more of them, implying that safety behavior arises from attention routing rather than from removable head modules.
  • Gemma-2 demonstrates strong resistance in the reported results, with ARA success staying at about 1%, highlighting model-dependent differences in safety mechanism vulnerability.

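The core of ARA, as described above, is a discrete optimization problem: choose a handful of adversarial token ids whose presence shifts softmax attention mass away from safety-relevant positions, with the Gumbel-softmax relaxation making that discrete choice differentiable. The sketch below is a minimal, self-contained illustration of that idea on a toy single-head attention layer; the sizes, weights, and loss are illustrative assumptions and not the paper's implementation or its safety-head selection procedure.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- Toy setup (all sizes are illustrative, not the paper's) ---
vocab_size, d_model, seq_len, n_adv = 100, 32, 12, 3
embed = torch.nn.Embedding(vocab_size, d_model)
Wq = torch.nn.Linear(d_model, d_model, bias=False)
Wk = torch.nn.Linear(d_model, d_model, bias=False)
for p in list(embed.parameters()) + list(Wq.parameters()) + list(Wk.parameters()):
    p.requires_grad_(False)  # the "model" is frozen; only the token choice is optimized

# Fixed prompt token ids; positions 2-4 play the role of safety-critical tokens.
prompt_ids = torch.randint(0, vocab_size, (seq_len - n_adv,))
safety_positions = torch.tensor([2, 3, 4])

# Learnable logits over the vocabulary for each adversarial token slot
# (Gumbel-softmax relaxation of the discrete token choice).
adv_logits = torch.zeros(n_adv, vocab_size, requires_grad=True)
opt = torch.optim.Adam([adv_logits], lr=0.1)

for step in range(500):
    # Straight-through Gumbel-softmax: discrete one-hot tokens in the forward
    # pass, gradients through the soft sample in the backward pass.
    adv_onehot = F.gumbel_softmax(adv_logits, tau=1.0, hard=True)   # (n_adv, vocab)
    adv_emb = adv_onehot @ embed.weight                              # (n_adv, d_model)

    x = torch.cat([embed(prompt_ids), adv_emb], dim=0)               # (seq_len, d_model)
    q, k = Wq(x), Wk(x)
    attn = torch.softmax(q @ k.T / d_model ** 0.5, dim=-1)           # (seq_len, seq_len)

    # Objective: push attention from the last (generation) position
    # away from the safety-critical positions.
    loss = attn[-1, safety_positions].sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

adv_tokens = adv_logits.argmax(dim=-1)
print("adversarial token ids:", adv_tokens.tolist())
print("attention mass on safety positions (last step):", loss.item())
```

In the paper's setting, the same relaxation would be applied to safety-critical attention heads of a full safety-aligned LLM identified in advance, rather than to a randomly initialized toy layer, and the adversarial tokens would be appended to a harmful prompt.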
Abstract

Safety-aligned large language models rely on RLHF and instruction tuning to refuse harmful requests, yet the internal mechanisms implementing safety behavior remain poorly understood. We introduce the Attention Redistribution Attack (ARA), a white-box adversarial attack that identifies safety-critical attention heads and crafts nonsemantic adversarial tokens that redirect attention away from safety-relevant positions. Unlike prior jailbreak methods operating at the semantic or output-logit level, ARA targets the geometry of softmax attention on the probability simplex using Gumbel-softmax optimization over targeted heads. Across LLaMA-3-8B-Instruct, Mistral-7B-Instruct-v0.1, and Gemma-2-9B-it, ARA bypasses safety alignment with as few as 5 tokens and 500 optimization steps, achieving 36% ASR on Mistral-7B and 30% on LLaMA-3 against 200 HarmBench prompts, while Gemma-2 remains at 1%. Our principal mechanistic finding is a dissociation between ablation and redistribution: zeroing out the top-ranked safety heads produces at most 1 flip among 39 to 50 baseline refusals, while ARA targeting the corresponding safety-heavy layers flips 72/200 prompts on Mistral-7B and 60/200 on LLaMA-3. This suggests that safety is not localized in these heads as removable components, but emerges from the attention routing they perform. Removing a head allows compensation through the residual stream, while redirecting its attention propagates a corrupted signal downstream.
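
The ablation-versus-redistribution dissociation is easy to state mechanically: ablation zeroes a head's output, which downstream computation can in principle compensate for through the residual stream, while redistribution keeps the head active but routes its attention to the wrong positions, injecting a corrupted value signal. The toy sketch below contrasts the two interventions on a randomly initialized multi-head attention layer; everything here (sizes, weights, the choice to redirect all attention to position 0) is an illustrative assumption, not the paper's experimental setup, and the toy shows only the mechanics of the two interventions, not the paper's refusal-flip results.

```python
import torch

torch.manual_seed(0)
d_model, n_heads, seq_len = 32, 4, 10
d_head = d_model // n_heads

# Toy multi-head attention weights (illustrative only).
Wq = torch.randn(d_model, d_model) / d_model ** 0.5
Wk = torch.randn(d_model, d_model) / d_model ** 0.5
Wv = torch.randn(d_model, d_model) / d_model ** 0.5
Wo = torch.randn(d_model, d_model) / d_model ** 0.5
x = torch.randn(seq_len, d_model)

def attention(x, ablate_head=None, redirect_head=None):
    """Multi-head attention with two optional interventions:
    - ablate_head: zero that head's output (head removal),
    - redirect_head: overwrite that head's attention pattern so all
      probability mass lands on position 0 instead of where it should.
    """
    q = (x @ Wq).view(seq_len, n_heads, d_head)
    k = (x @ Wk).view(seq_len, n_heads, d_head)
    v = (x @ Wv).view(seq_len, n_heads, d_head)
    outputs = []
    for h in range(n_heads):
        attn = torch.softmax(q[:, h] @ k[:, h].T / d_head ** 0.5, dim=-1)
        if redirect_head == h:
            attn = torch.zeros_like(attn)
            attn[:, 0] = 1.0                          # all attention routed to position 0
        head_out = attn @ v[:, h]
        if ablate_head == h:
            head_out = torch.zeros_like(head_out)     # head contributes nothing
        outputs.append(head_out)
    return torch.cat(outputs, dim=-1) @ Wo

clean      = attention(x)
ablated    = attention(x, ablate_head=0)
redirected = attention(x, redirect_head=0)

# How far the layer output moves under each intervention (toy numbers only).
print("ablation delta:   ", (ablated - clean).norm().item())
print("redirection delta:", (redirected - clean).norm().item())
```

The paper's claim is that in real safety-aligned models the downstream effect of redirection on refusal behavior is far larger than that of ablation, which this random toy does not reproduce; it only makes concrete what each intervention does to a head.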
