
Attention Sinks Are Provably Necessary in Softmax Transformers: Evidence from Trigger-Conditional Tasks

arXiv cs.LG / 3/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proves that computing a trigger-conditional behavior necessarily induces a sink in softmax self-attention due to normalization on the probability simplex, formalizing why attention tends to collapse onto a stable anchor to realize a default state.
  • It instantiates this with a concrete task: when a designated trigger token appears, the model must output the average of all preceding token representations, and otherwise return zero, tying the sink behavior to attention patterns observed in real models (see the sketch after this list).
  • The authors show that non-normalized ReLU attention can solve the same task without any sink, highlighting normalization as the fundamental driver of sink behavior.
  • Experiments demonstrate that softmax models develop strong sinks in both single-head and multi-head variants, while ReLU attention eliminates them, and the findings extend beyond the theoretically analyzed setting.
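
To make the setup concrete, here is a minimal sketch of the trigger-conditional target described above. This is assumed illustrative code, not the authors' implementation, and it adopts one plausible reading in which "preceding" means the tokens before the trigger position:

```python
import numpy as np

def trigger_conditional_target(token_ids, token_reprs, trigger_id):
    """Hypothetical target function for the paper's task.

    token_ids:   (T,) integer array of token identities.
    token_reprs: (T, d) array of token representations.
    Returns the average of representations before the first trigger occurrence,
    or a zero vector (the 'default state') if no trigger appears.
    """
    positions = np.flatnonzero(token_ids == trigger_id)
    if positions.size == 0:
        return np.zeros(token_reprs.shape[1])   # no trigger: ignore the input
    t = positions[0]                            # first occurrence of the trigger
    if t == 0:
        return np.zeros(token_reprs.shape[1])   # no preceding tokens to average
    return token_reprs[:t].mean(axis=0)         # average of all preceding tokens
```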

Abstract

Transformers often display an attention sink: probability mass concentrates on a fixed, content-agnostic position. We prove that computing a simple trigger-conditional behavior necessarily induces a sink in softmax self-attention models. Our results formalize a familiar intuition: normalization over a probability simplex must force attention to collapse onto a stable anchor to realize a default state (e.g., when the model needs to ignore the input). We instantiate this with a concrete task: when a designated trigger token appears, the model must return the average of all preceding token representations, and otherwise output zero, a task which mirrors the functionality of attention heads in the wild (Barbero et al., 2025; Guo et al., 2024). We also prove that non-normalized ReLU attention can solve the same task without any sink, confirming that the normalization constraint is the fundamental driver of sink behavior. Experiments validate our predictions and demonstrate they extend beyond the theoretically analyzed setting: softmax models develop strong sinks while ReLU attention eliminates them in both single-head and multi-head variants.
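
To illustrate the normalization argument, the following minimal sketch (assumed notation and code, not taken from the paper) contrasts softmax attention, whose weights are forced onto the probability simplex and therefore cannot all be zero, with un-normalized ReLU attention, which can realize an exact-zero default state:

```python
import numpy as np

def softmax_attention(scores, values):
    w = np.exp(scores - scores.max())
    w = w / w.sum()                    # forced onto the probability simplex
    return w @ values

def relu_attention(scores, values):
    w = np.maximum(scores, 0.0)        # no normalization: weights may all be zero
    return w @ values

scores = np.array([-3.0, -2.5, -4.0])  # the query matches nothing in the context
values = np.eye(3)
print(softmax_attention(scores, values))  # nonzero output: mass must go somewhere
print(relu_attention(scores, values))     # exact zero output: a true default state
```

On this toy query, softmax must still distribute a full unit of probability mass even though nothing matches, which is the pressure the paper identifies as forcing attention to park on a stable anchor, i.e., a sink.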