Attention Sink in Transformers: A Survey on Utilization, Interpretation, and Mitigation

arXiv cs.LG / 4/14/2026


Key Points

  • The paper presents the first comprehensive survey of “Attention Sink” (AS) in Transformers, the phenomenon in which models disproportionately attend to a small set of uninformative tokens.
  • It explains how AS affects both training and inference dynamics, making Transformer interpretability harder and potentially worsening downstream issues like hallucinations.
  • The survey organizes existing AS research into three dimensions: fundamental utilization (where/how AS appears and is leveraged), mechanistic interpretation (why it happens), and strategic mitigation (how to reduce negative effects).
  • By consolidating concepts and the evolution/trends of the field, the paper aims to serve as a reference for researchers and practitioners to manage AS under today’s Transformer paradigm.
  • It also points readers to a curated list of related resources via the provided GitHub repository (“Awesome-Attention-Sink”).
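The sink phenomenon described above is easy to see numerically: averaged over queries, a disproportionate share of post-softmax attention mass lands on one token (often the first). The sketch below is illustrative only and not from the paper; the `sink_mass` helper and the synthetic logit boost are assumptions made for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sink_mass(attn, token_idx=0):
    """Average fraction of attention each query assigns to `token_idx`.
    `attn` is a (num_queries, num_keys) matrix whose rows sum to 1."""
    return float(attn[:, token_idx].mean())

rng = np.random.default_rng(0)
T = 8
logits = rng.normal(size=(T, T))
logits[:, 0] += 4.0  # artificially boost token 0, mimicking a sink token
causal_mask = np.triu(np.ones((T, T), dtype=bool), k=1)
logits[causal_mask] = -np.inf  # queries cannot attend to future keys
attn = softmax(logits, axis=-1)

print(f"attention mass on token 0: {sink_mass(attn):.2f}")
```

Running this prints a token-0 mass far above the uniform baseline of roughly 1/T, which is the signature the surveyed diagnostics look for.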

Abstract

As the foundational architecture of modern machine learning, Transformers have driven remarkable progress across diverse AI domains. Despite their transformative impact, a persistent challenge across various Transformers is Attention Sink (AS), in which a disproportionate amount of attention is focused on a small subset of specific yet uninformative tokens. AS complicates interpretability, significantly affects training and inference dynamics, and exacerbates issues such as hallucinations. In recent years, substantial research has been dedicated to understanding and harnessing AS. However, a comprehensive survey that systematically consolidates AS-related research and offers guidance for future advancements remains lacking. To address this gap, we present the first survey on AS, structured around three key dimensions that define the current research landscape: Fundamental Utilization, Mechanistic Interpretation, and Strategic Mitigation. Our work provides a pivotal contribution by clarifying key concepts and guiding researchers through the evolution and trends of the field. We envision this survey as a definitive resource, empowering researchers and practitioners to effectively manage AS within the current Transformer paradigm, while simultaneously inspiring innovative advancements for the next generation of Transformers. The paper list of this work is available at https://github.com/ZunhaiSu/Awesome-Attention-Sink.