
Attention Sinks Induce Gradient Sinks

arXiv cs.LG · March 19, 2026


Key Points

  • The paper investigates the link between attention sinks and massive activations in Transformer models by analyzing backpropagation under causal masking.
  • It shows that attention sinks can induce pronounced gradient concentration, which the authors term gradient sinks; a toy demonstration of this effect follows the list.
  • In pre-norm architectures with RMSNorm, massive activations may be an adaptive response to localized gradient pressure during training.
  • They introduce V-scale, a modification that rescales gradients backpropagated through the value path; models pretrained with V-scale preserve attention sinks while suppressing massive activations.
  • The results support the gradient sink as a key training-time mediator linking attention sinks and massive activations.
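To make the gradient-sink effect concrete, here is a minimal, self-contained PyTorch sketch (not the authors' code): it runs one head of causal attention on random data, forces an attention sink at position 0 with an additive score bias, and prints per-position gradient norms on the input. The sequence length, width, and bias value are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
T, d = 8, 16                                  # toy sequence length and width
x = torch.randn(T, d, requires_grad=True)

# Random projection weights for a single attention head.
Wq, Wk, Wv = (torch.randn(d, d) * d**-0.5 for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

# Causal mask: position t may attend only to positions <= t.
mask = torch.tril(torch.ones(T, T)).bool()
scores = (q @ k.T) / d**0.5
scores = scores.masked_fill(~mask, float("-inf"))

# Emulate an attention sink: bias every query's score toward token 0.
sink_bias = torch.zeros(T, T)
sink_bias[:, 0] = 5.0
attn = (scores + sink_bias).softmax(dim=-1)

(attn @ v).sum().backward()

# The gradient norm concentrates at the sink position (index 0).
print(x.grad.norm(dim=-1))
```

Because every query places large attention mass on token 0, the backward pass routes a disproportionate share of the gradient into that position, which is the concentration the paper calls a gradient sink.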

Abstract

Attention sinks and massive activations are recurring and closely related phenomena in Transformer models. Existing studies have largely focused on the forward pass, making it unclear whether their connection is direct or mediated by a training-time mechanism. We study this question from the perspective of backpropagation. Empirically and theoretically, we show that under a causal mask, attention sinks can induce pronounced gradient concentration, which we term gradient sinks. Furthermore, in pre-norm architectures with RMSNorm, massive activations can be understood as an adaptive response to this localized gradient pressure during training. To test this hypothesis, we introduce V-scale, a modification that adjusts value-path backpropagated gradients. In models pretrained with V-scale, attention sinks are preserved whereas massive activations are suppressed. These results support the interpretation that the gradient sink is a key training-time mediator linking attention sinks and massive activations.
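As a rough illustration of the kind of intervention the abstract describes, the sketch below keeps the forward pass unchanged and rescales only the gradient flowing back through the value path. The paper's exact V-scale formulation is not reproduced here; the ValueGradScale class, the attention_with_vscale helper, and the 0.1 damping factor are hypothetical stand-ins.

```python
import torch

class ValueGradScale(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient
    by `scale` in the backward pass."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.scale, None

def attention_with_vscale(q, k, v, scale=0.1):
    """Single-head causal attention whose value-path gradient is damped.
    Query/key gradients are left untouched."""
    v = ValueGradScale.apply(v, scale)
    T, d = q.shape
    mask = torch.tril(torch.ones(T, T, device=q.device)).bool()
    scores = (q @ k.T) / d**0.5
    attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
    return attn @ v

# Usage on toy tensors: the forward output is unchanged, but the
# gradient reaching x through the value path is scaled down.
T, d = 8, 16
x = torch.randn(T, d, requires_grad=True)
attention_with_vscale(x, x, x).sum().backward()
print(x.grad.norm(dim=-1))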