AI Navigate

Lost in Backpropagation: The LM Head is a Gradient Bottleneck

arXiv cs.CL / 3/12/2026


Key Points

  • The LM head’s V-dimensional gradient backpropagates through a rank-D linear layer, causing unavoidable compression and altering the feedback used to train most parameters.
  • The authors quantify that 95-99% of the gradient norm is suppressed by the output layer, leading to suboptimal update directions.
  • They show the softmax bottleneck is an optimization bottleneck as well as an expressivity bottleneck, not merely a limit on representational capacity.
  • Controlled pretraining experiments reveal that trivial patterns become unlearnable and training dynamics are drastically altered at scale.
  • The work argues for new LM head designs to address this intrinsic flaw and improve training efficiency across language models.
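The mechanism behind these points can be sketched in a few lines (notation assumed from the abstract: hidden features of dimension D, vocabulary size V, LM head weights W; a sketch, not the paper's exact derivation):

```latex
% Forward pass: the LM head projects D-dim features to V-dim logits.
z = W h, \qquad W \in \mathbb{R}^{V \times D},\; h \in \mathbb{R}^{D},\; D \ll V
% Backward pass: the V-dim logit gradient is pulled back through W^T.
\frac{\partial \mathcal{L}}{\partial h} = W^{\top}\,\frac{\partial \mathcal{L}}{\partial z}
% Since rank(W) <= D, any component of dL/dz orthogonal to col(W) is
% annihilated: at most D of the V gradient directions reach the features.
```

Every parameter below the head receives feedback only through this rank-D pullback, which is why the compression affects "the vast majority of the parameters."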

Abstract

The last layer of neural language models (LMs) projects output features of dimension D to logits in dimension V, the size of the vocabulary, where usually D ≪ V. This mismatch is known to raise risks of limited expressivity in neural LMs, creating a so-called softmax bottleneck. We show the softmax bottleneck is not only an expressivity bottleneck but also an optimization bottleneck. Backpropagating V-dimensional gradients through a rank-D linear layer induces unavoidable compression, which alters the training feedback provided to the vast majority of the parameters. We present a theoretical analysis of this phenomenon and measure empirically that 95-99% of the gradient norm is suppressed by the output layer, resulting in vastly suboptimal update directions. We conduct controlled pretraining experiments showing that the gradient bottleneck makes trivial patterns unlearnable, and drastically affects the training dynamics of LLMs. We argue that this inherent flaw contributes to training inefficiencies at scale independently of the model architecture, and raises the need for new LM head designs.
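The compression the abstract describes can be reproduced in a toy numpy experiment. This is a hedged sketch, not the paper's measurement protocol: the head weights and the logit gradient are drawn at random (the dimensions D = 64 and V = 8000 are illustrative), whereas the paper measures real gradients during pretraining. For a random gradient direction, roughly D/V of the squared norm survives the rank-D pullback, so almost all of it is suppressed:

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 64, 8_000             # illustrative hidden size and vocabulary size

# Random LM head: projects D-dim features to V-dim logits.
W = rng.standard_normal((V, D)) / np.sqrt(D)

# A V-dimensional gradient w.r.t. the logits (random direction here;
# in training this would be softmax(z) - one_hot(target)).
g = rng.standard_normal(V)

# Backpropagating through the head keeps only the component of g lying
# in the D-dimensional column space of W; project g onto it via QR.
Q, _ = np.linalg.qr(W)       # orthonormal basis of col(W), shape (V, D)
g_kept = Q @ (Q.T @ g)       # the part of g that survives the bottleneck

suppressed = 1 - np.linalg.norm(g_kept) ** 2 / np.linalg.norm(g) ** 2
print(f"fraction of squared gradient norm suppressed: {suppressed:.3f}")
```

With these dimensions the suppressed fraction lands near 1 - D/V ≈ 0.99, in the same regime as the 95-99% the authors report, though their figure comes from actual LM gradients rather than random ones.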