Weight Tying Biases Token Embeddings Towards the Output Space

arXiv cs.CL / 3/30/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates how weight tying (sharing parameters between the input embedding and output unembedding matrices) shapes the embedding space in language models, finding that the tied matrix aligns more closely with the unembedding/output space of comparable untied models than with their input embeddings.
  • It argues the tied matrix becomes biased toward output prediction because gradients from the output (unembedding) role dominate early in training, outweighing the gradients that would shape it for input representation.
  • Using tuned lens analysis, the authors show this bias degrades early-layer computations, which contribute less effectively to the residual stream.
  • The study provides causal evidence by showing that scaling input gradients during training reduces the unembedding bias, supporting the gradient-imbalance mechanism.
  • The results offer a mechanistic explanation for why weight tying can degrade performance at scale and suggest implications for smaller LLMs where the embedding matrix is a larger fraction of total parameters.
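The gradient imbalance behind these findings can be made concrete in a toy model. The sketch below (my illustration, not the paper's setup; sizes and token ids are arbitrary, and the transformer body is reduced to the identity) decomposes the cross-entropy gradient on a tied matrix `E` into its two roles: the output path touches every vocabulary row on every step, while the input path touches only the current input token's row — which is why output gradients dominate early in training.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 4                      # toy vocab size and embedding dim (illustrative)
E = rng.normal(0.0, 0.1, (V, d)) # the single tied embedding/unembedding matrix
t, y = 1, 3                      # input token id, target token id (arbitrary)

x = E[t]                         # input role: embed the token
logits = E @ x                   # output role: unembed with the same matrix
p = np.exp(logits - logits.max())
p /= p.sum()                     # softmax over the vocabulary
err = p.copy()
err[y] -= 1.0                    # dL/dlogits for cross-entropy loss -log p[y]

# The gradient on E splits into contributions from its two roles:
grad_out = np.outer(err, x)      # output path: every vocab row gets a gradient
grad_in = np.zeros_like(E)
grad_in[t] = E.T @ err           # input path: only the input token's row does
grad_E = grad_out + grad_in      # the tied matrix receives the sum
```

With a realistic corpus, each row's input-path gradient arrives only as often as its token occurs, while the output-path gradient arrives every step, so the shared matrix is pulled toward the output geometry.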

Abstract

Weight tying, i.e. sharing parameters between input and output embedding matrices, is common practice in language model design, yet its impact on the learned embedding space remains poorly understood. In this paper, we show that tied embedding matrices align more closely with output (unembedding) matrices than with input embeddings of comparable untied models, indicating that the shared matrix is shaped primarily for output prediction rather than input representation. This unembedding bias arises because output gradients dominate early in training. Using tuned lens analysis, we show this negatively affects early-layer computations, which contribute less effectively to the residual stream. Scaling input gradients during training reduces this bias, providing causal evidence for the role of gradient imbalance. This is mechanistic evidence that weight tying optimizes the embedding matrix for output prediction, compromising its role in input representation. These results help explain why weight tying can harm performance at scale and have implications for training smaller LLMs, where the embedding matrix contributes substantially to total parameter count.
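The causal intervention the abstract describes — scaling input gradients during training — can be sketched in PyTorch with a backward hook on the embedding output. This is my illustration of the general technique, not the paper's code: the sizes, the identity "body", and the scale factor `alpha` are all placeholder choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V, d = 50, 16                    # toy vocab and hidden sizes (illustrative)
alpha = 4.0                      # input-gradient scale factor (hypothetical value)

emb = nn.Embedding(V, d)
head = nn.Linear(d, V, bias=False)
head.weight = emb.weight         # weight tying: one shared parameter matrix

tokens = torch.randint(0, V, (8,))
targets = torch.randint(0, V, (8,))

x = emb(tokens)                  # input role of the tied matrix
# Scale only the gradient flowing back through the input (embedding) role;
# the output (unembedding) gradient through `head` is left untouched.
x.register_hook(lambda g: alpha * g)
logits = head(x)                 # identity "body" for brevity
loss = F.cross_entropy(logits, targets)
loss.backward()                  # emb.weight.grad now mixes both contributions
```

Because the hook fires before the gradient reaches `emb.weight`, only the input-path contribution is amplified, which is the lever the paper uses to show the unembedding bias shrinks when the two gradient streams are rebalanced.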