On the Geometric Structure of Layer Updates in Deep Language Models

arXiv cs.AI / 4/6/2026


Key Points

  • The paper analyzes how hidden representations change between layers in deep language models, focusing on the geometric structure of layer updates rather than what is encoded internally.
  • It finds that, across multiple architectures (including Transformers and state-space models), the full layer update is almost perfectly aligned with a dominant tokenwise component, while the residual part is geometrically distinct.
  • The residual component shows weaker alignment, larger angular deviation, and lower projection onto the dominant tokenwise subspace, indicating it is not simply a small correction.
  • The authors show that approximation error under a restricted tokenwise function class correlates strongly with output perturbations, with Spearman correlations often above 0.7 and up to 0.95 in larger models.
  • They propose an architecture-agnostic framework to probe the geometric and functional structure of layer updates in modern language models.
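The decomposition behind these points can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it stands in for a layer update with synthetic data, takes "tokenwise component" to mean the best single linear map applied independently to each token (one plausible restricted function class), and measures alignment as flattened cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tok, d = 64, 32  # hypothetical sequence length and hidden size

# Stand-ins for real activations: hidden states entering a layer,
# and the update U = H_out - H_in that the layer applies.
H_in = rng.standard_normal((n_tok, d))
U = 0.9 * (H_in @ rng.standard_normal((d, d))) \
    + 0.1 * rng.standard_normal((n_tok, d))

# Tokenwise fit: one linear map W applied independently per token.
W, *_ = np.linalg.lstsq(H_in, U, rcond=None)
U_tok = H_in @ W   # dominant tokenwise component
R = U - U_tok      # residual not captured by the tokenwise class

def cosine(a, b):
    """Cosine similarity between two matrices, flattened."""
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

align_full = cosine(U, U_tok)  # near 1: update is mostly tokenwise
align_res = cosine(R, U_tok)   # near 0: residual is geometrically distinct
print(align_full, align_res)
```

By construction of the least-squares fit, the residual is orthogonal to the fitted component, so its alignment is essentially zero: the toy model reproduces the qualitative picture of a dominant tokenwise direction plus a geometrically distinct residual.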

Abstract

We study the geometric structure of layer updates in deep language models. Rather than analyzing what information is encoded in intermediate representations, we ask how representations change from one layer to the next. We show that layerwise updates admit a decomposition into a dominant tokenwise component and a residual that is not captured by restricted tokenwise function classes. Across multiple architectures, including Transformers and state-space models, we find that the full layer update is almost perfectly aligned with the tokenwise component, while the residual exhibits substantially weaker alignment, larger angular deviation, and significantly lower projection onto the dominant tokenwise subspace. This indicates that the residual is not merely a small correction, but a geometrically distinct component of the transformation. This geometric separation has functional consequences: approximation error under the restricted tokenwise model is strongly associated with output perturbation, with Spearman correlations often exceeding 0.7 and reaching up to 0.95 in larger models. Together, these results suggest that most layerwise updates behave like structured reparameterizations along a dominant direction, while functionally significant computation is concentrated in a geometrically distinct residual component. Our framework provides a simple, architecture-agnostic method for probing the geometric and functional structure of layer updates in modern language models.
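The reported association between tokenwise approximation error and output perturbation is a Spearman rank correlation. A small sketch of how such a statistic is computed, on synthetic stand-in data (the error/perturbation values here are invented, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-token quantities: the residual norm left by the
# restricted tokenwise model (approximation error), and the change in
# model output when that residual is removed (output perturbation),
# generated here with a noisy monotone relation.
err = rng.gamma(2.0, 1.0, size=200)
perturb = err + 0.3 * rng.standard_normal(200)

def spearman(x, y):
    """Spearman rho: Pearson correlation of the ranks (no tie handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry)))

rho = spearman(err, perturb)
print(round(rho, 3))
```

Rank correlation only asks whether larger approximation errors tend to produce larger output perturbations, which is why it suits the paper's claim: the relationship need not be linear, only monotone, for rho to approach 1.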