Latent Semantic Manifolds in Large Language Models

arXiv cs.AI / March 25, 2026


Key Points

  • The paper proposes a geometric framework viewing LLM hidden states as points on a latent semantic manifold (a Riemannian submanifold with the Fisher information metric), where discrete tokens correspond to Voronoi regions on that manifold.
  • It introduces the “expressibility gap” as a geometric measure of how much semantics are distorted by mapping continuous internal representations to a finite vocabulary, and proves rate-distortion lower bounds plus a linear volume scaling law using the coarea formula.
  • Experiments across six transformer architectures (124M–1.5B parameters) reportedly validate universal hourglass-like intrinsic-dimension profiles and consistent curvature structure.
  • The study finds linear scaling of the expressibility gap with vocabulary discretization (slopes 0.87–1.12, R^2 > 0.985) and identifies a persistent “hard core” of boundary-proximal representations that helps decompose perplexity.
  • The authors discuss downstream implications for architecture design, model compression, decoding strategies, and broader scaling laws for LLMs.
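The "hard core" of boundary-proximal representations described above can be illustrated with a small sketch. In the Voronoi picture, a hidden state whose top two token logits are nearly tied sits close to a boundary between token regions, so a simple proxy for boundary proximity is the margin between the two largest logits. The arrays, sizes, and 10% threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def token_margins(hidden_states, unembedding):
    """For each hidden state, compute the logit margin between the
    top-1 and top-2 tokens. Under the Voronoi picture, a small margin
    means the representation lies near a boundary between two token
    regions ("boundary-proximal")."""
    logits = hidden_states @ unembedding.T          # (n, vocab)
    top2 = np.sort(logits, axis=1)[:, -2:]          # two largest per row
    return top2[:, 1] - top2[:, 0]                  # top1 - top2 >= 0

# Toy stand-ins for real model tensors (hypothetical shapes).
rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 64))      # hidden states, d_model = 64
W = rng.normal(size=(500, 64))       # unembedding matrix, vocab = 500
margins = token_margins(H, W)

# Flag the smallest-margin 10% as the candidate "hard core".
hard_core = margins < np.quantile(margins, 0.1)
print(hard_core.sum())               # → 100
```

On real models one would extract `H` from the final layer and `W` from the output embedding, then check whether the low-margin set stays stable across model scales, as the paper reports.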

Abstract

Large Language Models (LLMs) perform internal computations in continuous vector spaces yet produce discrete tokens -- a fundamental mismatch whose geometric consequences remain poorly understood. We develop a mathematical framework that interprets LLM hidden states as points on a latent semantic manifold: a Riemannian submanifold equipped with the Fisher information metric, where tokens correspond to Voronoi regions partitioning the manifold. We define the expressibility gap, a geometric measure of the semantic distortion from vocabulary discretization, and prove two theorems: a rate-distortion lower bound on distortion for any finite vocabulary, and a linear volume scaling law for the expressibility gap via the coarea formula. We validate these predictions across six transformer architectures (124M-1.5B parameters), confirming universal hourglass intrinsic dimension profiles, smooth curvature structure, and linear gap scaling with slopes 0.87-1.12 (R^2 > 0.985). The margin distribution across models reveals a persistent hard core of boundary-proximal representations invariant to scale, providing a geometric decomposition of perplexity. We discuss implications for architecture design, model compression, decoding strategies, and scaling laws.
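The "hourglass intrinsic dimension profile" in the abstract refers to layer-wise estimates of how many effective dimensions the hidden states occupy. One common estimator (not necessarily the one the paper uses) is TwoNN, which infers dimension from the ratio of each point's second- to first-nearest-neighbor distance. The sketch below is a minimal, assumption-laden illustration: it applies the TwoNN maximum-likelihood estimate to synthetic data of known intrinsic dimension rather than to real model activations.

```python
import numpy as np

def two_nn_id(X):
    """TwoNN intrinsic-dimension estimate: for each point, take the
    ratio mu = r2/r1 of its 2nd- and 1st-nearest-neighbor distances;
    the MLE is d = N / sum(log mu). Brute-force distances, so this
    suits small N only."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    d2 = np.maximum(d2, 0.0)                         # clip roundoff
    np.fill_diagonal(d2, np.inf)                     # ignore self-distance
    r12 = np.sqrt(np.sort(d2, axis=1)[:, :2])        # r1, r2 per point
    mu = r12[:, 1] / r12[:, 0]
    return len(X) / np.log(mu).sum()

# Sanity check: a 3-D Gaussian linearly embedded in 64 ambient
# dimensions should yield an estimate near 3.
rng = np.random.default_rng(1)
Z = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 64))
d_hat = two_nn_id(Z)
print(round(d_hat, 1))
```

To reproduce an hourglass profile, one would run this per-layer on hidden states from a real model and plot the estimate against depth, expecting a dip in middle layers if the paper's finding holds.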