AI Navigate

Beyond Scalars: Evaluating and Understanding LLM Reasoning via Geometric Progress and Stability

arXiv cs.AI / 3/12/2026


Key Points

  • TRACED is a new framework for evaluating LLM reasoning that uses geometric kinematics instead of traditional scalar probabilities.
  • It decomposes reasoning traces into Progress (displacement) and Stability (curvature) to reveal how reasoning unfolds over time.
  • The authors find that correct reasoning tends to produce high-progress, stable trajectories, while hallucinations correspond to low-progress, unstable patterns (stalled displacement with large curvature fluctuations); the framework maps high curvature to "Hesitation Loops" and displacement to "Certainty Accumulation".
  • The probabilistic TRACED framework achieves competitive performance and improved robustness across diverse benchmarks, illustrating a bridge between geometry and cognition in LLMs.

Abstract

Evaluating LLM reliability via scalar probabilities often fails to capture the structural dynamics of reasoning. We introduce TRACED, a framework that assesses reasoning quality through theoretically grounded geometric kinematics. By decomposing reasoning traces into Progress (displacement) and Stability (curvature), we reveal a distinct topological divergence: correct reasoning manifests as high-progress, stable trajectories, whereas hallucinations are characterized by low-progress, unstable patterns (stalled displacement with high curvature fluctuations). Leveraging these signatures, our probabilistic framework achieves competitive performance and superior robustness across diverse benchmarks. Crucially, TRACED bridges geometry and cognition by mapping high curvature to "Hesitation Loops" and displacement to "Certainty Accumulation", offering a physical lens to decode the internal dynamics of machine thought.
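To make the Progress/Stability decomposition concrete, here is a minimal sketch of how one might compute displacement and discrete curvature over a reasoning trace represented as a sequence of vectors. The function name and the exact formulas are illustrative assumptions, not the paper's definitions: we read "Progress" as net start-to-end displacement and "Stability" as the (inverse of the) mean turning angle between consecutive step directions.

```python
import numpy as np

def progress_and_stability(trace):
    """Illustrative kinematics of a reasoning trace (assumed formulation).

    `trace` is a (T, d) array of per-step representation vectors.
    Progress  ~ net displacement from first to last point.
    Curvature ~ mean turning angle between consecutive step vectors;
    a stable trajectory has low curvature, a "hesitation loop" high.
    """
    trace = np.asarray(trace, dtype=float)
    steps = np.diff(trace, axis=0)  # per-step displacement vectors
    progress = float(np.linalg.norm(trace[-1] - trace[0]))
    norms = np.linalg.norm(steps, axis=1)
    # Cosine between consecutive steps; epsilon guards zero-length steps.
    cos = np.einsum("ij,ij->i", steps[:-1], steps[1:]) / (norms[:-1] * norms[1:] + 1e-12)
    mean_curvature = float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())
    return progress, mean_curvature

# A straight-line trace: high progress, near-zero curvature.
straight = np.linspace([0.0, 0.0], [10.0, 0.0], 11)
p_s, c_s = progress_and_stability(straight)

# An oscillating trace: stalled displacement, large turning angles.
zigzag = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, -1.0], [0.0, 0.0]])
p_z, c_z = progress_and_stability(zigzag)
```

Under this reading, the paper's topological divergence shows up directly: the straight trace scores high progress with flat curvature, while the zigzag trace has zero net progress and large mean curvature, the "low-progress, unstable" signature associated with hallucinations.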