Hidden States Know Where Reasoning Diverges: Credit Assignment via Span-Level Wasserstein Distance

arXiv cs.CL / 4/28/2026


Key Points

  • The paper proposes Span-level Hidden state Enabled Advantage Reweighting (SHEAR), a refinement of Group Relative Policy Optimization (GRPO) for reinforcement learning with verifiable rewards (RLVR) that sharpens credit assignment beyond the single rollout-level advantage that standard GRPO assigns uniformly to every token.
  • It argues that hidden-state distributions of correct vs. incorrect rollouts diverge specifically around spans where local reasoning quality differs, and that the Wasserstein distance between these span-level distributions tracks that divergence.
  • The authors formalize the relationship with a separation theorem, showing that post-divergence spans exhibit larger Wasserstein distances than pre-divergence spans when the true distributional gap is sufficiently larger than finite-sample noise.
  • SHEAR uses only the outcome-level correctness labels already available in RLVR (no step-level annotations and no extra reward-model training): within each GRPO group it computes span-level Wasserstein distances and uses them to scale token-level advantages during training (see the sketch after this list).
  • Experiments on five mathematical reasoning benchmarks and five code generation benchmarks show gains over standard GRPO and competitive results against supervised process reward models, without any additional annotation or reward-model training.
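
To make the central quantity concrete, below is a minimal sketch of how span-level Wasserstein distances between the hidden states of correct and incorrect rollouts could be estimated within one GRPO group. It uses a sliced (random-projection) approximation built on `scipy.stats.wasserstein_distance`; the fixed span length, the pooling over rollouts, and the helper names `sliced_wasserstein` and `span_distances` are illustrative assumptions, since the paper's exact estimator is not specified in this summary.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def sliced_wasserstein(x, y, n_projections=64, seed=0):
    """Approximate the Wasserstein distance between two point clouds of
    hidden states (shapes [n, d] and [m, d]) by averaging 1-D Wasserstein
    distances along random projection directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return float(np.mean([wasserstein_distance(x @ u, y @ u) for u in dirs]))


def span_distances(correct_states, incorrect_states, span_len=16):
    """For each span of `span_len` tokens, pool the hidden states that
    correct and incorrect rollouts emit inside that span and measure how
    far apart the two pooled distributions are."""
    n_spans = min(s.shape[0] for s in correct_states + incorrect_states) // span_len
    dists = []
    for k in range(n_spans):
        lo, hi = k * span_len, (k + 1) * span_len
        pos = np.concatenate([s[lo:hi] for s in correct_states], axis=0)
        neg = np.concatenate([s[lo:hi] for s in incorrect_states], axis=0)
        dists.append(sliced_wasserstein(pos, neg))
    return np.array(dists)
```

Given a group of rollouts with cached per-token hidden states, `span_distances` returns one scalar per span; under the paper's claim, larger values should flag the spans where correct and incorrect reasoning have already diverged.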

Abstract

Group Relative Policy Optimization (GRPO) performs coarse-grained credit assignment in reinforcement learning with verifiable rewards (RLVR) by assigning the same advantage to all tokens in a rollout. Process reward models can provide finer-grained supervision, but they require step-level annotation or additional reward modeling. We show that hidden-state distributions contain a useful signal for local reasoning quality that can be extracted using only outcome-level correctness labels available in RLVR. Specifically, within each GRPO group, the Wasserstein distance between span-level hidden-state distributions of correct and incorrect rollouts increases around regions where their local reasoning quality diverges. This association holds both across examples and within individual trajectories, suggesting that hidden-state distributional divergence can serve as a self-supervision signal for fine-grained credit assignment. We formalize this observation with a separation theorem showing that, under mild structural assumptions, post-divergence spans have larger Wasserstein distances than pre-divergence spans whenever the population-level distributional gap exceeds finite-sample noise. Motivated by this result, we propose **S**pan-level **H**idden state **E**nabled **A**dvantage **R**eweighting (SHEAR), which modifies GRPO by using span-level Wasserstein distances to scale token-level advantages, amplifying updates on tokens whose hidden states are more separated from the opposing group. The method requires no additional model and only minimal changes to the training pipeline. Experiments on five mathematical reasoning benchmarks and five code generation benchmarks show improvements over standard GRPO and strong performance relative to supervised process reward models, while requiring no additional annotation or reward model training.
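
The advantage-reweighting step that SHEAR adds to GRPO admits an equally small sketch: each token's group-relative advantage is scaled by a weight derived from the Wasserstein distance of the span containing it, so spans whose hidden states are more separated from the opposing group receive larger updates. The mean normalization and the function name `reweight_advantages` below are assumptions for illustration; the abstract does not give SHEAR's exact weighting formula.

```python
import numpy as np


def reweight_advantages(token_advantages, span_dists, span_len=16, eps=1e-8):
    """Scale GRPO token-level advantages by a per-span weight derived from
    span-level Wasserstein distances (e.g. the output of `span_distances`).
    Weights are normalized to have mean ~1 so the overall update scale is
    roughly preserved; tokens beyond the last full span keep their
    original advantage."""
    weights = span_dists / (span_dists.mean() + eps)
    scaled = np.asarray(token_advantages, dtype=float).copy()
    for k, w in enumerate(weights):
        lo, hi = k * span_len, (k + 1) * span_len
        scaled[lo:hi] *= w
    return scaled
```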