Hidden States Know Where Reasoning Diverges: Credit Assignment via Span-Level Wasserstein Distance
arXiv cs.CL / 4/28/2026
Key Points
- The paper proposes Span-level Hidden state Enabled Advantage Reweighting (SHEAR), a refinement to Group Relative Policy Optimization (GRPO) for reinforcement learning with verifiable rewards (RLVR) that improves fine-grained credit assignment beyond the uniform per-token advantages of standard GRPO.
- It argues that hidden-state distributions of correct vs. incorrect rollouts diverge specifically around spans where local reasoning quality differs, and that the Wasserstein distance between these span-level distributions tracks that divergence.
- The authors formalize the relationship with a separation theorem, showing that post-divergence spans exhibit larger Wasserstein distances than pre-divergence spans when the true distributional gap is sufficiently larger than finite-sample noise.
- SHEAR uses only outcome-level correctness labels (no step-level annotations and no extra reward-model training) by computing span-level Wasserstein distances to scale token-level advantages during training.
- Experiments on five mathematical reasoning benchmarks and five code generation benchmarks demonstrate improvements over standard GRPO and competitive results versus supervised process reward models without additional data or modeling.
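The core mechanism described above can be sketched in a few lines: estimate a distance between the hidden-state distributions of correct and incorrect rollouts for each span, then scale that span's token advantages accordingly. The sketch below is a minimal illustration, not the paper's implementation: it assumes hidden states are compared via a sliced (random-projection) approximation of the 1-D Wasserstein distance, and the normalization scheme (`dists / mean`) is a hypothetical stand-in for whatever weighting SHEAR actually uses.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, n_proj=32, seed=0):
    """Approximate the Wasserstein distance between two sets of
    hidden states (shape [n, d]) by averaging exact 1-D Wasserstein
    distances over random unit-vector projections."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)  # unit direction for the 1-D slice
        total += wasserstein_distance(X @ v, Y @ v)
    return total / n_proj

def reweight_advantages(advantages, spans, span_dists):
    """Scale per-token advantages so spans whose hidden states diverge
    more between correct and incorrect rollouts receive more credit.
    `spans` is a list of (start, end) token-index pairs; `span_dists`
    holds the corresponding span-level distances. The mean-normalized
    weighting here is illustrative, not the paper's exact rule."""
    out = np.asarray(advantages, dtype=float).copy()
    weights = np.asarray(span_dists) / (np.mean(span_dists) + 1e-8)
    for (s, e), w in zip(spans, weights):
        out[s:e] *= w
    return out
```

On synthetic data, a span whose correct/incorrect hidden states come from shifted distributions yields a larger `sliced_wasserstein` value than a span drawn from matching distributions, which is the separation the paper's theorem formalizes; the reweighting then amplifies advantages on the diverging span.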