Stabilizing Efficient Reasoning with Step-Level Advantage Selection

arXiv cs.CL / 4/28/2026


Key Points

  • The study shows that post-training LLMs for efficient reasoning using a shorter context window (with standard GRPO and no length-aware objective) can compress reasoning traces but can also destabilize training and reduce accuracy.
  • It identifies a key limitation in prior efficient-reasoning methods: they often rely on length-based rewards or pruning while being post-trained under a shorter context window than the base model was trained with, a confound whose effect had not previously been isolated.
  • To improve stability and outcomes, the authors propose Step-level Advantage Selection (SAS), which assigns advantages at the level of individual reasoning steps based on confidence and rollout outcomes.
  • SAS gives zero advantage to low-confidence steps within correct rollouts and to high-confidence steps within verifier-failed rollouts, aiming to better handle failures caused by truncation or verifier issues rather than flawed reasoning.
  • Experiments across mathematical and general reasoning benchmarks show SAS boosts average Pass@1 accuracy by 0.86 points over the best length-aware baseline while reducing average reasoning length by 16.3%, improving the accuracy–efficiency balance.
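The SAS rule described above can be sketched as a simple advantage-masking function. This is an illustrative reconstruction only, not the authors' implementation: the function name, the use of a fixed confidence threshold, and the per-step confidence input are all assumptions.

```python
from typing import List

def sas_step_advantages(
    rollout_advantage: float,       # GRPO-style group-relative advantage for the whole rollout
    step_confidences: List[float],  # per-step model confidence (e.g., mean token probability); assumed input
    rollout_correct: bool,          # verifier verdict for this rollout
    conf_threshold: float = 0.8,    # illustrative cutoff separating low- from high-confidence steps
) -> List[float]:
    """Assign per-step advantages, zeroing out steps whose confidence
    contradicts the rollout-level outcome, per the SAS rule."""
    advantages = []
    for conf in step_confidences:
        if rollout_correct and conf < conf_threshold:
            advantages.append(0.0)   # low-confidence step in a correct rollout: withhold credit
        elif not rollout_correct and conf >= conf_threshold:
            advantages.append(0.0)   # high-confidence step in a failed rollout: withhold blame
            # (the failure may stem from truncation or the verifier, not this step)
        else:
            advantages.append(rollout_advantage)
    return advantages
```

Under this sketch, steps whose local confidence agrees with the rollout outcome inherit the rollout's advantage, while conflicting steps are simply excluded from the policy-gradient signal rather than penalized.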

Abstract

Large language models (LLMs) achieve strong reasoning performance by allocating substantial computation at inference time, often generating long and verbose reasoning traces. While recent work on efficient reasoning reduces this overhead through length-based rewards or pruning, many approaches are post-trained under a much shorter context window than base-model training, a factor whose effect has not been systematically isolated. We first show that short-context post-training alone, using standard GRPO without any length-aware objective, already induces substantial reasoning compression, but at the cost of increasingly unstable training dynamics and accuracy degradation. To address this, we propose Step-level Advantage Selection (SAS), which operates at the reasoning-step level and assigns a zero advantage to low-confidence steps in correct rollouts and to high-confidence steps in verifier-failed rollouts, where failures often arise from truncation or verifier issues rather than incorrect reasoning. Across diverse mathematical and general reasoning benchmarks, SAS improves average Pass@1 accuracy by 0.86 points over the strongest length-aware baseline while reducing average reasoning length by 16.3%, yielding a better accuracy-efficiency trade-off.