LLM Reasoning with Process Rewards for Outcome-Guided Steps

arXiv cs.AI / 4/6/2026


Key Points

  • The paper argues that existing reinforcement-learning setups for LLM math often rely on outcome-only verification, which gives sparse feedback on multi-step reasoning and limited insight into intermediate mistakes.
  • It identifies a key risk with process reward models (PRMs): if used as absolute optimization targets, they can become misaligned with final correctness and incentivize “fluent but wrong” reasoning or reward hacking.
  • The authors propose PROGRS, a framework that uses PRM scores as relative preferences within outcome groups, making outcome correctness dominant while still leveraging denser intermediate-step supervision.
  • PROGRS introduces outcome-conditioned centering to remove systematic bias in PRM scores for incorrect trajectories, and pairs a frozen quantile-regression PRM with a multi-scale coherence evaluator.
  • Integrated into GRPO without extra objectives or trainable components, PROGRS improves Pass@1 on several math benchmarks (including MATH-500, AMC, AIME, MinervaMath, and OlympiadBench) and reaches stronger results with fewer rollouts.

Abstract

Mathematical reasoning in large language models has improved substantially with reinforcement learning using verifiable rewards, where final answers can be checked automatically and converted into reliable training signals. Most such pipelines optimize outcome correctness only, which yields sparse feedback for long, multi-step solutions and offers limited guidance on intermediate reasoning errors. Recent work therefore introduces process reward models (PRMs) to score intermediate steps and provide denser supervision. In practice, PRM scores are often imperfectly aligned with final correctness and can reward locally fluent reasoning that still ends in an incorrect answer. When optimized as absolute rewards, such signals can amplify fluent failure modes and induce reward hacking. We propose PROGRS, a framework that leverages PRMs while keeping outcome correctness dominant. PROGRS treats process rewards as relative preferences within outcome groups rather than absolute targets. We introduce outcome-conditioned centering, which shifts the PRM scores of incorrect trajectories to have zero mean within each prompt group; this removes systematic bias while preserving informative rankings. PROGRS combines a frozen quantile-regression PRM with a multi-scale coherence evaluator. We integrate the resulting centered process bonus into Group Relative Policy Optimization (GRPO) without auxiliary objectives or additional trainable components. Across MATH-500, AMC, AIME, MinervaMath, and OlympiadBench, PROGRS consistently improves Pass@1 over outcome-only baselines and achieves stronger performance with fewer rollouts. These results show that outcome-conditioned centering enables safe and effective use of process rewards for mathematical reasoning.
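The centering idea in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the function names, the bonus weight `beta`, and the exact way the centered bonus enters the GRPO advantage are assumptions for exposition. The only mechanics taken from the abstract are (a) shifting PRM scores of incorrect trajectories to zero mean within each prompt group, and (b) combining the centered bonus with the outcome reward inside a group-relative advantage.

```python
# Hedged sketch of outcome-conditioned centering + GRPO combination.
# All names and the weight `beta` are illustrative assumptions.
from statistics import mean, pstdev

def centered_process_bonus(prm_scores, correct):
    """Shift PRM scores of incorrect trajectories to zero mean within
    the prompt group; scores of correct trajectories are left intact.
    Centering removes systematic PRM bias on wrong answers while
    preserving their relative ranking."""
    wrong_scores = [s for s, c in zip(prm_scores, correct) if not c]
    shift = mean(wrong_scores) if wrong_scores else 0.0
    return [s if c else s - shift for s, c in zip(prm_scores, correct)]

def grpo_advantages(outcome_rewards, process_bonus, beta=0.1):
    """Group-relative advantage: outcome reward plus a small centered
    process bonus, normalized within the group. A small `beta` keeps
    outcome correctness the dominant signal."""
    total = [r + beta * b for r, b in zip(outcome_rewards, process_bonus)]
    mu, sigma = mean(total), pstdev(total)
    return [(t - mu) / (sigma + 1e-8) for t in total]
```

For a group of four rollouts where only the first answer is correct, e.g. `centered_process_bonus([0.9, 0.8, 0.7, 0.4], [True, False, False, False])`, the three incorrect scores are shifted to sum to zero while their ordering is unchanged, so a "fluent but wrong" trajectory can only be preferred relative to other wrong trajectories, never rewarded in absolute terms.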