GRPO-VPS: Enhancing Group Relative Policy Optimization with Verifiable Process Supervision for Effective Reasoning

arXiv cs.LG / April 23, 2026


Key Points

  • The paper introduces an extension to Group Relative Policy Optimization (GRPO) that improves LLM reasoning by adding verifiable process supervision rather than relying on learned reward models.
  • It addresses GRPO’s weak credit assignment for intermediate reasoning steps by dividing generation into discrete segments and measuring segment-wise progress via the conditional probability the model assigns to the correct answer at each segment boundary.
  • The method is model-free: it tracks the model’s own verifiable belief in the correct answer along the reasoning trajectory (see the sketch after this list), avoiding expensive intermediate supervision from Monte Carlo rollouts or auxiliary models.
  • Experiments on mathematical and general-domain benchmarks show consistent improvements over GRPO, including higher accuracy and shorter reasoning lengths, indicating both effectiveness and generalization across models.
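To make the belief-probing idea concrete, here is a minimal Python sketch. It assumes a Hugging Face causal LM, segmentation on blank lines, and an appended "Answer: " probe suffix; all three are illustrative assumptions rather than the paper's exact protocol, and `Qwen/Qwen2.5-0.5B` is only a placeholder model choice.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def answer_logprob(prefix: str, answer: str) -> float:
    """Sum of log-probs of `answer` tokens, conditioned on `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, answer_ids], dim=1)
    logits = model(input_ids).logits
    # Logits at position t predict token t + 1, so the answer span is
    # scored by the positions starting one before it.
    start = prefix_ids.shape[1] - 1
    logprobs = F.log_softmax(
        logits[0, start:start + answer_ids.shape[1]], dim=-1)
    token_logprobs = logprobs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    return token_logprobs.sum().item()

def belief_trajectory(question: str, reasoning: str, answer: str) -> list[float]:
    """log P(correct answer | question + first k segments), for k = 0..K."""
    segments = [s for s in reasoning.split("\n\n") if s.strip()]
    prefix = question
    beliefs = [answer_logprob(prefix + "\n\nAnswer: ", answer)]  # before reasoning
    for seg in segments:
        prefix = prefix + "\n\n" + seg
        beliefs.append(answer_logprob(prefix + "\n\nAnswer: ", answer))
    return beliefs
```

Each entry in the returned trajectory is a log-probability; the per-segment progress signal is the difference between consecutive boundaries, which the second sketch (after the abstract) uses to reweight GRPO's advantage.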

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has advanced the reasoning capabilities of Large Language Models (LLMs) by leveraging direct outcome verification instead of learned reward models. Building on this paradigm, Group Relative Policy Optimization (GRPO) eliminates the need for critic models but suffers from indiscriminate credit assignment across intermediate steps, which limits its ability to identify effective reasoning strategies and induces overthinking. In this work, we introduce model-free, verifiable process supervision that probes the model's belief in the correct answer throughout its reasoning trajectory. By segmenting the generation into discrete steps and tracking the conditional probability of the correct answer appended at each segment boundary, we efficiently compute interpretable segment-wise progress measurements that refine GRPO's trajectory-level feedback. This approach enables more targeted and sample-efficient policy updates while avoiding intermediate supervision derived from costly Monte Carlo rollouts or auxiliary models. Experiments on mathematical and general-domain benchmarks show consistent gains over GRPO across diverse models: up to 2.6-point accuracy improvements and 13.7% reasoning-length reductions on math tasks, and up to 2.4 points and 4% on general-domain tasks, demonstrating strong generalization.
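The abstract's refinement of GRPO's trajectory-level feedback can be sketched as follows. The group-relative normalization is standard GRPO; the softmax-style weighting that redistributes a trajectory's advantage across segments in proportion to their belief gains is an assumption made for illustration, not the paper's exact formulation.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Standard GRPO: normalize outcome rewards within a sampled group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def refined_segment_advantages(outcome_adv: float, beliefs: np.ndarray) -> np.ndarray:
    """Redistribute one trajectory's advantage across its K segments.

    `beliefs` has length K + 1: log P(correct answer) before any reasoning,
    then after each of the K segments (e.g., from `belief_trajectory` above).
    Segments that raised belief in the correct answer receive more credit;
    the scaling preserves the mean per-segment advantage.
    """
    progress = np.diff(beliefs)                  # per-segment belief gain
    weights = np.exp(progress - progress.max())  # positive, numerically stable
    weights = weights / weights.sum()
    return outcome_adv * len(progress) * weights

# Toy usage: a group of 4 rollouts scored by binary outcome verification.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
adv = grpo_advantages(rewards)
beliefs = np.array([-9.2, -7.1, -3.4, -0.8])     # start + 3 segment boundaries
print(refined_segment_advantages(adv[0], beliefs))
```

Because the weights sum to one and are scaled by the segment count, the average per-segment signal matches plain GRPO while credit concentrates on the segments where belief in the correct answer actually increased.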