Policy Improvement Reinforcement Learning

arXiv cs.LG / 4/2/2026


Key Points

  • The paper argues that common RL-with-verifiable-rewards approaches are open-loop and can drift or collapse because they optimize from batch/group reward statistics without verifying whether updates actually improve the model.
  • It introduces Policy Improvement Reinforcement Learning (PIRL), reframing post-training as an explicit objective to maximize cumulative policy improvement across iterations and proving alignment with final task performance.
  • It further proposes Policy Improvement Policy Optimization (PIPO), a closed-loop method that uses retrospective verification against a sliding-window baseline to reinforce beneficial updates and suppress harmful ones.
  • The authors prove that PIPO performs ascent on the PIRL objective in expectation, and report experiments on mathematical reasoning benchmarks showing improved stability and performance over GRPO and related variants.
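The alignment claim in the second point admits a simple telescoping sketch, assuming per-iteration improvement is measured as the change in expected task performance $J$ (notation here is illustrative, not necessarily the paper's):

```latex
\sum_{t=1}^{T} \bigl( J(\pi_t) - J(\pi_{t-1}) \bigr)
  = J(\pi_T) - J(\pi_0).
```

Since $J(\pi_0)$ is fixed by the initial model, maximizing cumulative improvement over iterations is equivalent to maximizing final performance $J(\pi_T)$.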

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has become a central post-training paradigm for improving the reasoning capabilities of large language models. Yet existing methods share a common blind spot: they optimize policies based on instantaneous group-level or batch-level statistics without ever verifying whether the resulting update actually improved the model. This open-loop design -- updating in isolation at each step, guided only by within-group (batch) reward signals -- means optimization can drift or collapse with no mechanism to detect and correct these failures. We argue that the missing ingredient is policy improvement feedback: the ability to measure and optimize inter-iteration progress directly. To this end, we introduce Policy Improvement Reinforcement Learning (PIRL), a framework that replaces surrogate reward maximization with the explicit objective of maximizing cumulative policy improvement across iterations, and prove this temporal objective is perfectly aligned with maximizing final task performance. Building on PIRL, we propose Policy Improvement Policy Optimization (PIPO), which implements closed-loop optimization through retrospective verification. At each iteration, PIPO evaluates whether the previous update yielded genuine improvement against a sliding-window historical baseline, then actively reinforces beneficial updates and suppresses the harmful ones -- transforming an open-loop process into a self-correcting one. We provide theoretical analysis showing that PIPO performs ascent on the PIRL objective in expectation, and experiments on mathematical reasoning benchmarks demonstrate improved stability and performance over GRPO and its variants.
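As a rough illustration of the closed-loop idea, the retrospective-verification step might be sketched as follows. This is a toy reconstruction, not the paper's implementation: the class name, the simple mean-over-window baseline, and the binary reinforce/suppress signal are all assumptions for exposition.

```python
from collections import deque


class RetrospectiveVerifier:
    """Toy sketch of PIPO-style retrospective verification: compare the
    current iteration's mean reward against a sliding-window historical
    baseline, then emit a sign used to reinforce the previous update
    (positive) or suppress it (negative). Illustrative only."""

    def __init__(self, window_size=8):
        # Sliding window of recent per-iteration mean rewards.
        self.window = deque(maxlen=window_size)

    def verify(self, mean_reward):
        # Baseline: average reward over the recent history window
        # (falls back to the current reward when history is empty).
        if self.window:
            baseline = sum(self.window) / len(self.window)
        else:
            baseline = mean_reward
        self.window.append(mean_reward)
        # Genuine improvement over the baseline -> reinforce (+1);
        # regression below the baseline -> suppress/correct (-1).
        return 1.0 if mean_reward >= baseline else -1.0
```

In a training loop, this signal could scale or gate the gradient step taken from the previous iteration, turning the otherwise open-loop update into a self-correcting one.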