Near-Future Policy Optimization

arXiv cs.LG · April 23, 2026


Key Points

  • The paper addresses a key bottleneck in reinforcement learning with verifiable rewards (RLVR): obtaining off-policy trajectories that are both strong enough to raise the learning ceiling and close enough to be effectively absorbed during on-policy exploration.
  • It proposes Near-Future Policy Optimization (NPO), a mixed-policy method that generates auxiliary trajectories from the model’s own “near-future” checkpoints from the same training run, balancing trajectory quality against variance cost.
  • The authors introduce AutoNPO, an adaptive variant that monitors online training signals to decide when to apply interventions and automatically selects the guide checkpoint that maximizes the effective learning signal S = Q/V.
  • Experiments on Qwen3-VL-8B-Instruct with GRPO show performance gains: NPO improves average results from 57.88 to 62.84, while AutoNPO further reaches 63.15 and accelerates convergence.
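The checkpoint-selection rule behind AutoNPO can be sketched in a few lines: among candidate "near-future" checkpoints, pick the one that maximizes S = Q/V. The data class, the concrete proxies for Q (mean verifiable reward) and V (variance cost of absorbing the off-policy data), and all numbers below are illustrative assumptions, not the paper's exact estimators.

```python
# Hypothetical sketch of AutoNPO-style guide selection: maximize S = Q / V.
# Q proxies trajectory quality (e.g., mean verifiable reward); V proxies the
# variance cost of off-policy data (e.g., importance-weight variance relative
# to the current policy). All names and values here are illustrative.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    step: int
    quality: float   # Q: estimated quality of this checkpoint's rollouts
    variance: float  # V: estimated variance cost vs. the current policy

def select_guide(candidates: list[Checkpoint]) -> Checkpoint:
    """Return the candidate with the largest effective signal S = Q / V."""
    return max(candidates, key=lambda c: c.quality / c.variance)

candidates = [
    Checkpoint(step=1000, quality=0.55, variance=0.10),  # close but weak
    Checkpoint(step=3000, quality=0.70, variance=0.12),  # balanced
    Checkpoint(step=9000, quality=0.80, variance=0.40),  # strong but far
]
best = select_guide(candidates)
print(best.step)  # → 3000 (0.70 / 0.12 ≈ 5.83 beats 5.5 and 2.0)
```

The middle checkpoint wins: it trades a little quality for a much lower variance cost, which is exactly the balance the S = Q/V criterion encodes.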

Abstract

Reinforcement learning with verifiable rewards (RLVR) has become a core post-training recipe. Introducing suitable off-policy trajectories into on-policy exploration accelerates RLVR convergence and raises the performance ceiling, yet finding a source of such trajectories remains the key challenge. Existing mixed-policy methods either import trajectories from external teachers (high-quality but distributionally far) or replay past training trajectories (close but capped in quality), and neither simultaneously satisfies the strong-enough (higher Q, more new knowledge to learn) and close-enough (lower V, more readily absorbed) conditions required to maximize the effective learning signal S = Q/V. We propose **N**ear-Future **P**olicy **O**ptimization (**NPO**), a simple mixed-policy scheme that learns from a policy's own near-future self: a later checkpoint from the same training run is a natural source of auxiliary trajectories that is both stronger than the current policy and closer than any external source, directly balancing trajectory quality against variance cost. We validate NPO through two manual interventions, early-stage bootstrapping and late-stage plateau breakthrough, and further propose **AutoNPO**, an adaptive variant that automatically triggers interventions from online training signals and selects the guide checkpoint that maximizes S. On Qwen3-VL-8B-Instruct with GRPO, NPO improves average performance from 57.88 to 62.84, and AutoNPO pushes it to 63.15, raising the final performance ceiling while accelerating convergence.
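The mixed-policy idea in the abstract can be made concrete with a minimal batch-construction sketch: a fraction of each on-policy rollout batch is replaced with auxiliary trajectories from a "near-future" checkpoint of the same run. The function name, the mix ratio, and the string-valued rollouts are illustrative assumptions; the paper's actual mixing and weighting scheme is not specified here.

```python
# Minimal sketch of a mixed-policy batch in the spirit of NPO: combine
# on-policy rollouts from the current policy with auxiliary trajectories
# from a stronger "near-future" checkpoint. guide_frac and the sampling
# strategy are illustrative assumptions, not the paper's exact recipe.
import random

def mixed_batch(on_policy_rollouts, guide_rollouts, guide_frac=0.25, seed=0):
    """Replace a guide_frac fraction of the batch with guide trajectories."""
    rng = random.Random(seed)
    n_guide = int(len(on_policy_rollouts) * guide_frac)
    # Keep most of the batch on-policy, borrow the rest from the guide.
    keep = rng.sample(on_policy_rollouts, len(on_policy_rollouts) - n_guide)
    borrow = rng.sample(guide_rollouts, n_guide)
    return keep + borrow

batch = mixed_batch([f"on{i}" for i in range(8)],
                    [f"guide{i}" for i in range(8)])
print(len(batch), sum(x.startswith("guide") for x in batch))  # → 8 2
```

Because the guide checkpoint comes from the same training run, its trajectories stay close to the current policy's distribution, keeping the variance cost V low while still raising the quality Q of the mixed batch.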