EVPO: Explained Variance Policy Optimization for Adaptive Critic Utilization in LLM Post-Training

arXiv cs.LG / April 22, 2026

📰 News · Models & Research

Key Points

  • The paper addresses a key RL design choice in LLM post-training: whether to rely on a learned critic as a baseline for policy optimization, which can affect variance behavior in sparse-reward regimes.
  • It argues that in sparse rewards, a learned critic may add estimation noise that outweighs the state signal, thereby increasing (not reducing) advantage variance, and it provides a unified Kalman-filtering view of PPO vs. critic-free GRPO.
  • By framing baseline selection via explained variance (EV), the authors derive a batch-computable criterion: positive EV means the critic reduces variance, while zero/negative EV indicates the critic inflates variance.
  • They propose Explained Variance Policy Optimization (EVPO), which adaptively switches between critic-based and batch-mean advantage estimation at each step based on EV, guaranteeing no worse variance than the better option at that step.
  • Experiments on four tasks spanning classical control, agentic interaction, and mathematical reasoning show EVPO consistently outperforms both PPO and GRPO, with additional evidence that EV-based gating tracks critic maturation over training and that the theoretically derived zero EV threshold is empirically optimal.

Abstract

Reinforcement learning (RL) for LLM post-training faces a fundamental design choice: whether to use a learned critic as a baseline for policy optimization. Classical theory favors critic-based methods such as PPO for variance reduction, yet critic-free alternatives like GRPO have gained widespread adoption due to their simplicity and competitive performance. We show that in sparse-reward settings, a learned critic can inject estimation noise that exceeds the state signal it captures, increasing rather than reducing advantage variance. By casting baseline selection as a Kalman filtering problem, we unify PPO and GRPO as two extremes of the Kalman gain and prove that explained variance (EV), computable from a single training batch, identifies the exact boundary: positive EV indicates the critic reduces variance, while zero or negative EV signals that it inflates variance. Building on this insight, we propose Explained Variance Policy Optimization (EVPO), which monitors batch-level EV at each training step and adaptively switches between critic-based and batch-mean advantage estimation, provably achieving no greater variance than the better of the two at every step. Across four tasks spanning classical control, agentic interaction, and mathematical reasoning, EVPO consistently outperforms both PPO and GRPO regardless of which fixed baseline is stronger on a given task. Further analysis confirms that the adaptive gating tracks critic maturation over training and that the theoretically derived zero threshold is empirically optimal.
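The gating rule described above is simple to state in code. The sketch below is a minimal illustration, not the paper's implementation: it computes batch-level explained variance, EV = 1 − Var(R − V)/Var(R), and switches between critic-based and batch-mean advantages at the zero threshold. The function names (`explained_variance`, `evpo_advantages`) are my own, and details such as GRPO's per-group normalization are deliberately omitted.

```python
import numpy as np

def explained_variance(values: np.ndarray, returns: np.ndarray) -> float:
    """Batch-level EV = 1 - Var(returns - values) / Var(returns).

    Positive EV: the critic explains part of the return signal,
    so subtracting it reduces advantage variance. Zero/negative EV:
    the critic's estimation noise inflates variance instead.
    """
    var_ret = np.var(returns)
    if var_ret == 0.0:  # degenerate batch: no return variance to explain
        return 0.0
    return 1.0 - np.var(returns - values) / var_ret

def evpo_advantages(values: np.ndarray, returns: np.ndarray) -> np.ndarray:
    """Adaptive gating (sketch): critic baseline when EV > 0,
    otherwise fall back to a batch-mean baseline (GRPO-style)."""
    if explained_variance(values, returns) > 0.0:
        return returns - values          # critic-based advantages (PPO-style)
    return returns - returns.mean()      # batch-mean advantages (critic-free)
```

With a perfect critic (values equal to returns), EV is 1 and the advantages are zero; with an anti-correlated critic, EV goes negative and the gate falls back to the batch-mean baseline, matching the claim that the resulting variance is never worse than the better of the two options at that step.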