Calibration-Aware Policy Optimization for Reasoning LLMs

arXiv cs.LG / April 15, 2026


Key Points

  • The paper analyzes why GRPO-style optimization for reasoning LLMs can worsen relative calibration, showing it stems from uncertainty-agnostic advantage estimation that misaligns optimization gradients with calibration objectives.
  • It introduces Calibration-Aware Policy Optimization (CAPO), which uses a logistic AUC surrogate loss with theoretically grounded consistency and regret bounds to enable uncertainty-aware advantage estimation.
  • CAPO adds a noise-masking mechanism to stabilize training while jointly improving calibration and reasoning accuracy.
  • Experiments on mathematical reasoning benchmarks report up to 15% calibration gains for CAPO-1.5B with accuracy comparable to or better than GRPO, plus up to 5% improvements on inference-time scaling tasks.
  • When the model is allowed to abstain on low-confidence outputs, CAPO achieves a Pareto-optimal precision–coverage trade-off, indicating potential for hallucination mitigation.
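The pairwise logistic AUC surrogate mentioned above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function name and the use of raw confidence scores are hypothetical, and the paper's version operates on policy-derived uncertainty estimates during RL training.

```python
import math


def logistic_auc_surrogate(conf_correct, conf_incorrect):
    """Pairwise logistic surrogate for the AUC (illustrative sketch).

    The AUC counts how often a correct response outscores an incorrect
    one; the 0/1 comparison is relaxed into the differentiable logistic
    loss log(1 + exp(-(s_pos - s_neg))), averaged over all pairs.
    """
    total, n_pairs = 0.0, 0
    for s_pos in conf_correct:          # confidences of correct responses
        for s_neg in conf_incorrect:    # confidences of incorrect responses
            # Penalize pairs where the incorrect response is (nearly)
            # as confident as the correct one.
            total += math.log1p(math.exp(-(s_pos - s_neg)))
            n_pairs += 1
    return total / n_pairs if n_pairs else 0.0
```

Minimizing this loss pushes confidences on correct responses above those on incorrect ones, which is exactly the ranking the AUC rewards; the logistic relaxation is what makes consistency and regret-bound arguments possible.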

Abstract

Group Relative Policy Optimization (GRPO) enhances LLM reasoning but often induces overconfidence, where incorrect responses yield lower perplexity than correct ones, degrading relative calibration as measured by the Area Under the Curve (AUC). Existing approaches either yield limited calibration improvements or sacrifice reasoning accuracy. We first prove that this degradation in GRPO-style algorithms stems from their uncertainty-agnostic advantage estimation, which inevitably misaligns optimization gradients with the calibration objective, improving accuracy at the expense of calibration. We then propose Calibration-Aware Policy Optimization (CAPO), which adopts a logistic AUC surrogate loss that is theoretically consistent and admits a regret bound, enabling uncertainty-aware advantage estimation. By further incorporating a noise-masking mechanism, CAPO achieves stable learning dynamics that jointly optimize calibration and accuracy. Experiments on multiple mathematical reasoning benchmarks show that CAPO-1.5B significantly improves calibration by up to 15% while achieving accuracy comparable to or better than GRPO, and further boosts accuracy on downstream inference-time scaling tasks by up to 5%. Moreover, when allowed to abstain under low-confidence conditions, CAPO achieves a Pareto-optimal precision–coverage trade-off, highlighting its practical value for hallucination mitigation.
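To make "relative calibration as measured by AUC" concrete, the sketch below computes the probability that a correct response receives a higher confidence score than an incorrect one (ties count half). The function name and inputs are hypothetical; in the paper's setting the scores would come from model-derived confidences such as negative perplexity.

```python
def relative_calibration_auc(confidences, is_correct):
    """AUC of confidence as a ranker of correctness (illustrative sketch).

    Returns the fraction of (correct, incorrect) response pairs in which
    the correct one is more confident; 1.0 is perfect relative
    calibration, 0.5 is chance, below 0.5 means incorrect answers tend
    to be the more confident ones (the GRPO failure mode).
    """
    pos = [c for c, ok in zip(confidences, is_correct) if ok]
    neg = [c for c, ok in zip(confidences, is_correct) if not ok]
    if not pos or not neg:
        return float("nan")  # undefined without both outcome classes
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This is also the quantity that makes abstention useful: when the AUC is high, thresholding on confidence and abstaining below it trades coverage for precision along a favorable frontier.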