Process Supervision of Confidence Margin for Calibrated LLM Reasoning

arXiv cs.LG / 4/28/2026

Key Points

  • The paper proposes Reinforcement Learning with Confidence Margin (RLCM), an RL framework that improves LLM reasoning by jointly optimizing answer correctness and the reliability of confidence estimates.
  • Unlike outcome-based rewards that can push models toward overconfidence, RLCM uses a “confidence margin” to separate correct from incorrect steps within a single reasoning trajectory (a minimal sketch follows this list).
  • Experiments across mathematical, code, logic, and science benchmarks show substantially better calibration while maintaining or improving overall accuracy.
  • The authors also demonstrate that calibrated confidence signals can improve downstream efficiency for conformal risk control and enable more effective confidence-weighted aggregation.
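
To make the margin idea concrete, the minimal Python sketch below computes a margin-style process reward for one trajectory. It is an illustration under stated assumptions, not the paper's exact formulation: `Step`, `margin_reward`, and `margin_weight` are hypothetical names, and per-step confidences and correctness labels are taken as given.

```python
from dataclasses import dataclass

@dataclass
class Step:
    confidence: float  # model-reported confidence in [0, 1] for this step
    correct: bool      # whether the step is judged correct

def margin_reward(steps: list[Step], margin_weight: float = 0.5) -> float:
    """Toy margin-enhanced process reward for a single trajectory."""
    correct_conf = [s.confidence for s in steps if s.correct]
    wrong_conf = [s.confidence for s in steps if not s.correct]

    # Outcome term: 1 only if every step in the trajectory is correct.
    outcome = 1.0 if not wrong_conf else 0.0

    # Margin term: reward a gap between the least-confident correct step
    # and the most-confident incorrect step within the same trajectory.
    if correct_conf and wrong_conf:
        margin = min(correct_conf) - max(wrong_conf)
    else:
        margin = 0.0  # no correct/incorrect contrast in this trajectory

    return outcome + margin_weight * margin

# Example: an overconfident wrong step makes the margin term negative.
trajectory = [Step(0.9, True), Step(0.95, False), Step(0.8, True)]
print(margin_reward(trajectory))  # 0.0 + 0.5 * (0.8 - 0.95) = -0.075
```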

Abstract

Scaling test-time computation with reinforcement learning (RL) has emerged as a reliable path to improving the reasoning ability of large language models (LLMs). Yet outcome-based rewards often incentivize models to be overconfident, leading to hallucinations, unreliable confidence-based control, and unnecessary compute allocation. We introduce Reinforcement Learning with Confidence Margin (RLCM), a calibration-aware RL framework that jointly optimizes correctness and confidence reliability via a margin-enhanced process reward over intermediate-budget completions. Rather than aligning confidence to correctness likelihoods, RLCM encourages the model to widen the confidence margin between correct and incorrect steps within a single reasoning trajectory. Across mathematical, code, logic, and science benchmarks, our method substantially improves calibration while maintaining or improving accuracy. We further show that, with calibrated confidence signals, the resulting models enable more efficient conformal risk control and effective confidence-weighted aggregation.
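
As a hedged illustration of the confidence-weighted aggregation mentioned above (`confidence_weighted_vote` is a hypothetical name; the paper's actual aggregation rule may differ), the sketch below sums the stated confidence behind each distinct answer across sampled trajectories and returns the answer with the most total mass. With well-calibrated confidences, such a vote can overturn a plain majority formed by low-confidence samples.

```python
from collections import defaultdict

def confidence_weighted_vote(samples: list[tuple[str, float]]) -> str:
    """Pick the answer whose sampled trajectories carry the most total
    confidence; `samples` holds (answer, confidence) pairs."""
    mass: dict[str, float] = defaultdict(float)
    for answer, confidence in samples:
        mass[answer] += confidence
    return max(mass, key=mass.get)

# A plain majority vote would pick "7" (two samples), but calibrated
# confidence flips the decision to the single high-confidence "42".
print(confidence_weighted_vote([("42", 0.9), ("7", 0.3), ("7", 0.4)]))
```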