Online Reasoning Calibration: Test-Time Training Enables Generalizable Conformal LLM Reasoning

arXiv cs.LG / 4/2/2026


Key Points

  • The paper introduces Online Reasoning Calibration (ORCA), which uses conformal prediction combined with test-time training to calibrate how LLMs sample at inference time, addressing miscalibration and compute inefficiency.
  • ORCA employs a meta-learning procedure that updates a calibration module per input, enabling more reliable confidence estimates when reasoning patterns or prompt distributions shift across stages or between development and deployment.
  • It provides theoretical guarantees (conformal risk control) and shows empirically improved efficiency and generalization across multiple reasoning tasks compared with static calibration baselines.
  • At δ=0.1, ORCA boosts Qwen2.5-32B efficiency with up to 47.5% savings using supervised labels and 40.7% savings using self-consistency labels on in-distribution tasks.
  • In zero-shot out-of-domain evaluation, it raises MATH-500 savings from 24.8% (static calibration) to 67.0% while keeping empirical error low, with consistent trends across model families and downstream benchmarks, and the code is publicly available.
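The conformal-prediction ingredient behind these guarantees can be illustrated with a standard split-conformal quantile: a threshold is calibrated on held-out scores so that, at risk level δ, a fresh score clears it with controlled probability. The sketch below is a generic illustration under the usual exchangeability assumption, not ORCA's actual calibration module; `conformal_threshold` and the stopping rule are hypothetical names:

```python
import math

def conformal_threshold(cal_scores, delta):
    """Split-conformal quantile: the k-th smallest of n calibration scores,
    with k = ceil((n + 1) * (1 - delta)). Under exchangeability, a fresh
    score exceeds this threshold with probability at most delta."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - delta))
    if k > n:  # too few calibration points for this risk level
        return float("inf")
    return sorted(cal_scores)[k - 1]

def stop_sampling(confidence, threshold):
    """Illustrative stopping rule: halt test-time sampling once the model's
    confidence clears the conformally calibrated threshold."""
    return confidence >= threshold

# Calibration scores would come from a held-out set of reasoning traces.
threshold = conformal_threshold([0.42, 0.55, 0.61, 0.70, 0.73,
                                 0.78, 0.81, 0.86, 0.90, 0.94], delta=0.1)
```

The compute savings reported above come from stopping rules of this flavor: once an answer's calibrated confidence passes the threshold, further samples are skipped, while the conformal guarantee keeps the empirical error near δ.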

Abstract

While test-time scaling has enabled large language models to solve highly difficult tasks, state-of-the-art results come at exorbitant compute costs. These inefficiencies can be attributed to the miscalibration of post-trained language models and the lack of calibration in popular sampling techniques. Here, we present Online Reasoning Calibration (ORCA), a framework for calibrating the sampling process that draws upon conformal prediction and test-time training. Specifically, we introduce a meta-learning procedure that updates the calibration module for each input. This allows us to provide valid confidence estimates under distributional shift, e.g., in thought patterns that occur across different stages of reasoning, or in prompt distributions between model development and deployment. ORCA not only provides theoretical guarantees on conformal risks, but also empirically shows higher efficiency and generalization across different reasoning tasks. At risk level δ=0.1, ORCA improves Qwen2.5-32B efficiency on in-distribution tasks, with savings of up to 47.5% with supervised labels and 40.7% with self-consistency labels. Under zero-shot out-of-domain settings, it improves MATH-500 savings from the static calibration baseline's 24.8% to 67.0% while maintaining a low empirical error rate, and the same trend holds across model families and downstream benchmarks. Our code is publicly available at https://github.com/wzekai99/ORCA.