
How Uncertainty Estimation Scales with Sampling in Reasoning Models

arXiv cs.AI / 3/20/2026


Key Points

  • The study investigates uncertainty estimation for reasoning language models under extended chain-of-thought, using parallel sampling as a fully black-box approach with verbalized confidence and self-consistency as the uncertainty signals (a sketch of both signals follows this list).
  • It evaluates across three reasoning models and 17 tasks spanning mathematics, STEM, and humanities to characterize how these uncertainty signals scale with sampling.
  • The results show that both self-consistency and verbalized confidence improve with sampling, but self-consistency has lower initial discrimination and trails verbalized confidence under moderate sampling; the key gains come from combining signals.
  • A hybrid estimator using just two samples increases AUROC by up to +12 points on average and outperforms either signal alone even at larger sampling budgets, though returns diminish at scale.
  • The effects are domain-dependent, with mathematics showing higher uncertainty quality, stronger complementarity, and faster scaling than STEM or humanities.
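
The paper treats both signals as quantities computable purely from parallel samples, without access to model internals. As a rough illustration (not the paper's exact recipe), here is a minimal Python sketch assuming self-consistency is the majority-vote agreement rate across sampled answers and verbalized confidence is the mean of the model's self-reported confidences; the function names and toy values are hypothetical:

```python
from collections import Counter

def self_consistency(answers: list[str]) -> float:
    """Fraction of parallel samples that agree with the majority answer.

    Higher agreement is read as higher confidence (lower uncertainty).
    """
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)

def mean_verbalized_confidence(confidences: list[float]) -> float:
    """Average of the model's self-reported confidences (each in [0, 1]),
    one per sampled chain of thought."""
    return sum(confidences) / len(confidences)

# Hypothetical outputs from four parallel samples of one question.
answers = ["42", "42", "41", "42"]
confidences = [0.90, 0.80, 0.40, 0.85]

print(self_consistency(answers))                # 0.75
print(mean_verbalized_confidence(confidences))  # 0.7375
```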

Abstract

Uncertainty estimation is critical for deploying reasoning language models, yet remains poorly understood under extended chain-of-thought reasoning. We study parallel sampling as a fully black-box approach, using verbalized confidence and self-consistency as uncertainty signals. Across three reasoning models and 17 tasks spanning mathematics, STEM, and humanities, we characterize how these signals scale with sampling. Both self-consistency and verbalized confidence improve with sampling in reasoning models, but self-consistency exhibits lower initial discrimination and lags behind verbalized confidence under moderate sampling. Most uncertainty gains, however, arise from signal combination: with just two samples, a hybrid estimator improves AUROC by up to +12 points on average and already outperforms either signal alone, even when that signal is scaled to much larger budgets, after which returns diminish. These effects are domain-dependent: in mathematics, the native domain of RLVR-style post-training, reasoning models achieve higher uncertainty quality and exhibit both stronger complementarity and faster scaling than in STEM or humanities.
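
The abstract does not spell out the hybrid's combination rule, so the following is only a sketch of the general idea: a convex combination of the two per-question scores, evaluated by how well each score separates correct from incorrect answers (AUROC). The 50/50 weight, the scores, and the labels below are hypothetical toy values chosen to illustrate complementarity:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hybrid_confidence(sc: np.ndarray, vc: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Convex combination of self-consistency (sc) and verbalized-confidence
    (vc) scores, both in [0, 1]. The equal weighting is a stand-in, not the
    paper's published rule."""
    return w * sc + (1.0 - w) * vc

# Hypothetical per-question scores and correctness labels (1 = correct).
sc      = np.array([0.9, 0.4, 0.5, 0.6])
vc      = np.array([0.5, 0.6, 0.9, 0.3])
correct = np.array([1,   0,   1,   0])

for name, score in [("self-consistency", sc),
                    ("verbalized confidence", vc),
                    ("hybrid", hybrid_confidence(sc, vc))]:
    print(f"{name}: AUROC = {roc_auc_score(correct, score):.2f}")
# self-consistency: AUROC = 0.75
# verbalized confidence: AUROC = 0.75
# hybrid: AUROC = 1.00
```

In this toy setup each signal misranks a different correct/incorrect pair, so the combination recovers both orderings. That is the kind of complementarity the paper reports, strongest in mathematics.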