Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling

arXiv cs.LG / 4/9/2026


Key Points

  • The paper argues that semantic uncertainty quantification in language model QA has been under-addressed by focusing mainly on discrimination rather than calibration.
  • It evaluates both calibration and discrimination across multiple confidence measures and finds that common fixed-temperature heuristics yield systematically miscalibrated, weakly discriminative confidence distributions.
  • The authors propose token-level temperature scaling with a single optimized scalar temperature, arguing that this restricted parameterization provides a suitable inductive bias.
  • Extensive experiments show that this scalar temperature scaling improves semantic calibration and discrimination, and also improves downstream entropy on question-answering tasks.
  • The method reportedly outperforms heuristic baselines and more expressive token-level recalibration approaches in the evaluated QA settings.
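The core recipe in the key points — fit one scalar temperature by minimizing held-out negative log-likelihood over token-level logits — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the grid-search fitting procedure and the toy data (labels sampled from logits softened at a true temperature of 2) are assumptions for demonstration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(temperature, logits, labels):
    """Average negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.25, 5.0, 96)):
    """Pick the single scalar T minimizing held-out NLL (simple grid search;
    a 1-D convex optimizer would also work)."""
    losses = [nll(t, logits, labels) for t in grid]
    return float(grid[int(np.argmin(losses))])

# Toy demo: logits are "overconfident" relative to the label distribution,
# which was generated at a softer temperature of 2, so the fitted T should
# land near 2 rather than 1.
rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 10)) * 5.0        # sharp raw logits
probs = softmax(logits / 2.0)                     # true labels are softer
labels = np.array([rng.choice(10, p=p) for p in probs])
T = fit_temperature(logits, labels)
```

Because the hypothesis class is a single scalar, the fit cannot overfit the held-out set in the way a per-token recalibrator can, which is the inductive-bias argument the authors make.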

Abstract

Calibration is central to reliable semantic uncertainty quantification, yet prior work has largely focused on discrimination, neglecting calibration. As calibration and discrimination capture distinct aspects of uncertainty, focusing on discrimination alone yields an incomplete picture. We address this gap by systematically evaluating both aspects across a broad set of confidence measures. We show that current approaches, particularly fixed-temperature heuristics, produce systematically miscalibrated and poorly discriminative semantic confidence distributions. We demonstrate that optimising a single scalar temperature, which, we argue, provides a suitable inductive bias, is a surprisingly simple yet effective solution. Our exhaustive evaluation confirms that temperature scaling consistently improves semantic calibration, discrimination, and downstream entropy, outperforming both heuristic baselines and more expressive token-level recalibration methods on question-answering tasks.
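To make the "semantic" side of the abstract concrete, the following sketch shows how a fitted temperature could feed into a semantic-entropy-style confidence: scale each token's logits by 1/T, score sampled answers with length-normalized log-probabilities, pool probability mass over clusters of semantically equivalent answers, and take the entropy over clusters. The function names, the length normalization, and the toy data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def scaled_sequence_logprob(token_logits, token_ids, T):
    """Length-normalized log-probability of one sampled answer under
    token-level temperature scaling (divide logits by T before softmax)."""
    z = token_logits / T
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return log_probs[np.arange(len(token_ids)), token_ids].mean()

def semantic_entropy(answer_logprobs, cluster_ids):
    """Shannon entropy over semantic clusters: renormalize answer
    probabilities, sum mass within each cluster of paraphrases,
    then compute entropy over the cluster distribution."""
    p = np.exp(answer_logprobs)
    p = p / p.sum()
    mass = {}
    for pi, c in zip(p, cluster_ids):
        mass[c] = mass.get(c, 0.0) + pi
    q = np.array(list(mass.values()))
    return float(-(q * np.log(q)).sum())

# Toy demo: 3 sampled answers of 4 tokens each over a 50-token vocabulary;
# answers 0 and 1 are treated as paraphrases (same semantic cluster).
rng = np.random.default_rng(1)
T = 1.8                                                  # assumed fitted temperature
logits = [rng.normal(size=(4, 50)) for _ in range(3)]    # per-answer token logits
ids = [rng.integers(0, 50, size=4) for _ in range(3)]    # generated token ids
lps = np.array([scaled_sequence_logprob(l, i, T) for l, i in zip(logits, ids)])
H = semantic_entropy(lps, cluster_ids=[0, 0, 1])
```

With two semantic clusters the entropy is bounded by ln 2; a well-calibrated temperature changes the answer probabilities feeding this quantity, which is why the abstract reports downstream entropy improvements alongside calibration and discrimination.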