Evaluating the Quality of the Quantified Uncertainty for (Re)Calibration of Data-Driven Regression Models

arXiv stat.ML · April 23, 2026


Key Points

  • The paper argues that safety-critical regression models must provide not only accurate predictions but also reliable uncertainty estimates; this property, known as calibration, is essential for risk-aware decision-making.
  • It reviews and categorizes existing regression calibration metrics and benchmarks them in a way that is independent of specific modeling or recalibration methods.
  • Controlled experiments across real, synthetic, and intentionally miscalibrated datasets show that calibration metrics often conflict, even yielding contradictory conclusions for the same recalibration outcome.
  • The authors warn that these inconsistencies can enable cherry-picking metrics to produce misleading claims of success, underscoring the importance of careful metric selection.
  • In their tests, the Expected Normalized Calibration Error (ENCE) and the Coverage Width-based Criterion (CWC) emerge as the most dependable calibration metrics; both are sketched in the code below.
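
To make the first of these metrics concrete, here is a minimal sketch of how ENCE is commonly computed, following the binning scheme of Levi et al. (2019): samples are grouped into equal-count bins by predicted standard deviation, and each bin's root mean variance (RMV) is compared with its empirical RMSE. The function name, bin count, and equal-count binning are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ence(y_true, y_pred, sigma_pred, n_bins=10):
    """Expected Normalized Calibration Error (sketch).

    Averages the normalized gap between predicted and observed
    error magnitude per bin:
        ENCE = (1/K) * sum_k |RMV_k - RMSE_k| / RMV_k
    """
    order = np.argsort(sigma_pred)
    bins = np.array_split(order, n_bins)  # equal-count bins by predicted sigma
    total = 0.0
    for idx in bins:
        rmv = np.sqrt(np.mean(sigma_pred[idx] ** 2))               # predicted error level
        rmse = np.sqrt(np.mean((y_true[idx] - y_pred[idx]) ** 2))  # observed error level
        total += abs(rmv - rmse) / rmv
    return total / len(bins)
```

A perfectly calibrated model yields ENCE = 0, since the predicted uncertainty in each bin matches the observed error; normalizing each bin's gap by its RMV makes the metric scale-free.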

Abstract

In safety-critical applications, data-driven models must not only be accurate but also provide reliable uncertainty estimates. This property, commonly referred to as calibration, is essential for risk-aware decision-making. In regression, a wide variety of calibration metrics and recalibration methods have emerged. However, these metrics differ significantly in their definitions, assumptions and scales, making it difficult to interpret and compare results across studies. Moreover, most recalibration methods have been evaluated using only a small subset of metrics, leaving it unclear whether improvements generalize across different notions of calibration. In this work, we systematically extract and categorize regression calibration metrics from the literature and benchmark these metrics independently of specific modelling methods or recalibration approaches. Through controlled experiments with real-world, synthetic and artificially miscalibrated data, we demonstrate that calibration metrics frequently produce conflicting results. Our analysis reveals substantial inconsistencies: many metrics disagree in their evaluation of the same recalibration result, and some even indicate contradictory conclusions. This inconsistency is particularly concerning as it potentially allows cherry-picking of metrics to create misleading impressions of success. We identify the Expected Normalized Calibration Error (ENCE) and the Coverage Width-based Criterion (CWC) as the most dependable metrics in our tests. Our findings highlight the critical role of metric selection in calibration research.
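
To complement the ENCE sketch above, the following shows one widely used formulation of CWC, introduced by Khosravi et al. for neural-network prediction intervals: the normalized average interval width (PINAW) is multiplied by an exponential penalty that activates when the empirical coverage (PICP) falls below the nominal level. The penalty strength eta and the exact gating rule vary across the literature, so treat this as an illustrative variant rather than the paper's exact definition.

```python
import numpy as np

def cwc(y_true, lower, upper, mu=0.95, eta=50.0):
    """Coverage Width-based Criterion (one common variant).

    CWC = PINAW * (1 + gamma * exp(-eta * (PICP - mu))),
    where gamma = 1 if PICP < mu (under-coverage) and 0 otherwise.
    """
    covered = (y_true >= lower) & (y_true <= upper)
    picp = covered.mean()                            # empirical coverage
    pinaw = np.mean(upper - lower) / np.ptp(y_true)  # width normalized by target range
    gamma = 1.0 if picp < mu else 0.0                # penalize only under-coverage
    return pinaw * (1.0 + gamma * np.exp(-eta * (picp - mu)))
```

For a Gaussian predictive model, the 95% interval bounds could be taken as lower = y_pred - 1.96 * sigma_pred and upper = y_pred + 1.96 * sigma_pred. Lower CWC is better: the criterion rewards narrow intervals, but only when the nominal coverage level is actually met.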