Accurate and Reliable Uncertainty Estimates for Deterministic Predictions: Extensions to Under- and Overpredictions

arXiv stat.ML, April 13, 2026


Key Points

  • The paper addresses how to produce accurate and reliable probabilistic uncertainty estimates for computational models that make deterministic predictions, which is important for high-stakes engineering and scientific decisions.
  • It extends the ACCRUE framework to learn input-dependent, non-Gaussian uncertainty distributions rather than relying on restrictive assumptions such as Gaussian errors.
  • The proposed method models uncertainty using two-piece Gaussian and asymmetric Laplace forms, aiming to capture asymmetry and heavy-tailed behavior while staying flexible.
  • A neural network is trained with a loss function designed to balance predictive accuracy and reliability of the resulting uncertainty estimates.
  • Experiments on synthetic and real-world data indicate the approach better captures input-dependent uncertainty structure and improves probabilistic forecasts compared with existing methods, without requiring computationally expensive sampling.
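The two distribution families named above are standard parametric forms; a minimal numpy sketch of their log-densities follows. The parameter names (`sigma_l`, `sigma_r`, `tau`) are illustrative choices, not necessarily the paper's notation, and this is not the authors' implementation:

```python
import numpy as np

def two_piece_gaussian_logpdf(x, mu, sigma_l, sigma_r):
    """Log-density of a two-piece (split) Gaussian.

    Left of the mode `mu` the scale is sigma_l; right of it, sigma_r.
    The shared normaliser sqrt(2/pi)/(sigma_l + sigma_r) keeps the
    density continuous at the mode and integrating to one.
    """
    log_norm = np.log(np.sqrt(2.0 / np.pi) / (sigma_l + sigma_r))
    sigma = np.where(x < mu, sigma_l, sigma_r)
    return log_norm - 0.5 * ((x - mu) / sigma) ** 2

def asymmetric_laplace_logpdf(x, mu, sigma, tau):
    """Log-density of the asymmetric Laplace in its quantile
    parameterisation: `tau` in (0, 1) controls the skew (mu is the
    tau-quantile), and `sigma` sets the scale.
    """
    u = (x - mu) / sigma
    # "Check" (pinball) function: slope tau above mu, (1 - tau) below.
    check = u * (tau - (u < 0).astype(float))
    return np.log(tau * (1.0 - tau) / sigma) - check
```

In an input-dependent setting like the one the paper describes, a network would output these parameters per input and be trained on the corresponding negative log-likelihood; the asymmetry parameters are what let the fitted distribution treat under- and overpredictions differently.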

Abstract

Computational models support high-stakes decisions across engineering and science, and practitioners increasingly seek probabilistic predictions to quantify uncertainty in such models. Existing approaches generate predictions either by sampling input parameter distributions or by augmenting deterministic outputs with uncertainty representations, including distribution-free and distributional methods. However, sampling-based methods are often computationally prohibitive for real-time applications, and many existing uncertainty representations either ignore input dependence or rely on restrictive Gaussian assumptions that fail to capture asymmetry and heavy-tailed behavior. Therefore, we extend the ACCurate and Reliable Uncertainty Estimate (ACCRUE) framework to learn input-dependent, non-Gaussian uncertainty distributions, specifically two-piece Gaussian and asymmetric Laplace forms, using a neural network trained with a loss function that balances predictive accuracy and reliability. Through synthetic and real-world experiments, we show that the proposed approach captures input-dependent uncertainty structure and improves probabilistic forecasts relative to existing methods, while maintaining the flexibility to model skewed and non-Gaussian errors.