Possibilistic Predictive Uncertainty for Deep Learning

arXiv cs.AI / 5/4/2026


Key Points

  • The paper introduces DAPPr (Dirichlet-approximated possibilistic posterior predictions) to model epistemic uncertainty for deep neural networks that can be overconfident on unseen inputs.
  • It proposes a principled possibility-theory-based framework: define a possibilistic posterior over parameters, project it into prediction space using supremum operators, and approximate the result with learnable Dirichlet “possibility functions.”
  • By combining projection and approximation, the method yields a simple training objective with closed-form solutions, aiming to avoid the computational burden of full Bayesian approaches.
  • Experiments on multiple benchmarks show that DAPPr delivers competitive or better uncertainty quantification than state-of-the-art evidential deep learning methods while remaining computationally efficient.
  • The authors plan to release code at the provided GitHub repository to support reproducibility and adoption.
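The projection step described above can be illustrated with a toy sketch. This is an assumption based on the paper's high-level description, not the authors' implementation: given a finite set of candidate parameters with possibility degrees, the possibility of a class prediction is taken as the supremum of the parameter possibilities over all parameters that produce that prediction.

```python
import numpy as np

def project_possibility(thetas, pi_theta, predict, x, num_classes):
    """Project a possibilistic posterior over parameters onto class labels.

    Hypothetical illustration: pi_y(c) = sup { pi(theta) : predict(theta, x) = c }.
    """
    pi_y = np.zeros(num_classes)
    for theta, pi in zip(thetas, pi_theta):
        y = predict(theta, x)           # class predicted by this parameter
        pi_y[y] = max(pi_y[y], pi)      # supremum over matching parameters
    return pi_y

# Tiny 1-D "network": the sign of theta * x decides the class.
thetas = [-1.0, -0.5, 0.5, 1.0]
pi_theta = [0.2, 0.6, 1.0, 0.7]        # possibilistic posterior (sup = 1)
predict = lambda theta, x: int(theta * x > 0)

# Possibility 1.0 for class 1 and 0.6 for class 0: both classes remain
# somewhat possible, signalling epistemic uncertainty at this input.
print(project_possibility(thetas, pi_theta, predict, x=2.0, num_classes=2))
```

In possibility theory, a high possibility for more than one class (as here) indicates that the posterior has not ruled out competing hypotheses, which is exactly the epistemic signal the paper aims to capture.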

Abstract

Deep neural networks achieve impressive results across diverse applications, yet their overconfidence on unseen inputs necessitates reliable epistemic uncertainty modelling. Existing methods for uncertainty modelling face a fundamental dilemma: Bayesian approaches provide principled estimates but remain computationally prohibitive, while efficient second-order predictors lack rigorous derivations connecting their specific objectives to epistemic uncertainty quantification. To resolve this dilemma, we introduce Dirichlet-approximated possibilistic posterior predictions (DAPPr), a principled framework leveraging possibility theory. We define a possibilistic posterior over parameters, project this posterior to the prediction space via supremum operators, and approximate the projected posterior using learnable Dirichlet possibility functions. This projection-and-approximation strategy yields a simple training objective with closed-form solutions. Extensive experiments across diverse benchmarks demonstrate that our approach achieves competitive or superior uncertainty quantification performance compared to state-of-the-art evidential deep learning methods while maintaining both principled derivation and computational efficiency. Code will be available at https://github.com/MaxwellYaoNi/DAPPr.
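One plausible reading of a "Dirichlet possibility function" (an assumption for illustration; the paper's exact definition may differ) is a Dirichlet density rescaled so that its mode attains possibility 1, a standard way to turn a density into a possibility distribution over the probability simplex:

```python
import math

def dirichlet_possibility(p, alpha):
    """Max-normalised Dirichlet density at a point p on the simplex.

    Hypothetical sketch: pi(p) = Dir(p; alpha) / Dir(mode; alpha), so the
    mode gets possibility 1. Requires all alpha_i > 1 (interior mode).
    """
    a0, k = sum(alpha), len(alpha)
    mode = [(a - 1) / (a0 - k) for a in alpha]   # Dirichlet mode
    # The normalising constant B(alpha) cancels in the ratio f(p) / f(mode).
    log_ratio = sum((a - 1) * (math.log(pi) - math.log(m))
                    for a, pi, m in zip(alpha, p, mode))
    return math.exp(log_ratio)

alpha = [4.0, 2.0, 2.0]                     # learnable concentration parameters
print(dirichlet_possibility([0.6, 0.2, 0.2], alpha))  # at the mode → 1.0
```

Because the normalising constant cancels, evaluating such a possibility function needs no Gamma-function computation, which is consistent with the abstract's emphasis on closed-form solutions and computational efficiency.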