Ensemble-Based Dirichlet Modeling for Predictive Uncertainty and Selective Classification
arXiv stat.ML / 4/8/2026
Key Points
- Cross-entropy-trained neural classifiers are accurate but do not directly provide reliable predictive uncertainty, and softmax scores for the correct class can vary across independent training runs.
- The paper proposes an ensemble-based Dirichlet parameter estimation method that uses a method-of-moments estimator (optionally followed by a maximum-likelihood refinement) to produce explicit Dirichlet predictive distributions.
- By deriving uncertainty from ensembles of softmax outputs, the approach avoids the sensitivity of Evidential Deep Learning to evidential loss design choices such as loss formulation, priors, and activations.
- Experiments across multiple datasets indicate that the ensemble-derived Dirichlet uncertainty is more stable across training runs and settings.
- These gains carry over to uncertainty-driven tasks such as prediction confidence scoring and selective classification, where the uncertainty estimates directly drive decisions.
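The paper's exact estimator is not reproduced here, but the core idea in the points above can be sketched: fit a Dirichlet to the stack of per-member softmax vectors via the method of moments (each class variance gives an estimate of the precision α₀), then use the fitted precision as a confidence score for selective classification. The function names, the log-space averaging of per-class precision estimates, and the use of α₀ as the abstention score are illustrative assumptions, not the paper's specification, which additionally allows a maximum-likelihood refinement.

```python
import numpy as np

def dirichlet_mom(probs, eps=1e-12):
    """Method-of-moments Dirichlet fit (illustrative sketch).

    probs: (M, K) array of probability vectors, e.g. softmax outputs
    from M ensemble members for one input. Returns alpha, shape (K,).
    """
    m = probs.mean(axis=0)            # per-class means, E[p_k] = alpha_k / alpha_0
    v = probs.var(axis=0, ddof=1)     # per-class variances
    # Each class k yields a precision estimate via
    # Var[p_k] = m_k (1 - m_k) / (alpha_0 + 1).
    mask = v > eps
    s = m[mask] * (1.0 - m[mask]) / v[mask] - 1.0
    s = s[s > 0]
    # Average the per-class precision estimates in log space (a common
    # stability heuristic; the paper may combine them differently).
    alpha0 = float(np.exp(np.log(s).mean()))
    return alpha0 * m

def selective_accuracy(probs_list, labels, coverage=0.8):
    """Selective classification: keep the `coverage` fraction of inputs
    with the highest Dirichlet precision alpha_0, report accuracy there.

    probs_list: (N, M, K) ensemble softmax outputs for N inputs.
    """
    alphas = np.array([dirichlet_mom(p) for p in probs_list])
    alpha0 = alphas.sum(axis=1)       # precision used as confidence score
    preds = alphas.argmax(axis=1)
    keep = np.argsort(-alpha0)[: int(coverage * len(labels))]
    return float((preds[keep] == np.asarray(labels)[keep]).mean())
```

A large α₀ means the ensemble members agree (low spread in the softmax vectors), so the Dirichlet is sharply peaked; abstaining on low-α₀ inputs is one simple way to operationalize the selective-classification use case described above.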