Credal Concept Bottleneck Models for Epistemic-Aleatoric Uncertainty Decomposition
arXiv cs.AI / 4/28/2026
📰 News · Models & Research
Key Points
- The paper introduces CREDENCE, a new Concept Bottleneck Model (CBM) framework that decomposes concept-level uncertainty into epistemic (reducible) and aleatoric (irreducible) components.
- CREDENCE represents each concept as a credal prediction (a probability interval), enabling uncertainty estimation grounded in the model’s own probabilistic outputs.
- Epistemic uncertainty is derived from disagreement across diverse concept heads, while aleatoric uncertainty is estimated using a dedicated ambiguity output trained to reflect annotator disagreement when available.
- The approach is designed to be actionable, supporting decision policies such as automating low-uncertainty cases, collecting more data for high-epistemic cases, routing high-aleatoric cases to human review, and abstaining when both are high.
- Experiments across multiple tasks show that epistemic uncertainty correlates with prediction errors, while aleatoric uncertainty tracks annotator disagreement, providing a signal that error rates alone do not capture.
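The mechanics described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the use of interval width as the epistemic score, the ambiguity-head value, and the thresholds are all assumptions made for clarity.

```python
# Hypothetical sketch of credal concept-level uncertainty decomposition,
# loosely following the CREDENCE description in the key points above.

def credal_interval(head_probs):
    """Credal prediction for one concept: the interval spanned by the
    probabilities emitted by the diverse concept heads."""
    return (min(head_probs), max(head_probs))

def epistemic_uncertainty(head_probs):
    """Disagreement across heads, measured here as credal interval width."""
    lo, hi = credal_interval(head_probs)
    return hi - lo

def decide(epistemic, aleatoric, eps_thresh=0.2, ale_thresh=0.2):
    """Illustrative decision policy covering the four regimes in the summary
    (thresholds are arbitrary placeholders)."""
    if epistemic < eps_thresh and aleatoric < ale_thresh:
        return "automate"           # both low: trust the model
    if epistemic >= eps_thresh and aleatoric < ale_thresh:
        return "collect-more-data"  # reducible uncertainty dominates
    if epistemic < eps_thresh and aleatoric >= ale_thresh:
        return "human-review"       # irreducible ambiguity dominates
    return "abstain"                # both high

# Example: three concept heads disagree on one concept's probability,
# while a separate ambiguity head (assumed output) reports low ambiguity.
heads = [0.55, 0.80, 0.95]
ale = 0.05                          # assumed ambiguity-head output
eps = epistemic_uncertainty(heads)  # 0.95 - 0.55 = 0.40
print(decide(eps, ale))             # -> collect-more-data
```

Note the intended behavior: high epistemic but low aleatoric uncertainty routes a case toward more data collection, since head disagreement is in principle reducible with additional training signal.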