Complementing Self-Consistency with Cross-Model Disagreement for Uncertainty Quantification
arXiv cs.AI / 4/21/2026
Key Points
- The paper shows that self-consistency-based aleatoric uncertainty (AU) can fail when LLMs are overconfident and repeatedly produce the same incorrect answer across samples.
- It finds that cross-model semantic disagreement is higher on incorrect answers precisely in the low-AU regime, i.e., when self-consistency makes the model look confident, suggesting a complementary signal for uncertainty.
- The authors propose an epistemic uncertainty (EU) method for black-box settings that uses only generated text from a small, scale-matched model ensemble and measures a similarity gap between inter-model and intra-model semantic scores.
- By defining total uncertainty (TU) as AU + EU, the method improves ranking calibration and selective abstention across multiple instruction-tuned models and long-form tasks, and better flags confident failures (see the code sketch after this list).
- The study also analyzes when EU is most effective using agreement and complementarity diagnostics, indicating practical conditions for applying the approach.
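The AU/EU/TU arithmetic in these key points can be made concrete. Below is a minimal Python sketch, not the authors' implementation: it stands in for the paper's semantic scorer with sentence-embedding cosine similarity (the `all-MiniLM-L6-v2` encoder from sentence-transformers is an arbitrary choice), computes AU as one minus the mean pairwise similarity of a single model's samples, EU as the intra- minus inter-model similarity gap over a small ensemble, and TU as their sum. All function names and the exact gap formula here are assumptions for illustration.

```python
import numpy as np
from itertools import combinations
from sentence_transformers import SentenceTransformer

# Stand-in for the paper's semantic scorer (assumption: cosine similarity
# over sentence embeddings approximates semantic agreement between answers).
_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def mean_pairwise_sim(texts_a, texts_b=None):
    """Mean cosine similarity over all pairs within one list (intra-model)
    or across two lists (inter-model)."""
    if texts_b is None:
        assert len(texts_a) >= 2, "need at least two samples for intra-model similarity"
        emb = _encoder.encode(texts_a, normalize_embeddings=True)
        sims = [emb[i] @ emb[j] for i, j in combinations(range(len(emb)), 2)]
    else:
        ea = _encoder.encode(texts_a, normalize_embeddings=True)
        eb = _encoder.encode(texts_b, normalize_embeddings=True)
        sims = (ea @ eb.T).ravel()
    return float(np.mean(sims))

def aleatoric_uncertainty(samples):
    """Self-consistency AU: low agreement among one model's own samples -> high AU."""
    return 1.0 - mean_pairwise_sim(samples)

def epistemic_uncertainty(ensemble_samples):
    """EU as an intra- vs inter-model similarity gap, using only generated text
    from a small ensemble. ensemble_samples: list of per-model sample lists."""
    intra = np.mean([mean_pairwise_sim(s) for s in ensemble_samples])
    inter = np.mean([mean_pairwise_sim(a, b)
                     for a, b in combinations(ensemble_samples, 2)])
    # Models that agree with themselves but not with each other -> high EU.
    return max(0.0, float(intra - inter))

def total_uncertainty(ensemble_samples, target_idx=0):
    """TU = AU + EU, usable for ranking answers or selective abstention."""
    au = aleatoric_uncertainty(ensemble_samples[target_idx])
    eu = epistemic_uncertainty(ensemble_samples)
    return au + eu
```

For selective abstention, one would rank answers by TU and withhold those above a threshold tuned on held-out data; the clamp at zero and the equal weighting of AU and EU are likewise illustrative choices, not the paper's.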