CoE: Collaborative Entropy for Uncertainty Quantification in Agentic Multi-LLM Systems
arXiv cs.AI / 3/31/2026
Key Points
- The paper identifies a limitation in current uncertainty estimation for multi-LLM systems: most methods measure uncertainty within individual models but fail to capture semantic disagreement across models in the collaboration.
- It introduces Collaborative Entropy (CoE), a unified information-theoretic metric defined over a shared semantic cluster space that combines intra-model semantic entropy with inter-model divergence to the ensemble mean.
- CoE is positioned as a system-level uncertainty measure (not a weighted ensemble predictor) designed to quantify collaborative confidence and disagreement among multiple LLMs.
- The authors analyze key theoretical properties of CoE, including non-negativity and zero uncertainty under perfect semantic consensus, and study behavior in edge cases like per-model collapse to delta distributions.
- Experiments on TriviaQA and SQuAD with LLaMA-3.1-8B-Instruct, Qwen-2.5-7B-Instruct, and Mistral-7B-Instruct show that CoE improves uncertainty estimation over standard entropy and divergence baselines, with gains that grow as more heterogeneous models are added; the authors also demonstrate a training-free CoE-guided coordination heuristic.
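The two-term structure described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's exact formulation: it assumes each of the M models yields a probability distribution over the same K semantic clusters, takes the intra-model term to be the average semantic entropy, and takes the inter-model term to be the average KL divergence from each model's distribution to the ensemble mean. The function name `collaborative_entropy` and the exact combination rule are assumptions for illustration.

```python
import numpy as np

def collaborative_entropy(dists, eps=1e-12):
    """Sketch of a CoE-style metric (assumed form, not the paper's exact one).

    dists: (M, K) array -- M models' probability distributions over a
    shared space of K semantic clusters.
    Returns intra-model semantic entropy (averaged over models) plus the
    average KL divergence from each model to the ensemble mean.
    """
    p = np.asarray(dists, dtype=float)
    p = p / p.sum(axis=1, keepdims=True)   # normalize each model's distribution
    p_bar = p.mean(axis=0)                 # ensemble-mean distribution
    # Intra-model term: average semantic entropy across models.
    intra = -(p * np.log(p + eps)).sum(axis=1).mean()
    # Inter-model term: average KL(p_m || p_bar), i.e. disagreement
    # of each model with the ensemble consensus.
    inter = (p * (np.log(p + eps) - np.log(p_bar + eps))).sum(axis=1).mean()
    return intra + inter

# Perfect semantic consensus: all models collapse to the same cluster -> CoE ~ 0.
consensus = np.array([[1.0, 0.0, 0.0]] * 3)
# Confident but disagreeing models: intra term vanishes, inter term is positive.
disagree = np.eye(3)
```

Under this sketch the stated properties fall out directly: the metric is non-negative (up to the `eps` smoothing), it is exactly zero under perfect semantic consensus, and when every model collapses to a delta distribution the intra-model term vanishes, so any remaining uncertainty is pure inter-model disagreement.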