Council Mode: Mitigating Hallucination and Bias in LLMs via Multi-Agent Consensus

arXiv cs.CL / 4/6/2026


Key Points

  • The paper introduces “Council Mode,” a multi-agent consensus framework that mitigates LLM hallucinations and bias by querying multiple heterogeneous frontier models and synthesizing their outputs with a dedicated consensus model.
  • Council Mode is implemented in three phases: a triage classifier for routing by complexity, parallel generation across architecturally diverse LLMs, and structured synthesis that highlights agreement, disagreement, and unique findings.
  • The authors provide a mathematical formulation of the consensus mechanism and describe the overall system architecture, including an open-source AI workspace implementation.
  • Across multiple benchmarks, Council Mode reports a 35.9% relative reduction in hallucination rates on HaluEval and a 7.8-point improvement on TruthfulQA over the best single model, while also lowering bias variance across domains.
  • The study validates each component's contribution through extensive benchmark comparisons and ablation studies.
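The three-phase pipeline above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function names, the word-count triage heuristic, and the placeholder model calls are all assumptions introduced here for clarity.

```python
from concurrent.futures import ThreadPoolExecutor

def triage(query: str) -> str:
    """Phase 1 (hypothetical heuristic): route short queries to a single
    model and longer or question-form queries to the full council."""
    return "council" if len(query.split()) > 8 or "?" in query else "single"

def generate(model: str, query: str) -> str:
    """Placeholder for a call to one heterogeneous frontier model."""
    return f"{model}: answer to '{query}'"

def synthesize(drafts: list[str]) -> str:
    """Placeholder for the consensus model's structured synthesis."""
    return " | ".join(drafts)

def council_answer(query: str, models: list[str]) -> str:
    if triage(query) == "single":
        return generate(models[0], query)
    # Phase 2: parallel expert generation across diverse models.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        drafts = list(pool.map(lambda m: generate(m, query), models))
    # Phase 3: consensus synthesis over all drafts.
    return synthesize(drafts)
```

A simple query would be answered by one model, while a complex one fans out to every council member before synthesis; in the real system, `generate` and `synthesize` would be LLM API calls rather than string stubs.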

Abstract

Large Language Models (LLMs), particularly those employing Mixture-of-Experts (MoE) architectures, have achieved remarkable capabilities across diverse natural language processing tasks. However, these models frequently suffer from hallucinations -- generating plausible but factually incorrect content -- and exhibit systematic biases that are amplified by uneven expert activation during inference. In this paper, we propose the Council Mode, a novel multi-agent consensus framework that addresses these limitations by dispatching queries to multiple heterogeneous frontier LLMs in parallel and synthesizing their outputs through a dedicated consensus model. The Council pipeline operates in three phases: (1) an intelligent triage classifier that routes queries based on complexity, (2) parallel expert generation across architecturally diverse models, and (3) a structured consensus synthesis that explicitly identifies agreement, disagreement, and unique findings before producing the final response. We implement and evaluate this architecture within an open-source AI workspace. Our comprehensive evaluation across multiple benchmarks demonstrates that the Council Mode achieves a 35.9% relative reduction in hallucination rates on the HaluEval benchmark and a 7.8-point improvement on TruthfulQA compared to the best-performing individual model, while maintaining significantly lower bias variance across domains. We provide the mathematical formulation of the consensus mechanism, detail the system architecture, and present extensive empirical results with ablation studies.
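The structured synthesis step described in the abstract partitions model outputs into agreement, disagreement, and unique findings. As a toy proxy under the simplifying assumption that each model's answer has been reduced to a set of atomic claims (the paper's actual consensus mechanism is not reproduced here), the partition can be expressed with set operations:

```python
def partition_claims(claim_sets: list[set[str]]) -> dict[str, set[str]]:
    """Toy stand-in for structured consensus synthesis: a claim asserted
    by all models is 'agreement', by exactly one model is 'unique', and
    by some but not all models is 'disagreement'."""
    all_claims = set().union(*claim_sets)
    counts = {c: sum(c in s for s in claim_sets) for c in all_claims}
    n = len(claim_sets)
    return {
        "agreement": {c for c, k in counts.items() if k == n},
        "disagreement": {c for c, k in counts.items() if 1 < k < n},
        "unique": {c for c, k in counts.items() if k == 1},
    }
```

For three models asserting `{a, b}`, `{a, c}`, and `{a, b, d}`, this yields agreement `{a}`, disagreement `{b}`, and unique findings `{c, d}`; the consensus model would then weigh these buckets when producing the final response.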