Multi-Agent Reasoning with Consistency Verification Improves Uncertainty Calibration in Medical MCQA

arXiv cs.CL / 3/26/2026


Key Points

  • The study tackles miscalibrated confidence in clinical AI with a multi-agent medical MCQA framework that improves uncertainty calibration, yielding a confidence signal reliable enough to support safe deferral decisions.
  • Four domain specialist agents (respiratory, cardiology, neurology, gastroenterology) generate independent answers using Qwen2.5-7B-Instruct, then each answer is checked via a two-phase self-verification process that outputs specialist confidence scores (S-scores).
  • S-score weighted fusion selects the final answer while calibrating the reported confidence, with calibration improvements measured using metrics like ECE.
  • Experiments on MedQA-USMLE and MedMCQA (including high-disagreement subsets) show ECE reductions of 49–74% across settings, while maintaining reasonable accuracy and improving AUROC in the MedQA-250 setting.
  • Ablation results indicate that Two-Phase Verification mainly drives calibration gains, whereas multi-agent reasoning contributes most to accuracy improvements.
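
The fusion step in the key points above can be sketched as follows. The paper does not specify the exact weighting rule here, so the `s_score_fusion` function, the per-answer mass accumulation, and the mass-normalized confidence are illustrative assumptions rather than the authors' implementation:

```python
# Hypothetical sketch of S-score weighted fusion: each specialist agent
# contributes a candidate answer plus a Specialist Confidence Score (S-score).
from collections import defaultdict

def s_score_fusion(specialist_outputs):
    """Fuse (answer, s_score) pairs from the specialist agents.

    Assumption: the final answer is the option with the largest total
    S-score mass, and the reported (calibrated) confidence is that
    winning mass divided by the total mass across all options.
    """
    mass = defaultdict(float)
    for answer, s_score in specialist_outputs:
        mass[answer] += s_score          # accumulate support per option
    winner = max(mass, key=mass.get)     # option with the most S-score mass
    total = sum(mass.values())
    confidence = mass[winner] / total if total > 0 else 0.0
    return winner, confidence

# Example: three of four specialists agree on option "B".
answer, conf = s_score_fusion([("B", 0.8), ("B", 0.6), ("C", 0.4), ("B", 0.7)])
# answer is "B"; confidence is 2.1 / 2.5 ≈ 0.84
```

Note that disagreement among specialists directly lowers the reported confidence under this scheme, which is one plausible way weighted fusion can calibrate as well as select.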

Abstract

Miscalibrated confidence scores are a practical obstacle to deploying AI in clinical settings. A model that is always overconfident offers no useful signal for deferral. We present a multi-agent framework that combines domain-specific specialist agents with Two-Phase Verification and S-Score Weighted Fusion to improve both calibration and discrimination in medical multiple-choice question answering. Four specialist agents (respiratory, cardiology, neurology, gastroenterology) generate independent diagnoses using Qwen2.5-7B-Instruct. Each diagnosis is then subjected to a two-phase self-verification process that measures internal consistency and produces a Specialist Confidence Score (S-score). The S-scores drive a weighted fusion strategy that selects the final answer and calibrates the reported confidence. We evaluate across four experimental settings, covering 100-question and 250-question high-disagreement subsets of both MedQA-USMLE and MedMCQA. Calibration improvement is the central finding, with ECE reduced by 49-74% across all four settings, including the harder MedMCQA benchmark where these gains persist even when absolute accuracy is constrained by knowledge-intensive recall demands. On MedQA-250, the full system achieves ECE = 0.091 (74.4% reduction over the single-specialist baseline) and AUROC = 0.630 (+0.056) at 59.2% accuracy. Ablation analysis identifies Two-Phase Verification as the primary calibration driver and multi-agent reasoning as the primary accuracy driver. These results establish that consistency-based verification produces more reliable uncertainty estimates across diverse medical question types, providing a practical confidence signal for deferral in safety-critical clinical AI applications.
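
Expected calibration error (ECE), the headline metric above, is a standard binned estimator: predictions are grouped by confidence, and the gap between average confidence and empirical accuracy is averaged across bins, weighted by bin size. A generic sketch (not code from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin size / n) * |accuracy - avg confidence|.

    confidences: list of predicted confidences in [0, 1]
    correct:     list of 0/1 indicators (1 = prediction was right)
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # bins are (lo, hi], with the first bin closed at 0
        idx = [i for i, c in enumerate(confidences)
               if (c > lo or b == 0) and c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece

# Example: two predictions at 0.9 confidence (one right) and two at 0.6
# (both right) give ECE = 0.5*|0.5-0.9| + 0.5*|1.0-0.6| = 0.4.
ece = expected_calibration_error([0.9, 0.9, 0.6, 0.6], [1, 0, 1, 1])
```

An overconfident model lands far from the diagonal in every populated bin, which is exactly the failure mode the abstract argues makes deferral signals useless.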