Overconfidence and Calibration in Medical VQA: Empirical Findings and Hallucination-Aware Mitigation

arXiv cs.LG / 4/6/2026


Key Points

  • The paper reports a systematic empirical study of confidence calibration and overconfidence in medical vision-language models (VLMs) across multiple architectures (Qwen3-VL, InternVL3, LLaVA-NeXT), model scales (2B–38B), confidence prompting strategies, and three medical VQA benchmarks.
  • It finds that overconfidence persists across model families and is not eliminated by scaling or common confidence-related prompting methods (e.g., chain-of-thought and verbalized confidence variants).
  • Post-hoc calibration methods such as Platt scaling significantly reduce calibration error and outperform prompt-based confidence estimation approaches.
  • Because post-hoc calibration maps are strictly monotonic, they preserve the ranking of predictions and therefore leave AUROC (discriminative ranking quality) unchanged.
  • It introduces hallucination-aware calibration (HAC), which uses vision-grounded hallucination detection signals to refine confidence estimates. HAC improves both calibration and AUROC, with the largest gains on open-ended questions, supporting the use of calibrated confidence augmented by hallucination signals for more reliable medical VQA deployment.
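The interplay between the second and third findings can be made concrete with a toy experiment. The sketch below is purely illustrative and is not the paper's data or implementation: it generates synthetic overconfident answers, fits Platt scaling (a sigmoid on the logit of the raw confidence, trained here by plain gradient descent on log-loss), and shows that expected calibration error (ECE) drops while AUROC is untouched, since a strictly monotonic map cannot reorder predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "model confidences" for 1000 answers: overconfident even when wrong.
n = 1000
correct = rng.random(n) < 0.7                  # 70% of answers are actually correct
conf = np.where(correct,
                rng.uniform(0.80, 1.00, n),    # confident when right...
                rng.uniform(0.60, 0.95, n))    # ...but still confident when wrong

def ece(conf, correct, bins=10):
    """Expected Calibration Error: bin-weighted |accuracy - mean confidence|."""
    edges = np.linspace(0, 1, bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            err += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return err

def auroc(score, label):
    """Probability a random correct answer outranks a random incorrect one."""
    pos, neg = score[label], score[~label]
    d = pos[:, None] - neg[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

# Platt scaling: fit sigmoid(a * logit(conf) + b) to correctness via
# gradient descent on the logistic loss (kept dependency-free on purpose).
z = np.log(conf / (1 - conf))
a, b = 1.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(a * z + b)))
    g = p - correct                            # d(log-loss)/d(logit)
    a -= 0.1 * (g * z).mean()
    b -= 0.1 * g.mean()
cal = 1 / (1 + np.exp(-(a * z + b)))

print(f"ECE   raw={ece(conf, correct):.3f}  calibrated={ece(cal, correct):.3f}")
print(f"AUROC raw={auroc(conf, correct):.3f}  calibrated={auroc(cal, correct):.3f}")
```

Running this, ECE shrinks while the two AUROC values coincide exactly: with a positive fitted slope, the sigmoid is strictly increasing, so every pairwise comparison between a correct and an incorrect answer is preserved.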

Abstract

As vision-language models (VLMs) are increasingly deployed in clinical decision support, more than accuracy is required: knowing when to trust their predictions is equally critical. Yet, a comprehensive and systematic investigation into the overconfidence of these models remains notably scarce in the medical domain. We address this gap through a comprehensive empirical study of confidence calibration in VLMs, spanning three model families (Qwen3-VL, InternVL3, LLaVA-NeXT), three model scales (2B--38B), and multiple confidence estimation prompting strategies, across three medical visual question answering (VQA) benchmarks. Our study yields three key findings: First, overconfidence persists across model families and is not resolved by scaling or prompting strategies such as chain-of-thought and verbalized confidence variants. Second, simple post-hoc calibration approaches, such as Platt scaling, reduce calibration error and consistently outperform prompt-based strategies. Third, due to their (strict) monotonicity, these post-hoc calibration methods are inherently limited in improving the discriminative quality of predictions, leaving AUROC unchanged. Motivated by these findings, we investigate hallucination-aware calibration (HAC), which incorporates vision-grounded hallucination detection signals as complementary inputs to refine confidence estimates. We find that leveraging these hallucination signals improves both calibration and AUROC, with the largest gains on open-ended questions. Overall, our findings suggest post-hoc calibration as standard practice for medical VLM deployment over raw confidence estimates, and highlight the practical usefulness of hallucination signals to enable more reliable use of VLMs in medical VQA.
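Why can a hallucination signal improve AUROC where Platt scaling cannot? Because it is a second input feature: a model that sees both the raw confidence and a hallucination score can re-rank answers, not merely re-scale them. The sketch below is a hypothetical illustration of that idea, not the paper's HAC formulation; the data, the two-feature logistic model, and all distribution parameters are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
correct = rng.random(n) < 0.6

# Verbalized confidence: uniformly high (overconfident), weakly informative.
conf = np.clip(rng.normal(np.where(correct, 0.88, 0.82), 0.08, n), 0.01, 0.99)

# Hypothetical hallucination score: higher when the answer is not grounded
# in the image, so it carries ranking information confidence alone lacks.
halluc = np.clip(rng.normal(np.where(correct, 0.2, 0.6), 0.15, n), 0.0, 1.0)

# Two-feature logistic model on [logit(confidence), hallucination score],
# fit by gradient descent on the logistic loss.
X = np.column_stack([np.log(conf / (1 - conf)), halluc])
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - correct
    w -= 0.1 * (X * g[:, None]).mean(axis=0)
    b -= 0.1 * g.mean()
refined = 1 / (1 + np.exp(-(X @ w + b)))

def auroc(score, label):
    """Probability a random correct answer outranks a random incorrect one."""
    pos, neg = score[label], score[~label]
    d = pos[:, None] - neg[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

print(f"AUROC conf-only={auroc(conf, correct):.3f}  "
      f"with hallucination signal={auroc(refined, correct):.3f}")
```

Unlike a monotonic recalibration of confidence alone, the fitted model assigns a negative weight to the hallucination score and reorders predictions, so AUROC rises — mirroring the paper's finding that hallucination signals help most where raw confidence is least discriminative.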