Deterministic Hallucination Detection in Medical VQA via Confidence-Evidence Bayesian Gain

arXiv cs.AI / 3/24/2026


Key Points

  • The paper addresses hallucinations in medical multimodal VQA systems, where models may produce answers that contradict the input image and could be unsafe for clinical use.
  • It argues that hallucinated responses leave a detectable signature in the model’s own token-level log-probabilities, specifically inconsistent confidence and low sensitivity to visual evidence.
  • It introduces Confidence-Evidence Bayesian Gain (CEBaG), a deterministic, self-contained hallucination detection approach that avoids stochastic sampling and external natural language inference models.
  • Across four medical MLLMs and three VQA benchmarks (16 settings), CEBaG achieves the best AUC in 13/16 settings and improves over Vision-Amplified Semantic Entropy (VASE) by an average of 8 AUC points.
  • The authors report that no task-specific hyperparameters are required, and they plan to release code upon acceptance.

Abstract

Multimodal large language models (MLLMs) have shown strong potential for medical Visual Question Answering (VQA), yet they remain prone to hallucinations, i.e., generating responses that contradict the input image, which poses serious risks in clinical settings. Current hallucination detection methods, such as Semantic Entropy (SE) and Vision-Amplified Semantic Entropy (VASE), require 10 to 20 stochastic generations per sample together with an external natural language inference model for semantic clustering, making them computationally expensive and difficult to deploy in practice. We observe that hallucinated responses exhibit a distinctive signature directly in the model's own log-probabilities: inconsistent token-level confidence and weak sensitivity to visual evidence. Based on this observation, we propose Confidence-Evidence Bayesian Gain (CEBaG), a deterministic hallucination detection method that requires no stochastic sampling, no external models, and no task-specific hyperparameters. CEBaG combines two complementary signals: token-level predictive variance, which captures inconsistent confidence across response tokens, and evidence magnitude, which measures how much the image shifts per-token predictions relative to text-only inference. Evaluated across four medical MLLMs and three VQA benchmarks (16 experimental settings), CEBaG achieves the highest AUC in 13 of 16 settings and improves over VASE by 8 AUC points on average, while being fully deterministic and self-contained. The code will be made available upon acceptance.
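To make the two signals concrete, here is a minimal sketch of how a score in this spirit could be computed from per-token log-probabilities. The function name, the use of plain variance and mean absolute shift, and the way the two signals are combined are all illustrative assumptions; the paper's actual CEBaG formulation (a Bayesian gain) may differ.

```python
import numpy as np

def cebag_like_score(logprobs_with_image, logprobs_text_only):
    """Toy hallucination score in the spirit of CEBaG's two signals.

    Assumptions (not from the paper): variance as the confidence-consistency
    measure, mean absolute log-prob shift as evidence magnitude, and a simple
    difference as the combination rule.
    """
    lp_img = np.asarray(logprobs_with_image, dtype=float)
    lp_txt = np.asarray(logprobs_text_only, dtype=float)

    # Signal 1: token-level predictive variance.
    # Hallucinated answers tend to show inconsistent per-token confidence.
    predictive_variance = lp_img.var()

    # Signal 2: evidence magnitude.
    # How much conditioning on the image shifts per-token predictions
    # relative to text-only inference; hallucinations shift little.
    evidence_magnitude = np.abs(lp_img - lp_txt).mean()

    # Higher variance and lower evidence sensitivity -> more hallucination-like.
    return predictive_variance - evidence_magnitude

# Grounded answer: steady confidence, large shift when the image is provided.
grounded = cebag_like_score([-0.10, -0.12, -0.11], [-2.0, -1.8, -2.2])
# Hallucinated answer: erratic confidence, barely affected by the image.
hallucinated = cebag_like_score([-0.10, -3.0, -0.20], [-0.10, -3.1, -0.25])
```

Under this toy scoring, the hallucinated response receives the higher score, matching the paper's intuition that the telltale signature is visible in the model's own log-probabilities without any stochastic sampling.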