Auditing Frontier Vision-Language Models for Trustworthy Medical VQA: Grounding Failures, Format Collapse, and Domain Adaptation

arXiv cs.AI / 5/1/2026


Key Points

  • The paper audits five recent frontier and grounding-aware vision-language models on Medical VQA and finds uniformly weak localization of anatomical and pathological targets (the best model reaches a mean IoU of only 0.23; a sketch of the metrics follows this list), along with clinically risky laterality (left/right) confusion.
  • In a two-step self-grounding pipeline, where the same model first localizes the relevant region and then answers conditioned on it (see the sketch after the abstract), VQA accuracy drops for every model, driven by both inaccurate localization and severe format-compliance failures that make outputs unparseable.
  • When predicted bounding boxes are replaced with ground-truth annotations, VQA accuracy recovers and even improves, indicating that the core failure lies in the perception/localization module rather than in the question-answer decomposition strategy itself.
  • As a domain-adaptation follow-up, supervised fine-tuning of Qwen 2.5 VL on combined Med-VQA training data yields the best reported SLAKE open-ended recall (85.5%; a sketch of this recall metric also follows the abstract) among comparable methods, though whether this fully closes the trustworthiness bottleneck remains open.
  • Overall, the study identifies grounding quality (bounding-box localization reliability) as a primary bottleneck for trustworthy clinical deployment of VLMs under realistic failure conditions.
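
For readers who want the grounding metrics pinned down: below is a minimal sketch of how mean IoU and Acc@0.5 are conventionally computed for axis-aligned bounding boxes. The `(x1, y1, x2, y2)` box convention and function names are illustrative assumptions, not taken from the paper.

```python
from typing import Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) corner coordinates

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_scores(preds: Sequence[Box], gts: Sequence[Box],
                     thresh: float = 0.5) -> Tuple[float, float]:
    """Mean IoU, and Acc@thresh = fraction of predictions with IoU >= thresh."""
    scores = [iou(p, g) for p, g in zip(preds, gts)]
    return sum(scores) / len(scores), sum(s >= thresh for s in scores) / len(scores)
```

Under this definition, the reported 0.23 mean IoU and 19.1% Acc@0.5 mean that even the best model's boxes overlap the target region by less than a quarter on average, and fewer than one in five clear the usual detection threshold.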

Abstract

Deploying vision-language models (VLMs) in clinical settings demands auditable behavior under realistic failure conditions, yet the failure landscape of frontier VLMs on specialized medical inputs is poorly characterized. We audit five recent frontier and grounding-aware VLMs (Gemini 2.5 Pro, GPT-5, o3, GLM-4.5V, Qwen 2.5 VL) on Medical VQA along two trust-relevant axes. Perception: all models localize anatomical and pathological targets poorly (the best model reaches only 0.23 mean IoU and 19.1% Acc@0.5) and exhibit clinically dangerous laterality confusion. Pipeline integration: a self-grounding pipeline, where the same model localizes then answers, degrades VQA accuracy for every model, driven by both inaccurate localization and format-compliance failures under the two-step prompt (parse failure rises to 70-99% for Gemini and GPT-5 on VQA-RAD). Replacing predicted boxes with ground-truth annotations recovers and improves VQA accuracy, consistent with the failure residing in the perception module rather than in the decomposition itself. These observational findings identify grounding quality as a primary trustworthiness bottleneck in our SLAKE bounding-box setting. As a complementary fine-tuning follow-up, supervised fine-tuning of Qwen 2.5 VL on combined Med-VQA training data attains the highest reported SLAKE open-ended recall (85.5%) among comparable methods, suggesting that the VQA-level gap is tractable with domain adaptation; whether this also closes the perception/trustworthiness bottleneck is left to future work.
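
To make the two-step pipeline and the ground-truth-box ablation concrete, here is a minimal sketch of their general shape, written against a generic `ask(prompt, image) -> str` model call. The prompt wording, the `[x1, y1, x2, y2]` output format, and the `parse_box` helper are hypothetical illustrations, not the paper's exact protocol.

```python
import re
from typing import Callable, Optional, Tuple

Box = Tuple[int, int, int, int]       # (x1, y1, x2, y2)
AskFn = Callable[[str, object], str]  # any VLM call: (prompt, image) -> text

def parse_box(text: str) -> Optional[Box]:
    """Extract the first '[x1, y1, x2, y2]' from model output; None marks a
    format-compliance failure (the parse-failure mode the audit reports
    rising to 70-99% for Gemini and GPT-5 on VQA-RAD)."""
    m = re.search(r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]", text)
    return (int(m[1]), int(m[2]), int(m[3]), int(m[4])) if m else None

def self_grounded_vqa(ask: AskFn, image: object, question: str,
                      oracle_box: Optional[Box] = None) -> str:
    """Step 1: localize the relevant region; step 2: answer conditioned on it.
    Passing oracle_box substitutes the ground-truth annotation (the ablation
    under which VQA accuracy recovers and improves)."""
    box = oracle_box
    if box is None:
        raw = ask("Return only the bounding box [x1, y1, x2, y2] of the region "
                  f"relevant to this question: {question}", image)
        box = parse_box(raw)
        if box is None:
            return "PARSE_FAILURE"  # counted against accuracy in the audit
    return ask(f"Focusing on the region {list(box)}, answer: {question}", image)
```

The audit's key contrast maps onto the `oracle_box` branch: with ground-truth boxes substituted, accuracy recovers and improves, which localizes the failure to step 1 rather than to the decomposition itself.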
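
On the fine-tuning result: "open-ended recall" on SLAKE is commonly computed as token-level recall, i.e. the fraction of ground-truth answer tokens that appear in the prediction, averaged over open-ended questions. The sketch below assumes that convention and simple lowercased whitespace tokenization; the paper's exact metric definition may differ.

```python
from typing import Sequence

def token_recall(prediction: str, reference: str) -> float:
    """Fraction of reference-answer tokens found in the prediction."""
    ref = reference.lower().split()
    pred = set(prediction.lower().split())
    return sum(t in pred for t in ref) / len(ref) if ref else 0.0

def open_ended_recall(preds: Sequence[str], refs: Sequence[str]) -> float:
    """Corpus-level score: mean per-question token recall (e.g. 0.855 = 85.5%)."""
    return sum(token_recall(p, r) for p, r in zip(preds, refs)) / len(refs)
```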