VGS-Decoding: Visual Grounding Score Guided Decoding for Hallucination Mitigation in Medical VLMs

arXiv cs.CV / 3/24/2026


Key Points

  • Medical vision-language models can hallucinate in clinically risky ways because they rely on language priors instead of visual evidence during generation.
  • The paper introduces Visual Grounding Score Guided Decoding (VGS-Decoding), a training-free inference method that reweights token probabilities using a per-token Visual Grounding Score (VGS).
  • VGS estimates how visually dependent each generated token is by comparing token probability behavior under original versus distorted images.
  • Decoding amplifies visually grounded tokens and suppresses hallucinated ones, offering per-token adaptive control without fixed-weight contrastive tuning.
  • Experiments on MIMIC-Diff-VQA and VQA-RAD with models including LLaVA-Med, CheXagent, and MedGemma show consistent improvements (up to +9.12% overall gain) with only ~2× inference overhead and no extra training; code will be released upon acceptance.

Abstract

Medical Vision-Language Models (VLMs) often hallucinate by generating responses based on language priors rather than visual evidence, posing risks in clinical applications. We propose Visual Grounding Score Guided Decoding (VGS-Decoding), a training-free method to mitigate hallucinations during inference. Our key insight is that hallucinated tokens maintain or increase their probability when visual information is degraded, while visually grounded tokens decrease in probability. We introduce the Visual Grounding Score (VGS), which measures each token's visual dependency by comparing distributions from original and distorted images. During decoding, we reweight probabilities by amplifying visually grounded tokens while suppressing hallucinations. Unlike fixed-weight contrastive methods, VGS-Decoding provides per-token adaptive control. Experiments on MIMIC-Diff-VQA and VQA-RAD across LLaVA-Med, CheXagent, and MedGemma demonstrate consistent improvements, with up to +9.12% overall gain and +8.98% in open-ended recall, while introducing only ~2× inference overhead and no additional training, making it practical for clinical deployment. Upon acceptance, code will be released publicly to facilitate reproducibility.
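The core mechanism can be sketched in a few lines: run the model's next-token distribution twice, once on the original image and once on a distorted copy, score each token by how much its probability depends on the intact image, and reweight accordingly. The log-ratio form of the score and the `alpha` scaling below are illustrative assumptions, not the paper's exact formulation; the paper only states that hallucinated tokens hold or gain probability under distortion while grounded tokens lose it.

```python
import numpy as np

def vgs_reweight(p_orig, p_distorted, alpha=0.5, eps=1e-8):
    """Sketch of VGS-style per-token reweighting (assumed log-ratio score).

    p_orig      -- next-token probabilities given the original image
    p_distorted -- next-token probabilities given a distorted image
    """
    # Grounded tokens drop in probability when the image is degraded
    # (positive score); hallucinated tokens hold or rise (negative score).
    vgs = np.log(p_orig + eps) - np.log(p_distorted + eps)
    # Amplify visually grounded tokens, suppress hallucinated ones,
    # with a per-token adjustment rather than a fixed contrastive weight.
    logits = np.log(p_orig + eps) + alpha * vgs
    w = np.exp(logits - logits.max())
    return w / w.sum()

# Toy example: token 0 is visually grounded (its probability collapses
# under distortion); token 2 is driven by a language prior (it rises).
p_orig      = np.array([0.50, 0.20, 0.30])
p_distorted = np.array([0.10, 0.20, 0.70])
p_new = vgs_reweight(p_orig, p_distorted)
print(p_new.argmax())  # → 0 (the grounded token is boosted further)
```

In a real decoder the two forward passes per step (original plus distorted image) are what produce the roughly 2× inference overhead the authors report.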