Benchmarking Deflection and Hallucination in Large Vision-Language Models

arXiv cs.AI / 4/15/2026


Key Points

  • The paper argues that current vision-language model benchmarks miss key behaviors in retrieval-based QA, especially cases where visual and textual evidence conflict or retrieved knowledge is incomplete.
  • It introduces a dynamic data curation pipeline to keep benchmark difficulty from degrading over time as LVLMs improve and can answer more questions without retrieval.
  • It proposes VLM-DeflectionBench with 2,775 samples across diverse multimodal retrieval settings to test how models handle insufficient or misleading evidence by generating deflections.
  • The authors define a fine-grained evaluation protocol with four scenarios to separate parametric memorization from retrieval robustness.
  • Experiments on 20 state-of-the-art LVLMs show that models often fail to deflect when evidence is noisy or misleading, underscoring the need to measure “how they behave when they don’t know.”
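The core behavior being measured — answer when the evidence suffices, deflect when it does not — can be sketched with a toy scorer. This is purely illustrative: the paper's actual evaluation protocol is not detailed here, and the deflection phrases and function names below are hypothetical, not taken from VLM-DeflectionBench.

```python
import re

# Hypothetical deflection cues; a real protocol would likely use an
# LLM judge or a curated phrase set rather than this keyword heuristic.
DEFLECTION_PATTERNS = [
    r"\bcannot answer\b",
    r"\bnot enough (?:information|evidence)\b",
    r"\bi don'?t know\b",
    r"\binsufficient\b",
]

def is_deflection(response: str) -> bool:
    """Return True if the response looks like a deflection rather than an answer."""
    text = response.lower()
    return any(re.search(p, text) for p in DEFLECTION_PATTERNS)

def score_sample(response: str, evidence_sufficient: bool, gold_answer: str) -> bool:
    """A sample counts as correct if the model answers (and matches the gold
    answer) when evidence is sufficient, and deflects when it is not —
    mirroring the benchmark's goal of rewarding 'knowing when you don't know'."""
    if evidence_sufficient:
        return (not is_deflection(response)) and gold_answer.lower() in response.lower()
    return is_deflection(response)
```

Under a scorer like this, a model that always answers confidently scores zero on every insufficient-evidence sample, which is exactly the failure mode the authors report for most of the 20 evaluated LVLMs.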

Abstract

Large Vision-Language Models (LVLMs) increasingly rely on retrieval to answer knowledge-intensive multimodal questions. Existing benchmarks overlook conflicts between visual and textual evidence and the importance of generating deflections (e.g., "Sorry, I cannot answer...") when retrieved knowledge is incomplete. These benchmarks also suffer from rapid obsolescence, as growing LVLM training sets allow models to answer many questions without retrieval. We address these gaps with three contributions. First, we propose a dynamic data curation pipeline that preserves benchmark difficulty over time by filtering for genuinely retrieval-dependent samples. Second, we introduce VLM-DeflectionBench, a benchmark of 2,775 samples spanning diverse multimodal retrieval settings, designed to probe model behavior under conflicting or insufficient evidence. Third, we define a fine-grained evaluation protocol with four scenarios that disentangle parametric memorization from retrieval robustness. Experiments across 20 state-of-the-art LVLMs indicate that models usually fail to deflect in the presence of noisy or misleading evidence. Our results highlight the need to evaluate not only what models know, but how they behave when they do not, and serve as a reusable and extensible benchmark for reliable KB-VQA evaluation. All resources will be publicly available upon publication.