DO-Bench: An Attributable Benchmark for Diagnosing Object Hallucination in Vision-Language Models

arXiv cs.CV / 4/28/2026


Key Points

  • The paper introduces DO-Bench, a controlled benchmark designed to diagnose object hallucination in vision-language models (VLMs), especially in binary object existence verification.
  • DO-Bench isolates error sources via multimodal interventions along two dimensions: Prior Override (progressively strengthening textual contextual priors while holding visual evidence fixed) and Perception-Limited (incrementally refining visual evidence from full scenes to localized object crops).
  • Two diagnostic metrics, PriorRobust and PerceptionAbility, are proposed to quantify how strongly models rely on priors versus how well they ground objects perceptually (a hypothetical sketch follows this list).
  • Experiments across multiple open- and closed-source VLMs show systematic, mechanism-dependent differences in prior sensitivity and perceptual reliability, indicating that object hallucination is not uniform across models.
  • The authors argue that attributing failures to specific mechanisms provides clearer insight than aggregate accuracy alone and can guide more targeted reliability improvements for VLMs.
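
The paper's exact metric definitions are not reproduced in this summary. As a purely illustrative sketch, assuming both metrics reduce to accuracy comparisons across intervention levels, they might look like this in Python (the function names and formulas below are hypothetical, not the paper's):

```python
# Hypothetical sketch only: the paper defines PriorRobust and
# PerceptionAbility, but its exact formulas are not given in this summary.
# Assumption: each metric compares accuracy across intervention levels.

def prior_robust(acc_by_prior_strength: list[float]) -> float:
    """Fraction of baseline accuracy retained as the injected textual
    prior strengthens (visual evidence held fixed). 1.0 = fully robust;
    values near 0 mean the prior overrides what the model sees."""
    baseline, strongest = acc_by_prior_strength[0], acc_by_prior_strength[-1]
    return strongest / baseline if baseline > 0 else 0.0

def perception_ability(acc_by_evidence_level: list[float]) -> float:
    """Accuracy gained as visual evidence sharpens from the full scene
    (first entry) to a localized object crop (last entry)."""
    return acc_by_evidence_level[-1] - acc_by_evidence_level[0]

# Example: accuracy drops from 0.92 to 0.61 under the strongest prior,
# and rises from 0.70 to 0.88 when the model is given an object crop.
print(prior_robust([0.92, 0.81, 0.61]))        # ≈ 0.66
print(perception_ability([0.70, 0.79, 0.88]))  # ≈ 0.18
```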

Abstract

Object-level hallucination remains a central reliability challenge for vision-language models (VLMs), particularly in binary object existence verification. Existing benchmarks emphasize aggregate accuracy but rarely disentangle whether errors stem from perceptual limitations or from the influence of contextual textual priors, leaving underlying failure mechanisms ambiguous. We introduce DO-Bench, a controlled diagnostic benchmark that isolates these sources through structured multimodal interventions. Rather than evaluating models in unconstrained settings, DO-Bench probes two complementary dimensions: the Prior Override dimension progressively strengthens contextual textual priors while holding visual evidence constant to assess resistance to prior pressure, and the Perception-Limited dimension incrementally enhances visual evidence from full-scene context to localized object crops to measure perceptual grounding strength. This paired design enables attribution of errors to prior suppression, perceptual insufficiency, or their interaction. We further define two diagnostic metrics, PriorRobust and PerceptionAbility, to quantify these behaviors consistently. Evaluations across diverse open- and closed-source VLMs reveal systematic differences in prior sensitivity and perceptual reliability, demonstrating that object hallucination reflects heterogeneous, mechanism-dependent failure patterns beyond aggregate accuracy.
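
To make the paired design concrete, here is a minimal sketch of the two intervention series as the abstract describes them, assuming a simple prompt-plus-image probe format. The `Probe` container, prompt templates, and function names are hypothetical; the paper's actual prompts and cropping pipeline are not given in this summary.

```python
# Minimal sketch of the paired intervention design; all names and prompt
# templates here are assumptions, not taken from the paper.

from dataclasses import dataclass
from typing import Any

@dataclass
class Probe:
    image: Any   # visual input shown to the VLM (e.g., a PIL image)
    prompt: str  # binary existence query, optionally with an injected prior

def prior_override_series(image: Any, obj: str, priors: list[str]) -> list[Probe]:
    """Prior Override: hold visual evidence fixed and strengthen the
    textual contextual prior step by step ("" = no injected prior)."""
    question = f"Is there a {obj} in the image? Answer yes or no."
    return [Probe(image, f"{ctx} {question}".strip()) for ctx in ["", *priors]]

def perception_limited_series(views: list[Any], obj: str) -> list[Probe]:
    """Perception-Limited: hold the question fixed and enhance visual
    evidence from the full scene (views[0]) to an object crop (views[-1])."""
    question = f"Is there a {obj} in the image? Answer yes or no."
    return [Probe(view, question) for view in views]
```

Scoring each series level by level would then yield exactly the per-level accuracies that metrics like PriorRobust and PerceptionAbility summarize.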