DO-Bench: An Attributable Benchmark for Diagnosing Object Hallucination in Vision-Language Models
arXiv cs.CV / 4/28/2026
Key Points
- The paper introduces DO-Bench, a controlled benchmark designed to diagnose object hallucination in vision-language models (VLMs), especially in binary object existence verification.
- DO-Bench isolates error sources by using multimodal interventions across two dimensions: Prior Override (increasing textual contextual priors while keeping visual evidence fixed) and Perception-Limited (increasing visual evidence granularity from full scenes to localized object crops).
- Two diagnostic metrics—PriorRobust and PerceptionAbility—quantify how strongly a model relies on textual priors versus how reliably it grounds objects perceptually, giving comparable scores across models.
- Experiments across multiple open- and closed-source VLMs show systematic, mechanism-dependent differences in prior sensitivity and perceptual reliability, indicating that object hallucination is not uniform across models.
- The authors argue that attributing failures to specific mechanisms provides clearer insight than aggregate accuracy alone and can guide more targeted reliability improvements for VLMs.
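The summary does not give the exact formulas for PriorRobust and PerceptionAbility, but the two-axis design suggests they are derived from accuracy deltas across intervention conditions. The sketch below is an illustrative placeholder under that assumption; the condition names (`baseline`, `prior_override`, `object_crop`) and both metric definitions are hypothetical, not taken from the paper.

```python
# Hedged sketch: illustrative placeholder metrics for a DO-Bench-style
# evaluation. The condition names and formulas are assumptions, not the
# paper's actual definitions. Each record is (condition, prediction, label)
# for a binary object-existence question.
from collections import defaultdict

def condition_accuracy(records):
    """Compute accuracy separately for each intervention condition."""
    hits, totals = defaultdict(int), defaultdict(int)
    for cond, pred, label in records:
        totals[cond] += 1
        hits[cond] += int(pred == label)
    return {c: hits[c] / totals[c] for c in totals}

def prior_robust(acc):
    # Hypothetical: 1 minus the accuracy drop when misleading textual
    # priors are injected while visual evidence is held fixed.
    return 1.0 - max(0.0, acc["baseline"] - acc["prior_override"])

def perception_ability(acc):
    # Hypothetical: accuracy on localized object crops, the most
    # evidence-rich point on the Perception-Limited axis.
    return acc["object_crop"]

records = [
    ("baseline", 1, 1), ("baseline", 0, 0),
    ("prior_override", 1, 0), ("prior_override", 0, 0),
    ("object_crop", 1, 1), ("object_crop", 1, 1),
]
acc = condition_accuracy(records)
print(prior_robust(acc), perception_ability(acc))
```

The key design point the benchmark argues for is visible even in this toy version: a single aggregate accuracy would blur the two failure modes, whereas per-condition scores attribute errors to prior reliance or weak perception separately.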