When Vision-Language Models Judge Without Seeing: Exposing Informativeness Bias

arXiv cs.AI / 4/21/2026


Key Points

  • The paper argues that vision-language models used as judges often rely on answer “informativeness” rather than actually attending to image content, reducing evaluation reliability.
  • It introduces a flaw called “informativeness bias,” where judges select answers that appear more informative even when those answers conflict with what the image shows.
  • The authors propose BIRCH, a two-step judging paradigm that first corrects candidate answers for inconsistencies with the image and then compares candidates against this corrected, image-grounded anchor.
  • Experiments across multiple models and benchmarks show BIRCH can reduce informativeness bias by up to 17% and improve judge-related performance by up to 9.8%.
  • The work claims current VLM-as-a-Judge systems overlook a fundamental design issue and calls for more principled, image-faithful evaluation methods.

Abstract

The reliability of VLM-as-a-Judge is critical for the automatic evaluation of vision-language models (VLMs). Despite recent progress, our analysis reveals that VLM-as-a-Judge systems often pay limited attention to the image when making decisions. Instead, they often blindly favor the more informative answer, even when they can recognize that it conflicts with the image content. We call this problem informativeness bias, and it significantly undermines judge reliability. To address it, we propose BIRCH (Balanced Informativeness and CoRrectness with a Truthful AnCHor), a judging paradigm that first corrects candidate answers for inconsistencies with the image content, and then compares the answers against this corrected version. This shifts the judge's focus from informativeness to image-grounded correctness. Experiments on multiple models and benchmarks show that BIRCH reduces informativeness bias by up to 17%, resulting in performance gains of up to 9.8%. Our work reveals an overlooked but fundamental flaw in current VLM-as-a-Judge systems and highlights the need for more principled designs.
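The two-step paradigm the abstract describes can be sketched in toy form. Everything below is an illustrative assumption, not the authors' implementation: the real method uses a VLM to correct and compare answers, whereas this sketch represents answers as (subject, claim) tuples and uses simple matching, just to show the control flow of "correct first, then judge against the corrected anchor."

```python
# Toy sketch of a BIRCH-style two-step judge. Hypothetical data model:
# an "answer" is a list of (subject, claim) tuples, and "image_facts"
# stands in for what a VLM could verify against the image.

def correct_against_image(answer, image_facts):
    """Step 1: build a truthful anchor by correcting claims that
    conflict with the image."""
    corrected = []
    for claim in answer:
        if claim in image_facts:
            corrected.append(claim)          # already image-consistent
        else:
            # Replace a conflicting claim with the image-grounded fact
            # about the same subject; drop claims that are unverifiable.
            fix = next((f for f in image_facts if f[0] == claim[0]), None)
            if fix is not None:
                corrected.append(fix)
    return corrected

def birch_judge(answer_a, answer_b, image_facts):
    """Step 2: score each candidate by agreement with the corrected
    anchor, not by raw informativeness (claim count)."""
    anchor = correct_against_image(answer_a + answer_b, image_facts)
    score = lambda ans: sum(1 for claim in ans if claim in anchor)
    return "A" if score(answer_a) >= score(answer_b) else "B"

# Example: B is wordier ("more informative") but misstates the image.
image = [("cat", "on the sofa"), ("lamp", "off")]
a = [("cat", "on the sofa")]
b = [("cat", "on the floor"), ("lamp", "on"), ("rug", "striped")]
print(birch_judge(a, b, image))  # → A: anchor-grounded, not length-based
```

A judge that simply counted claims would pick B; anchoring the comparison to the corrected version flips the decision to the image-faithful answer A, which is the bias the paper targets.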