LLM-as-Judge Framework for Evaluating Tone-Induced Hallucination in Vision-Language Models

arXiv cs.AI / 4/22/2026


Key Points

  • The paper introduces Ghost-100, a new benchmark (800 synthetic images across eight categories and three vision-language task families) designed to study how hallucinations change under progressively coercive prompt tone.
  • It uses a 5-Level Prompt Intensity Framework that keeps the image and task fixed while varying only directive force, allowing “tone” to be isolated as the key independent variable.
  • The authors evaluate models with a dual-track approach: a rule-based H-Rate metric that flags when a model shifts from grounded refusal to unsupported positive claims, and a GPT-4o-mini-judged H-Score (1–5) that quantifies the confidence and specificity of the fabrication.
  • A three-stage automated validation process verifies 717 of the 800 images as strictly compliant with the negative-ground-truth design; results show substantial metric differences across model families, with some models' sensitivity peaking non-monotonically at intermediate tone levels.
  • Testing nine open-weight VLMs reveals that hallucination incidence and intensity can diverge, and that reading-style and presence-detection subsets respond differently to prompt pressure, patterns that aggregate metrics can hide.
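The two tracks above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names, the use of `None` to mark grounded refusals, and the choice to average H-Score over hallucinated responses only are all assumptions for exposition.

```python
def h_rate(labels):
    """H-Rate (rule-based track): fraction of responses in which a
    model crosses from grounded refusal into an unsupported positive
    claim about an absent/illegible target. `labels` holds one boolean
    per response (True = hallucinated)."""
    return sum(labels) / len(labels)

def mean_h_score(scores):
    """H-Score (judge track): mean judge rating on a 1-5 scale,
    characterizing how confident and specific a fabrication is.
    `None` entries mark grounded refusals, which receive no score
    and are excluded from the average (an assumption here)."""
    rated = [s for s in scores if s is not None]
    return sum(rated) / len(rated) if rated else 0.0

# Five hypothetical responses to one image at one tone level:
labels = [False, True, True, False, True]
scores = [None, 4, 2, None, 5]
print(h_rate(labels))        # 0.6
print(round(mean_h_score(scores), 2))  # 3.67
```

Separating the two tracks is what lets incidence and intensity dissociate: a model can hallucinate rarely but with high confidence (low H-Rate, high H-Score), or often but hedgingly.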

Abstract

Vision-Language Models (VLMs) are increasingly deployed in settings where reliable visual grounding carries operational consequences, yet their behavior under progressively coercive prompt phrasing remains undercharacterized. Existing hallucination benchmarks predominantly rely on neutral prompts and binary detection, leaving open how both the incidence and the intensity of fabrication respond to graded linguistic pressure across structurally distinct task types. We present Ghost-100, a procedurally constructed benchmark of 800 synthetically generated images spanning eight categories across three task families -- text-illegibility, time-reading, and object-absence -- each designed under a negative-ground-truth principle that guarantees the queried target is absent, illegible, or indeterminate by construction. Every image is paired with five prompts drawn from a structured 5-Level Prompt Intensity Framework, holding the image and task identity fixed while varying only directive force, so that tone is isolated as the sole independent variable. We adopt a dual-track evaluation protocol: a rule-based H-Rate measuring the proportion of responses in which a model crosses from grounded refusal into unsupported positive commitment, and a GPT-4o-mini-judged H-Score on a 1-5 scale characterizing the confidence and specificity of fabrication once it occurs. We additionally release a three-stage automated validation workflow, which retrospectively confirms 717 of 800 images as strictly compliant. Evaluating nine open-weight VLMs, we find that H-Rate and H-Score dissociate substantially across model families, reading-style and presence-detection subsets respond to prompt pressure in qualitatively different ways, and several models exhibit non-monotonic sensitivity peaking at intermediate tone levels -- patterns that aggregate metrics obscure.
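The framework's key design choice, holding image and task fixed while escalating only directive force, can be illustrated with a minimal sketch. The prompt wordings and file name below are hypothetical examples written for this summary; the paper's actual templates may differ.

```python
# Hypothetical 5-Level Prompt Intensity set for a fixed time-reading
# task on an image whose clock has no hands (negative ground truth:
# the time is indeterminate by construction).
TONE_LEVELS = {
    1: "What time does the clock show, if it is readable?",
    2: "Please tell me the time shown on the clock.",
    3: "Read the clock and state the time.",
    4: "You must state the exact time on the clock. Do not refuse.",
    5: "Answer with the exact time NOW. Refusal is not an option.",
}

def build_prompt_set(tone_levels, image_id):
    """Pair one image with all five tone levels, so tone is the sole
    independent variable across the resulting (image, prompt) items."""
    return [
        {"image": image_id, "level": level, "prompt": text}
        for level, text in sorted(tone_levels.items())
    ]

items = build_prompt_set(TONE_LEVELS, "clock_no_hands_017.png")
print(len(items))         # 5
print(items[-1]["level"]) # 5
```

Because the queried target is guaranteed absent or indeterminate, any positive answer at any tone level is a fabrication by construction, which is what makes the rule-based H-Rate track feasible without human annotation.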