Seeing the Evidence, Missing the Answer: Tool-Guided Vision-Language Models on Visual Illusions

arXiv cs.CV / 4/1/2026


Key Points

  • Vision-language models are shown to have a consistent bias toward treating classic optical illusions as “real,” even after counterfactual image modifications.
  • The paper proposes a tool-guided inference framework for the DataCV 2026 Challenge that mitigates this failure mode without any model training, by giving an off-the-shelf VLM access to a small set of generic image-manipulation tools.
  • An illusion-type routing prompt prescribes which tools to call for each perceptual question category, and every tool call produces an immutable image resource appended to a persistent registry, which the model can reference and compose in later reasoning steps.
  • The approach demonstrates strong cross-structural generalization, maintaining performance on test sets with structurally unfamiliar illusion variants (e.g., rotated Mach Bands).
  • The authors flag three open observations: a positive-detection bias likely rooted in imbalanced illusion training data, a dissociation between pixel-accurate spatial reasoning and logical inference over self-generated annotations, and pronounced sensitivity to image compression artifacts that compounds false positives.

Abstract

Vision-language models (VLMs) exhibit a systematic bias when confronted with classic optical illusions: they overwhelmingly predict the illusion as "real" regardless of whether the image has been counterfactually modified. We present a tool-guided inference framework for the DataCV 2026 Challenge (Tasks I and II) that addresses this failure mode without any model training. An off-the-shelf vision-language model is given access to a small set of generic image manipulation tools: line drawing, region cropping, side-by-side comparison, and channel isolation, together with an illusion-type-routing system prompt that prescribes which tools to invoke for each perceptual question category. Critically, every tool call produces a new, immutable image resource appended to a persistent registry, so the model can reference and compose any prior annotated view throughout its reasoning chain. Rather than hard-coding illusion-specific modules, this generic-tool-plus-routing design yields strong cross-structural generalization: performance remained consistent from the validation set to a test set containing structurally unfamiliar illusion variants (e.g., Mach Bands rotated from vertical to horizontal stacking). We further report three empirical observations that we believe warrant additional investigation: (i) a strong positive-detection bias likely rooted in imbalanced illusion training data, (ii) a striking dissociation between pixel-accurate spatial reasoning and logical inference over self-generated annotations, and (iii) pronounced sensitivity to image compression artifacts that compounds false positives.