Beyond Shortcuts: Mitigating Visual Illusions in Frozen VLMs via Qualitative Reasoning

arXiv cs.CV · April 30, 2026


Key Points

  • The paper argues that frozen vision-language models (VLMs) are especially brittle on optical illusions because shortcut heuristics lead them to favor linguistic priors and memorized prototypes over direct visual evidence.
  • It introduces Structured Qualitative Inference (SQI), a training-free, inference-time framework that applies qualitative constraints to improve visual grounding without fine-tuning.
  • SQI uses three modules—Axiomatic Constraint Injection, Hierarchical Scene Decomposition, and Counterfactual Self-Verification—to reduce quantitative hallucinations, separate target signals from background distractors, and counter confirmation bias.
  • Experiments on the DataCV 2026 Challenge (Task I: Classic Illusion Understanding) show SQI ranked 2nd overall and improved accuracy across illusion categories.
  • The authors report better diagnostic interpretability compared with baselines, highlighting structured qualitative grounding as a promising approach for building more illusion-resistant VLMs.

Abstract

While Vision-Language Models (VLMs) have achieved state-of-the-art performance in general visual tasks, their perceptual robustness remains remarkably brittle when confronted with optical illusions. These failures are often attributed to shortcut heuristics, where models prioritize linguistic priors and memorized prototypes over direct visual evidence. In this work, we propose Structured Qualitative Inference (SQI), a training-free, data-centric framework designed to fortify visual grounding in frozen VLMs. SQI addresses perceptual anomalies through three systematic modules: (1) Axiomatic Constraint Injection, which suppresses erroneous metric estimations and quantitative hallucinations; (2) Hierarchical Scene Decomposition, which decouples target visual manifolds from complex background distractors; and (3) Counterfactual Self-Verification, an adversarial reasoning step that mitigates confirmation bias. By orchestrating these qualitative constraints at inference time, SQI effectively aligns high-level linguistic reasoning with low-level visual perception. Our framework was evaluated on the DataCV 2026 Challenge (Task I: Classic Illusion Understanding), where it placed 2nd overall. Experimental results demonstrate that SQI not only significantly enhances accuracy across diverse illusion categories but also provides superior diagnostic interpretability without any model fine-tuning. Our success underscores the potential of structured qualitative grounding as a robust paradigm for developing next-generation, illusion-resistant vision-language systems.
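Since SQI is training-free, the three modules amount to orchestrated prompting around a frozen model. The sketch below illustrates one way such a pipeline could be wired up; the prompt wording, the `query_vlm` callable, and the verification rule are all hypothetical assumptions for illustration, not the authors' implementation.

```python
from typing import Callable

# Hypothetical sketch of SQI's three inference-time modules as prompt
# orchestration around a frozen VLM. `query_vlm` stands in for any
# (image-conditioned) text -> text model call.

AXIOMS = (
    "Do not estimate exact lengths, angles, or counts; "
    "answer only with qualitative relations (longer/shorter, same/different)."
)

def axiomatic_constraint_injection(question: str) -> str:
    """Module 1: prepend qualitative axioms to suppress metric hallucinations."""
    return f"{AXIOMS}\nQuestion: {question}"

def hierarchical_scene_decomposition(query_vlm: Callable[[str], str]) -> str:
    """Module 2: describe targets separately from background distractors."""
    target = query_vlm("Describe only the target objects, ignoring the background.")
    context = query_vlm("Describe only the surrounding context elements.")
    return f"Targets: {target}\nContext: {context}"

def counterfactual_self_verification(
    query_vlm: Callable[[str], str], question: str, answer: str
) -> str:
    """Module 3: adversarial re-check; keep the draft only if it survives."""
    challenge = query_vlm(
        f"Assume the answer '{answer}' to '{question}' is wrong. "
        "Is there direct visual evidence against it? Reply YES or NO."
    )
    return "uncertain" if challenge.strip().upper().startswith("YES") else answer

def sqi_answer(query_vlm: Callable[[str], str], question: str) -> str:
    """Chain the three modules: decompose, constrain, draft, self-verify."""
    scene = hierarchical_scene_decomposition(query_vlm)
    prompt = axiomatic_constraint_injection(f"{scene}\n{question}")
    draft = query_vlm(prompt)
    return counterfactual_self_verification(query_vlm, question, draft)
```

With a Müller-Lyer-style query, for instance, the decomposition step would surface the arrow fins as context rather than letting them contaminate the length judgment, and the counterfactual pass downgrades any draft answer the model itself can argue against.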