Arbitration Failure, Not Perceptual Blindness: How Vision-Language Models Resolve Visual-Linguistic Conflicts
arXiv cs.CL, April 13, 2026
Key Points
- The paper asks whether VLM errors on visual-linguistic conflicts stem from weak perception or from mismanaged arbitration between image evidence and prior textual knowledge.
- Across ten VLMs, “failed” answers still carry strong, linearly decodable visual evidence in early layers (probe AUC > 0.86), with encoding strength nearly identical to successful cases; a linear-probe sketch follows this list.
- Layer-by-layer Multimodal Arbitration Crossover (MAC) and the last-layer logit gap predict grounding outcomes better than the raw strength of visual encoding; a logit-lens sketch of the per-layer gap appears after the list.
- Causal testing via full-sequence activation patching finds that image tokens carry most of the causal impact (text tokens none), and that targeted, training-free activation steering in early layers can improve visual grounding by up to +3.8% in some settings; see the steering sketch below.
- The authors conclude that VLMs “already see well”: the key failure is not perceiving visual evidence but acting on it, and targeted interventions can bridge that gap.
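To make the decodability claim concrete, here is a minimal sketch of the kind of linear probe that yields such AUC figures: fit logistic regression on early-layer hidden states and score it on held-out examples. The extraction step, array shapes, and label encoding are assumptions, not the paper's exact protocol.

```python
# Hypothetical probe of whether the image-consistent answer is linearly
# decodable from one early layer's hidden states, even on failure cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def probe_auc(hidden_states: np.ndarray, labels: np.ndarray) -> float:
    """hidden_states: (n_examples, d_model) activations from one layer,
    e.g. at the final image-token position (an assumption).
    labels: 1 if the image supports the conflicting answer, else 0."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states, labels, test_size=0.3, random_state=0, stratify=labels
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])

# Scoring the probe on the failure subset alone is what would show that
# "failed" answers still encode the visual evidence (AUC > 0.86).
```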
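The MAC signal can be read off a logit-lens-style trace: project each layer's final-position residual stream through the unembedding and track the gap between the image-consistent token and the prior-consistent token. The sketch below assumes a HuggingFace Llama-style model (`model.model.norm`, `model.lm_head`); the token ids and the exact crossover definition are assumptions.

```python
# Schematic per-layer logit gap; the crossover layer is roughly where the
# gap first turns positive (model commits to the image over the prior).
import torch

@torch.no_grad()
def layerwise_logit_gap(model, input_ids, img_token_id, prior_token_id):
    out = model(input_ids, output_hidden_states=True)
    gaps = []
    for h in out.hidden_states:              # embeddings + one tensor per layer
        h_last = model.model.norm(h[:, -1])  # final-position state, normalized
        logits = model.lm_head(h_last)       # logit-lens projection
        gaps.append((logits[0, img_token_id] - logits[0, prior_token_id]).item())
    return gaps
```

On this view, the last entry of `gaps` is the last-layer logit gap the paper reports as predictive of grounding outcomes.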
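Finally, a sketch of what training-free activation steering in early layers can look like: shift the residual stream by a precomputed direction via forward hooks. The Llama-style module path (`model.model.layers`), the steering vector (e.g. a grounded-minus-ungrounded mean-activation difference), and the scale `alpha` are all assumptions; the paper may also restrict the shift to image-token positions.

```python
# Add alpha * steer_vec to the output hidden states of selected layers.
import torch

def add_steering_hooks(model, steer_vec: torch.Tensor, layers, alpha: float = 4.0):
    """Register forward hooks that steer each selected decoder layer."""
    handles = []

    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * steer_vec.to(hidden.device, hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    for i in layers:                        # e.g. early layers, an assumption
        handles.append(model.model.layers[i].register_forward_hook(hook))
    return handles  # call h.remove() on each handle to undo the intervention
```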