Arbitration Failure, Not Perceptual Blindness: How Vision-Language Models Resolve Visual-Linguistic Conflicts

arXiv cs.CL / 4/13/2026


Key Points

  • The paper investigates whether VLM errors in visual-linguistic conflicts stem from weak perception or from a mismanaged arbitration between image evidence and prior text knowledge.
  • Across ten VLMs, “failed” answers still retain strongly linearly decodable visual evidence in early layers (AUC > 0.86), with encoding strength nearly identical to that of successful cases.
  • Layer-by-layer Multimodal Arbitration Crossover (MAC) and last-layer logit gaps are shown to be more predictive of grounding outcomes than the raw strength of visual encoding.
  • Causal testing via full-sequence activation patching finds that image tokens carry nearly all of the causal impact while text tokens carry none, and that targeted, training-free activation steering in early layers can improve visual grounding by up to +3.8% in some settings.
  • The authors conclude that VLMs “already see well,” but the key failure mode is acting on what they see, and that targeted interventions can bridge this gap.
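The early-layer decodability claim (AUC > 0.86) rests on fitting linear probes to hidden states. A minimal sketch of such a probe, using synthetic activations in place of real VLM hidden states; the feature dimension, class shift, and the binary "attribute present" label are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for early-layer hidden states: 64-dim activations in
# which a visual attribute (e.g. "the object is blue") shifts the features
# along one direction when present.
n, d = 400, 64
labels = rng.integers(0, 2, size=n)                  # 1 = attribute visible in image
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
feats = rng.normal(size=(n, d)) + 2.5 * labels[:, None] * direction

# A linear probe, mirroring the paper's linear-decodability test: train on
# one split of samples, report AUC on a held-out split.
train, test = slice(0, 300), slice(300, None)
probe = LogisticRegression(max_iter=1000).fit(feats[train], labels[train])
auc = roc_auc_score(labels[test], probe.predict_proba(feats[test])[:, 1])
print(f"probe AUC: {auc:.3f}")  # well above chance when the attribute is encoded
```

Running the same probe separately on hidden states from correctly and incorrectly answered samples, and comparing the two AUCs, is the kind of test behind the encoding-grounding dissociation.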

Abstract

When a Vision-Language Model (VLM) sees a blue banana and answers "yellow", is the problem one of perception or of arbitration? We explore this question across ten VLMs of various sizes and reveal an Encoding--Grounding Dissociation: models that fail to report what they see (and thus give a wrong answer) still encode the visual evidence as strongly as models that answer correctly. Using Multimodal Arbitration Crossover (MAC) analysis with layer-by-layer Logit Lens probing, we track the competition between visual and prior signals across every layer of each model. Visual attributes are linearly decodable from early layers (AUC > 0.86), and decoding accuracy remains nearly identical for successful and failed samples. However, the final-layer logit gap -- not the strength of encoding -- better predicts grounding outcomes. Having established when VLMs base their answers on image cues rather than prior knowledge, we turn to causal relationships, which we establish through full-sequence activation patching. The last-token interventions standard in LLM interpretability have no effect on VLMs; in contrast, replacing the full token sequence at the layers identified by MAC alters 60-84% of outputs. Partial-token decomposition shows that image tokens carry almost all of the causal impact, while text tokens carry none. Scaling addresses the remaining architectural differences and achieves perfect retention. Moving from diagnosis to intervention, we show that training-free activation steering -- both linear and sparse-autoencoder-guided -- in early layers can improve visual grounding by up to +3.8% without degrading performance in some setups. Overall, these findings lead to a clear conclusion: VLMs already see well; the challenge is acting on what they see, and targeted interventions can help bridge this gap.
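Full-sequence activation patching replaces every token's activation at a chosen layer with activations cached from a counterfactual run, then checks whether the output flips. A self-contained sketch with a tiny position-wise toy model standing in for a VLM; the model, the patched layer index, and the random "clean"/"corrupt" inputs are assumptions for illustration (in a real transformer, attention lets a patch at one position propagate to others, which is why last-token patching behaves so differently there):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a VLM decoder: a stack of position-wise layers over a
# token sequence. Real patching hooks a transformer block of the actual model.
class ToyModel(nn.Module):
    def __init__(self, d=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.tanh(layer(x))
        return x

model = ToyModel()
patch_layer = 1  # hypothetical layer flagged by a MAC-style analysis

def run_with_patch(model, x, patch=None, tokens=slice(None)):
    """Run the model; optionally overwrite activations at patch_layer."""
    cache = {}
    def hook(module, inp, out):
        cache["act"] = out.detach().clone()
        if patch is not None:
            out = out.clone()
            out[:, tokens] = patch[:, tokens]  # full-sequence vs partial-token patch
            return out
    h = model.layers[patch_layer].register_forward_hook(hook)
    y = model(x)
    h.remove()
    return y, cache["act"]

x_clean = torch.randn(1, 8, 16)    # e.g. prompt with the true (blue) image
x_corrupt = torch.randn(1, 8, 16)  # e.g. prompt with a conflicting prior cue

y_clean, clean_act = run_with_patch(model, x_clean)
y_corrupt, _ = run_with_patch(model, x_corrupt)
# Patch the FULL token sequence at patch_layer with clean activations:
y_patched, _ = run_with_patch(model, x_corrupt, patch=clean_act)
# Patch only the last token (the standard LLM-style intervention):
y_last, _ = run_with_patch(model, x_corrupt, patch=clean_act, tokens=slice(-1, None))

print("full-sequence patch shift:", (y_patched - y_corrupt).norm().item())
print("last-token patch shift:   ", (y_last - y_corrupt).norm().item())
```

Restricting `tokens` to the image-token positions versus the text-token positions is the partial-token decomposition described above; steering works with the same hook machinery, but adds a fixed direction to the activations instead of replacing them.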