VLMs Need Words: Vision Language Models Ignore Visual Detail In Favor of Semantic Anchors

arXiv cs.CL / 4/6/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that Vision Language Models (VLMs) underperform on fine-grained visual tasks because their training pipeline emphasizes mapping visual content into the text (language) space.
  • It claims this causes VLMs to reason reliably only about visual entities that can be linked to existing, nameable language concepts; unnameable or novel visual entities instead yield brittle or hallucinated textual descriptions.
  • Experiments on visual correspondence tasks show VLM accuracy is substantially higher for semantic, shape, and face matching when the relevant entities are nameable in language than when they are unnameable.
  • A Logit Lens analysis supports the proposed mechanism: the models assign clearer semantic labels to nameable entities and surface more unique corresponding tokens for them than for unnameable ones.
  • The authors find that providing arbitrary names for unknown entities improves performance, but task-specific fine-tuning improves generalization even more without relying on language priors.
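The Logit Lens technique cited above can be illustrated with a minimal sketch: an intermediate hidden state is projected directly through the model's unembedding matrix to see which vocabulary token it most resembles at that layer. The matrices below are toy random stand-ins, not weights or activations from any real VLM; dimensions and vocabulary size are illustrative only.

```python
import numpy as np

# Toy dimensions (illustrative; real VLMs use far larger values).
rng = np.random.default_rng(0)
d_model, vocab_size, n_layers = 16, 50, 4

# Stand-in hidden states of one visual token, one row per layer.
hidden_states = rng.normal(size=(n_layers, d_model))
# Stand-in unembedding matrix mapping hidden states to vocabulary logits.
W_U = rng.normal(size=(d_model, vocab_size))

def logit_lens(h, W_U):
    """Project an intermediate hidden state into vocabulary space and
    return the index of the most likely token at that layer."""
    logits = h @ W_U
    return int(np.argmax(logits))

# Track which token each layer's representation decodes to.
per_layer_tokens = [logit_lens(h, W_U) for h in hidden_states]
```

In the paper's setting, the diagnostic is whether such per-layer decodings converge on a clear semantic label: for nameable entities they do, while for unnameable ones the decoded tokens stay diffuse or unstable.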

Abstract

Vision Language Models (VLMs) achieve impressive performance across a wide range of multimodal tasks. However, on some tasks that demand fine-grained visual perception, they often fail even when the required information is present in their internal representations. In this work, we demonstrate that this gap arises from their narrow training pipeline, which focuses on moving visual information into the textual space. Consequently, VLMs can only reason about visual entities that can be mapped to known concepts in the language space, leaving vision-focused tasks such as visual correspondence and reasoning about novel visual entities poorly supported. As a result, VLMs are severely limited in several important multimodal capabilities: they fall back on brittle, hallucinated textual descriptions of visual entities that they cannot map into language. We verify this behavior through visual correspondence tasks, in which VLMs must detect matching entities between two images. Testing across semantic, shape, and face correspondence tasks, we find that VLMs perform much better when the relevant entities are nameable in language than when they are unnameable. Mechanistically, our Logit Lens analyses confirm that VLMs explicitly assign semantic labels to nameable entities and surface more unique corresponding tokens than for unnameable entities. Furthermore, we show that teaching completely arbitrary names for unknown entities improves performance, yet task-specific fine-tuning yields even stronger generalization without relying on language priors. Our findings suggest that current VLM failures on visual tasks reflect learned shortcuts from their training, rather than a fundamental limitation of multimodal architectures.