More Than Meets the Eye: Measuring the Semiotic Gap in Vision-Language Models via Semantic Anchorage

arXiv cs.CL / 4/21/2026


Key Points

  • The paper introduces DIVA, a controlled benchmark for vision-language models that replaces photorealistic detail with schematic/iconic visuals to test how visual fidelity affects idiomatic compositionality.
  • It proposes the Semantic Alignment Gap (Δ), an architecture-agnostic metric that measures the divergence in visual grounding between literal and idiomatic interpretations.
  • The authors add a directional signed bias b(t) to separately quantify whether models prefer literal readings and with what strength.
  • Evaluating eight recent VLMs, the study finds a consistent Literal Superiority Bias and shows that simply increasing model scale does not eliminate literal preference.
  • The results indicate that higher visual fidelity can reduce symbolic alignment, implying that hyper-realistic imagery may cognitively interfere with meaning grounding, and that abstraction plus semantic anchoring is beneficial.

Abstract

Vision-Language Models (VLMs) excel at photorealistic generation, yet often struggle to represent abstract meaning such as idiomatic interpretations of noun compounds. To study whether high visual fidelity interferes with idiomatic compositionality, we introduce DIVA, a controlled benchmark that replaces high-fidelity visual detail with schematic iconicity by generating paired, sense-anchored visualizations for literal and idiomatic readings. We further propose the Semantic Alignment Gap (Δ), an architecture-agnostic metric that quantifies divergence between literal and idiomatic visual grounding. We additionally introduce a directional signed bias b(t) to separately measure the direction and strength of literal preference. Evaluating eight recent VLMs, we reveal a consistent Literal Superiority Bias: model scale alone does not resolve literal preference, and increased visual fidelity is associated with weaker symbolic alignment, suggesting cognitive interference from hyper-realistic imagery. Our findings suggest that improving compositional understanding requires iconographic abstraction of visual input and anchoring both interpretation and generation in the intended meaning.
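This summary does not reproduce the paper's exact formulas for Δ and b(t). As a rough illustration only, assuming both metrics are derived from text-image similarity scores (e.g. CLIP-style cosine similarity), one plausible reading is: Δ as the magnitude of divergence between literal and idiomatic grounding, and b(t) as the signed difference, positive when the model prefers the literal reading. The function names and formulas below are hypothetical, not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def alignment_gap_and_bias(text_emb, literal_img_emb, idiomatic_img_emb):
    """Hypothetical sketch of the paper's metrics.

    Assumes similarity-based grounding scores:
      - delta (Δ): unsigned divergence between literal and idiomatic grounding
      - bias (b):  signed difference; > 0 indicates literal preference
    """
    s_literal = cosine(text_emb, literal_img_emb)
    s_idiomatic = cosine(text_emb, idiomatic_img_emb)
    delta = abs(s_literal - s_idiomatic)
    bias = s_literal - s_idiomatic
    return delta, bias

# Toy example: a phrase embedding that matches the literal depiction
# perfectly and the idiomatic one not at all.
text = np.array([1.0, 0.0])
literal = np.array([1.0, 0.0])
idiomatic = np.array([0.0, 1.0])
delta, bias = alignment_gap_and_bias(text, literal, idiomatic)
```

Under this reading, a consistent Literal Superiority Bias would appear as b(t) > 0 across most test items, while Δ near zero would indicate that both senses are grounded equally well.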