The Expense of Seeing: Attaining Trustworthy Multimodal Reasoning Within the Monolithic Paradigm

arXiv cs.CV / April 23, 2026


Key Points

  • The paper argues that today’s Vision-Language Models (VLMs) are not reliably synthesizing visual and language information as assumed, often relying on strong language priors to “skip” visual bottlenecks.
  • It claims current multimodal evaluation methods (e.g., ablations or new dataset creation) can’t separate dataset bias from true architectural inability, undermining trust in reported multimodal performance.
  • The authors propose the Modality Translation Protocol, an information-theoretic approach designed to reveal how much “seeing” is actually happening, introducing three metrics: the Toll of Seeing (ToS), the Curse of Seeing (CoS), and the Fallacy of Seeing (FoS); a sketch of how such gap metrics could be computed follows this list.
  • They introduce the Semantic Sufficiency Criterion (SSC) and suggest a Divergence Law of Multimodal Scaling, predicting that scaling language components may worsen the penalty caused by visual bottlenecks.
  • The work challenges the KDD community to move beyond the goal of “multimodal gain” and instead use the SSC as an active architectural blueprint for truly grounded multimodal reasoning.
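To make the protocol concrete, below is a minimal sketch of how its metrics might be computed from a model's scores under matched evaluation conditions. The paper's exact formulas are not reproduced here, so `ProtocolScores` and every gap definition in the sketch are our illustrative assumptions, not the authors' published definitions.

```python
# A minimal sketch of Modality Translation Protocol metrics, assuming
# gap-style definitions over matched evaluation conditions. All formulas
# below are illustrative assumptions, not the paper's published ones.

from dataclasses import dataclass


@dataclass
class ProtocolScores:
    """Scores (e.g., accuracy in [0, 1]) for one VLM under four conditions."""
    vision: float       # question + image: evidence must pass the visual pathway
    translation: float  # question + the same semantic payload rendered as text
    both: float         # question + translated text + image together
    prior_only: float   # question alone: no evidence, pure language priors


def toll_of_seeing(s: ProtocolScores) -> float:
    """ToS (assumed): score lost when identical semantics arrive as pixels
    rather than text -- the price of the visual bottleneck."""
    return s.translation - s.vision


def curse_of_seeing(s: ProtocolScores) -> float:
    """CoS (assumed): degradation from adding the image on top of
    already-sufficient text -- vision actively hurting."""
    return s.translation - s.both


def fallacy_of_seeing(s: ProtocolScores) -> float:
    """FoS (assumed): the share of 'multimodal' performance reachable with
    no evidence at all, i.e., attributable to language priors alone."""
    return s.prior_only


def satisfies_ssc(s: ProtocolScores, tol: float = 0.02) -> bool:
    """Semantic Sufficiency Criterion (assumed reading): once the semantic
    payload is held fixed, the visual route should match the text route."""
    return abs(toll_of_seeing(s)) <= tol


if __name__ == "__main__":
    # Illustrative numbers only.
    s = ProtocolScores(vision=0.58, translation=0.81, both=0.77, prior_only=0.49)
    print(f"ToS = {toll_of_seeing(s):+.2f}")    # +0.23: a heavy toll
    print(f"CoS = {curse_of_seeing(s):+.2f}")   # +0.04: a mild curse
    print(f"FoS = {fallacy_of_seeing(s):.2f}")  # 0.49: strong priors
    print(f"SSC satisfied: {satisfies_ssc(s)}")
```

The key design point, per the abstract, is that the semantic payload is *translated* between modalities rather than ablated, so any gap between the `vision` and `translation` conditions isolates the architecture's visual pathway rather than a dataset bias.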

Abstract

The rapid proliferation of Vision-Language Models (VLMs) is widely celebrated as the dawn of unified multimodal knowledge discovery, but its foundation rests on a dangerous, unquestioned axiom: that current VLMs faithfully synthesise multimodal data. We argue they do not. Instead, a profound crisis of trustworthiness underlies the dominant Vision Encoder-Projector-LLM paradigm. Rather than extracting grounded knowledge from visual inputs, state-of-the-art models frequently exhibit functional blindness, i.e., they exploit strong language priors to bypass severe visual representation bottlenecks. In this work, we challenge the conventional methodology of multimodal evaluation, which relies on data ablation or new dataset creation and therefore fatally conflates dataset biases with architectural incapacity. We propose a radical, information-theoretic departure: the Modality Translation Protocol, designed to quantifiably unmask the Expense of Seeing. By translating semantic payloads rather than ablating them, we formulate three novel metrics -- the Toll (ToS), Curse (CoS), and Fallacy (FoS) of Seeing -- culminating in the Semantic Sufficiency Criterion (SSC). Furthermore, we posit a provocative Divergence Law of Multimodal Scaling, hypothesising that as the underlying language engines scale to unprecedented reasoning capabilities, the mathematical penalty of the visual knowledge bottleneck paradoxically increases. We challenge the KDD community to abandon the illusory pursuit of "multimodal gain". By elevating the SSC from a passive diagnostic constraint to an active architectural blueprint, we provide the rigorous, trustworthy foundation required to force the next generation of AI systems to truly see their data and achieve genuinely multimodal reasoning.
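One plausible way to read the hypothesised Divergence Law is as a capacity argument. The formalisation below is our reconstruction under stated assumptions, not the paper's own statement: it assumes the encoder-projector bottleneck caps the visual route at a fixed score ceiling while the text route keeps improving with language-engine scale.

```latex
% A hedged reconstruction of the Divergence Law, not the paper's formalism.
% Assume the encoder-projector bottleneck admits at most capacity C of
% task-relevant visual information, so the visual-route score is capped by
% some ceiling f(C), while the text-route score S_text(N) keeps improving
% as the language engine scales in parameter count N. Then the toll obeys
\[
  \mathrm{ToS}(N) \;=\; S_{\mathrm{text}}(N) - S_{\mathrm{vision}}(N)
  \;\geq\; S_{\mathrm{text}}(N) - f(C),
\]
% which is non-decreasing in N for fixed C: the stronger the language
% engine, the larger the mathematical penalty of the visual bottleneck.
```

Under this reading, scaling the LLM without widening the visual bottleneck can only grow the toll, which is exactly the paradox the abstract describes.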