Source-Modality Monitoring in Vision-Language Models

arXiv cs.CL / 4/27/2026


Key Points

  • The paper introduces “source-modality monitoring,” the ability of multimodal models to track and communicate which input source a given piece of information came from (e.g., linking the word “image” in a prompt to the actual image input).
  • It frames source-modality monitoring as a case of the broader “binding problem,” examining how models associate words with particular components of their multimodal input and context.
  • Experiments across 11 vision-language models on target-modality information retrieval tasks show that both syntactic and semantic cues matter, but that semantic signals often dominate when the modalities are distributionally distinct.
  • The authors discuss the implications of these findings for model robustness and for increasingly multimodal agentic systems that must reliably track and use different input modalities.

Abstract

We define and investigate source-modality monitoring -- the ability of multimodal models to track and communicate the input source from which pieces of information originate. We consider source-modality monitoring as an instance of the more general binding problem, and evaluate the extent to which models exploit syntactic vs. semantic signals in order to bind words like “image” in a user-provided prompt to specific components of their input and context (i.e., actual images). Across experiments spanning 11 vision-language models (VLMs) performing target-modality information retrieval tasks, we find that both syntactic and semantic signals play an important role, but that the latter tend to outweigh the former in cases when modalities are highly distinct distributionally. We discuss the implications of these findings for model robustness, and in the context of increasingly multimodal agentic systems.
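
To make the evaluation setup more concrete, here is a minimal sketch of how a target-modality information retrieval probe could be structured: the model is shown an image and a text passage that assign conflicting values to the same attribute, and is then asked to report the value “according to the image” or “according to the text.” The `Trial` structure, the `query_vlm` stub, the prompt wording, and the scoring rule are illustrative assumptions, not the paper’s actual protocol.

```python
# Minimal sketch of a target-modality information retrieval probe.
# Everything here is illustrative: query_vlm() is a placeholder for a real
# vision-language model call, and the attribute values, prompt wording, and
# scoring rule are invented for this example, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Trial:
    image_path: str   # image in which the attribute takes image_value
    image_value: str  # value shown in the image (e.g., a red ball)
    text_value: str   # conflicting value stated in the accompanying text
    target: str       # modality the question names: "image" or "text"


def build_prompt(trial: Trial) -> str:
    """Pair a text passage that conflicts with the image, then ask about
    one explicitly named source modality."""
    passage = f"The accompanying text states that the ball is {trial.text_value}."
    question = f"According to the {trial.target}, what color is the ball?"
    return f"{passage}\n{question}"


def query_vlm(prompt: str, image_path: str) -> str:
    """Placeholder for an actual VLM backend (swap in any inference API
    that accepts an image plus a text prompt and returns a string)."""
    raise NotImplementedError("plug in a concrete vision-language model here")


def modality_tracking_accuracy(trials: list[Trial]) -> float:
    """Fraction of trials in which the answer contains the value carried by
    the modality the question pointed at."""
    correct = 0
    for trial in trials:
        answer = query_vlm(build_prompt(trial), trial.image_path).lower()
        expected = trial.image_value if trial.target == "image" else trial.text_value
        correct += int(expected in answer)
    return correct / len(trials)
```

Running mirrored trials over the same conflicting inputs (one question naming the image, one naming the text) would reveal whether answers actually follow the named modality rather than, say, defaulting to the text; manipulations of syntactic cues and of how distributionally distinct the two modalities are, as discussed above, would be layered on top of a scaffold like this.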