Do Vision-Language Models Truly Perform Vision Reasoning? A Rigorous Study of the Modality Gap

arXiv cs.CL / 4/20/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study questions whether vision-language models (VLMs) truly perform vision-grounded reasoning or instead rely mainly on their text-based reasoning capabilities.
  • It introduces CrossMath, a controlled multimodal benchmark that presents identical, task-relevant information in text-only, image-only, and image+text formats to isolate modality-specific effects.
  • Experiments across state-of-the-art VLMs show a consistent modality gap, where performance is strong for text-only inputs but often degrades when visual information is added (image+text).
  • The results suggest that current VLM reasoning occurs primarily in the textual space with limited use of visual evidence.
  • Fine-tuning VLMs on a curated CrossMath training set improves reasoning performance across modalities and provides solid gains on two general visual reasoning tasks, with code released on GitHub.

Abstract

Reasoning in vision-language models (VLMs) has recently attracted significant attention due to its broad applicability across diverse downstream tasks. However, it remains unclear whether the superior performance of VLMs stems from genuine vision-grounded reasoning or relies predominantly on the reasoning capabilities of their textual backbones. To systematically measure this, we introduce CrossMath, a novel multimodal reasoning benchmark designed for controlled cross-modal comparisons. Specifically, we construct each problem in text-only, image-only, and image+text formats, guaranteeing identical task-relevant information as verified by human annotators. This rigorous alignment effectively isolates modality-specific reasoning differences while eliminating confounding factors such as information mismatch. Extensive evaluation of state-of-the-art VLMs reveals a consistent phenomenon: a substantial performance gap between textual and visual reasoning. Notably, VLMs excel with text-only inputs, whereas incorporating visual data (image+text) frequently degrades performance compared to the text-only baseline. These findings indicate that current VLMs conduct reasoning primarily in the textual space, with limited genuine reliance on visual evidence. To mitigate this limitation, we curate a CrossMath training set for VLM fine-tuning. Empirical evaluations demonstrate that fine-tuning on this training set significantly boosts reasoning performance across all individual and joint modalities, while yielding robust gains on two general visual reasoning tasks. Source code is available at https://github.com/xuyige/CrossMath.
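The evaluation protocol the abstract describes — scoring the same problems under text-only, image-only, and image+text formats and comparing accuracies — can be sketched as follows. This is a minimal illustration assuming per-problem correctness flags are already available; the function name, modality keys, and toy numbers are hypothetical and not taken from the paper or its codebase.

```python
def modality_gap(results):
    """Compute per-modality accuracy and the text vs. image+text gap.

    results: dict mapping a modality name to a list of 0/1 correctness
    flags for the SAME underlying problems, so any accuracy difference
    reflects the input format rather than problem difficulty.
    """
    acc = {m: sum(flags) / len(flags) for m, flags in results.items()}
    # Positive gap means adding the image hurt relative to text alone.
    gap = acc["text"] - acc["image+text"]
    return acc, gap

# Toy example (illustrative numbers only): 5 problems per modality.
acc, gap = modality_gap({
    "text":       [1, 1, 1, 0, 1],
    "image":      [1, 0, 0, 0, 1],
    "image+text": [1, 1, 0, 0, 1],
})
```

Because each list covers identical problems, the gap isolates the effect of modality, mirroring the controlled alignment CrossMath enforces via human verification.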