Chain-of-Thought Degrades Visual Spatial Reasoning Capabilities of Multimodal LLMs

arXiv cs.CV / 4/20/2026


Key Points

  • The study finds that chain-of-thought (CoT) prompting harms multimodal reasoning models’ performance on generalized visual spatial reasoning tasks.
  • By evaluating seventeen models across thirteen spatial benchmarks, the authors identify a consistent performance degradation specifically tied to CoT prompting.
  • A No-Image++ ablation shows the models engage in severe shortcut learning and produce hallucinated visual details derived from textual priors even when images are removed.
  • The results challenge the effectiveness of text-only CoT approaches for spatial reasoning and argue for vision-centric reasoning paradigms.
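The comparison described above can be sketched as a small evaluation harness. This is a hypothetical illustration, not the authors' code: `model_fn`, the condition names, and the dataset shape are all assumptions. The key idea is that the No-Image++ condition withholds the image entirely, so any correct answer must come from textual priors rather than visual grounding.

```python
def accuracy(predictions, answers):
    """Fraction of predictions that exactly match the reference answers."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Three conditions mirroring the paper's setup (names hypothetical):
# "direct"   - answer without chain-of-thought
# "cot"      - chain-of-thought prompting
# "no_image" - No-Image++-style ablation: the image is withheld
CONDITIONS = ["direct", "cot", "no_image"]

def evaluate(model_fn, dataset):
    """Score one model under each prompting condition.

    model_fn(question, image, condition) -> predicted answer string;
    dataset is a list of (question, image, answer) triples.
    Returns a dict mapping condition name to accuracy.
    """
    results = {}
    for cond in CONDITIONS:
        preds = [model_fn(q, img if cond != "no_image" else None, cond)
                 for q, img, _ in dataset]
        results[cond] = accuracy(preds, [a for _, _, a in dataset])
    return results
```

Under this framing, the paper's finding corresponds to `results["cot"] < results["direct"]` on spatial benchmarks, and an unexpectedly high `results["no_image"]` would signal shortcut learning from textual priors.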

Abstract

Multimodal Reasoning Models (MRMs) leveraging Chain-of-Thought (CoT) based thinking have revolutionized mathematical and logical problem-solving. However, we show that this paradigm struggles with generalized spatial intelligence. We perform a comprehensive evaluation of seventeen models across thirteen spatial benchmarks and identify a critical gap: CoT prompting consistently degrades performance in visual spatial reasoning. Furthermore, through a novel No-Image++ ablation, we demonstrate that MRMs and CoT-prompted MLLMs suffer from severe shortcut learning, hallucinating visual details from textual priors even when the image is absent. These findings challenge the efficacy of text-only CoT for spatial tasks and underscore the need for vision-centric reasoning paradigms.