Multimodal Language Models Cannot Spot Spatial Inconsistencies

arXiv cs.CV / 4/2/2026

Key Points

  • The paper argues that multimodal large language models (MLLMs) remain weak at detecting 3D geometric and spatial inconsistencies across multiple views of the same scene.
  • It introduces a new, harder evaluation task: identifying which object violates 3D motion consistency when given two views.
  • The authors propose a scalable way to generate realistic, spatially inconsistent image pairs from multi-view scenes, enabling systematic testing (see the sketch after this list).
  • Experiments show that state-of-the-art MLLMs lag well behind human observers, with accuracy varying widely across scene attributes.
  • The findings suggest MLLMs have a fragile and incomplete grasp of 3D structure, motivating more physically grounded approaches.
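
As a rough illustration of the kind of pair this enables, the sketch below (not the paper's code) perturbs one object's 3D position in the second view only, so its projection no longer agrees with the calibrated two-view geometry. The camera intrinsics, poses, and object location are illustrative assumptions.

```python
# A minimal sketch, assuming calibrated cameras: displace one object's
# 3D position in view 2 only, so its projected location violates the
# scene's multi-view geometry. Not the authors' actual pipeline.
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

rng = np.random.default_rng(0)

# Hypothetical shared intrinsics and two camera poses.
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # view 1 at the origin
yaw = 0.3                                              # view 2: small yaw + shift
R2 = np.array([[np.cos(yaw), 0., np.sin(yaw)],
               [0., 1., 0.],
               [-np.sin(yaw), 0., np.cos(yaw)]])
P2 = K @ np.hstack([R2, np.array([[-0.5], [0.], [0.]])])

obj_center = np.array([0.2, -0.1, 4.0])     # an object somewhere in the scene

consistent = project(P2, obj_center)        # where it should appear in view 2
offset = rng.normal(scale=0.3, size=3)      # 3D perturbation applied to view 2 only
inconsistent = project(P2, obj_center + offset)

print("consistent view-2 pixel:  ", consistent)
print("inconsistent view-2 pixel:", inconsistent)
```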

Abstract

Spatial consistency is a fundamental property of the visual world and a key requirement for models that aim to understand physical reality. Despite recent advances, multimodal large language models (MLLMs) often struggle to reason about 3D geometry across multiple views. Rather than asking models to describe scene attributes, we introduce a more challenging task: given two views of the same scene, identify the object that violates 3D motion consistency. We propose a simple and scalable method for generating realistic, spatially inconsistent image pairs from multi-view scenes, enabling systematic evaluation of this capability. Our results show that state-of-the-art MLLMs significantly underperform human observers and exhibit substantial variability across different scene attributes, revealing a fragile and incomplete understanding of 3D structure. We hope our findings underscore the need for approaches that develop a more deeply grounded understanding of the physical world.
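
For concreteness, here is a minimal sketch of the evaluation loop the abstract implies: show a model both views, ask it to name the object that violates 3D consistency, and score accuracy against the known perturbation. `query_mllm`, the `Example` fields, and the prompt wording are hypothetical stand-ins, not the authors' protocol.

```python
# A hedged sketch of the implied evaluation loop; `query_mllm` is a
# placeholder for any multimodal API, and the dataset schema is assumed.
from dataclasses import dataclass

@dataclass
class Example:
    view_a: str            # path to the first view
    view_b: str            # path to the second view, with one object displaced
    objects: list[str]     # candidate object names shown to the model
    answer: str            # which object was displaced

def query_mllm(images: list[str], prompt: str) -> str:
    """Hypothetical MLLM call; replace with a real multimodal API."""
    raise NotImplementedError

def evaluate(examples: list[Example]) -> float:
    correct = 0
    for ex in examples:
        prompt = (
            "These two images show the same scene from different viewpoints. "
            f"Exactly one of these objects ({', '.join(ex.objects)}) has moved "
            "in a way that is inconsistent with the 3D geometry. Name it."
        )
        reply = query_mllm([ex.view_a, ex.view_b], prompt)
        correct += ex.answer.lower() in reply.lower()  # lenient string match
    return correct / len(examples)
```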
