
VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations

arXiv cs.CV / 3/18/2026


Key Points

  • They introduce VIEW2SPACE, a benchmark for sparse multi-view reasoning built on diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that transfers to real-world settings (a hypothetical record layout is sketched after this list).
  • The study shows that current vision-language and spatial models perform only marginally above random guessing on multi-view reasoning tasks, indicating the problem remains largely unsolved.
  • The authors propose Grounded Chain-of-Thought with Visual Evidence, which substantially improves performance at moderate difficulty and generalizes better across datasets than existing approaches.
  • Through difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints, they find geometric perception benefits from scaling under good visibility, but deep compositional reasoning across sparse views remains a fundamental challenge.
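
To make the benchmark's structure concrete, here is a minimal sketch of what a single sparse multi-view question-answer record could look like, assuming per-view camera metadata as described above. All field names (`views`, `camera_pose`, `difficulty`, and so on) are illustrative assumptions, not VIEW2SPACE's published schema.

```python
# Hypothetical record layout for one sparse multi-view QA sample.
# Every field name here is an assumption for illustration; the paper's
# actual data format is not reproduced in this summary.
from dataclasses import dataclass

@dataclass
class ViewObservation:
    image_path: str                # rendered RGB frame from the simulator
    camera_pose: list[float]       # 4x4 world-to-camera extrinsics, flattened (assumed)
    intrinsics: list[float]        # fx, fy, cx, cy (assumed)
    visible_object_ids: list[str]  # per-view semantic metadata (assumed)

@dataclass
class MultiViewSample:
    scene_id: str
    views: list[ViewObservation]   # sparse, discrete viewpoints (e.g., 2-4)
    question: str                  # grounded spatial question
    choices: list[str]             # multiple-choice options
    answer_index: int              # index of the correct choice
    difficulty: str                # e.g., "easy" / "moderate" / "hard" (assumed)
```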

Abstract

Multi-view visual reasoning is essential for intelligent systems that must understand complex environments from sparse and discrete viewpoints, yet existing research has largely focused on single-image or temporally dense video settings. In real-world scenarios, reasoning across views requires integrating partial observations without explicit guidance, while collecting large-scale multi-view data with accurate geometric and semantic annotations remains challenging. To address this gap, we leverage physically grounded simulation to construct diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that remains transferable to real-world settings. Building on this simulation engine, we introduce VIEW2SPACE, a multi-dimensional benchmark for sparse multi-view reasoning, together with a scalable, disjoint training split supporting millions of grounded question-answer pairs. Using this benchmark, a comprehensive evaluation of state-of-the-art vision-language and spatial models reveals that multi-view reasoning remains largely unsolved, with most models performing only marginally above random guessing. We further investigate whether training can bridge this gap. Our proposed Grounded Chain-of-Thought with Visual Evidence substantially improves performance under moderate difficulty and generalizes to real-world data, outperforming existing approaches in cross-dataset evaluation. Finally, we conduct difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints, indicating that while geometric perception can benefit from scaling under sufficient visibility, deep compositional reasoning across sparse views remains a fundamental challenge.
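
The abstract names Grounded Chain-of-Thought with Visual Evidence but does not spell out the protocol, so the following is only a plausible sketch: the model is prompted to cite, at each reasoning step, the view that supports it, and a harness checks both the final answer and whether the cited evidence is well-formed. The prompt wording, the `[view k]` evidence format, and both helper functions are assumptions, not the paper's method.

```python
# Minimal, assumed harness for evidence-grounded chain-of-thought answers.
import re

def build_prompt(question: str, choices: list[str], num_views: int) -> str:
    """Assemble a multiple-choice prompt that asks for per-step view citations."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    return (
        f"You are shown {num_views} views of the same scene.\n"
        f"Question: {question}\n{options}\n"
        "Reason step by step, citing the view that supports each step as "
        "[view k]. Finish with 'Answer: (X)'."
    )

def score_response(response: str, answer_index: int, num_views: int) -> dict:
    """Check the final choice and whether the cited views are valid indices."""
    match = re.search(r"Answer:\s*\(([A-Z])\)", response)
    predicted = ord(match.group(1)) - ord("A") if match else -1
    cited = {int(k) for k in re.findall(r"\[view (\d+)\]", response)}
    return {
        "correct": predicted == answer_index,
        "grounded": bool(cited) and all(1 <= k <= num_views for k in cited),
    }
```

Separating answer accuracy from evidence well-formedness is one way a cross-dataset evaluation like the one described above could distinguish genuine grounding from lucky guesses.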