MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Models

arXiv cs.CV, March 20, 2026

Key Points

  • MultihopSpatial introduces a benchmark for multi-hop and compositional spatial reasoning in vision-language models, covering 1- to 3-hop queries across diverse spatial perspectives (an illustrative query encoding follows this list).
  • It defines Acc@50IoU, a joint metric requiring correct answer selection and precise bounding-box grounding to reflect real-world VLA performance.
  • A dedicated MultihopSpatial-Train corpus is released to support large-scale training for spatial intelligence in VLMs.
  • Experiments on 37 state-of-the-art VLMs reveal that compositional spatial reasoning remains challenging, but reinforcement learning post-training on the corpus improves both intrinsic spatial reasoning and downstream embodied manipulation performance.
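For intuition, a multi-hop query chains several spatial relations into a single question that the model must resolve step by step. The sketch below shows one hypothetical way a 2-hop item could be encoded; the field names and schema are illustrative assumptions, not the benchmark's released format.

```python
# Hypothetical encoding of a 2-hop compositional spatial query.
# Illustrative only: these field names are NOT the benchmark's actual schema.
sample = {
    "image": "kitchen_042.jpg",
    "hops": 2,
    "question": "What object is to the left of the mug that is on the table?",
    "reasoning_chain": [
        {"relation": "on", "anchor": "table", "target": "mug"},  # hop 1: locate the mug on the table
        {"relation": "left_of", "anchor": "mug"},                # hop 2: find what lies to its left
    ],
    "choices": ["kettle", "plate", "knife", "bowl"],
    "answer": "kettle",
    "answer_box": [112, 85, 190, 160],  # (x1, y1, x2, y2) in pixels
}
```

In this encoding, each added hop requires grounding an intermediate object before the final relation can be resolved, which is what distinguishes 3-hop queries from the single-hop relations covered by earlier benchmarks.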

Abstract

Spatial reasoning is foundational for Vision-Language Models (VLMs), particularly when deployed as Vision-Language-Action (VLA) agents in physical environments. However, existing benchmarks predominantly focus on elementary, single-hop relations, neglecting the multi-hop compositional reasoning and precise visual grounding essential for real-world scenarios. To address this, we introduce MultihopSpatial, offering three key contributions: (1) A comprehensive benchmark designed for multi-hop and compositional spatial reasoning, featuring 1- to 3-hop complex queries across diverse spatial perspectives. (2) Acc@50IoU, a complementary metric that simultaneously evaluates reasoning and visual grounding by requiring both answer selection and precise bounding box prediction, capabilities vital for robust VLA deployment. (3) MultihopSpatial-Train, a dedicated large-scale training corpus to foster spatial intelligence. Extensive evaluation of 37 state-of-the-art VLMs yields eight key insights, revealing that compositional spatial reasoning remains a formidable challenge. Finally, we demonstrate that reinforcement learning post-training on our corpus enhances both intrinsic VLM spatial reasoning and downstream embodied manipulation performance.
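The "Acc@50IoU" name suggests a prediction is credited only when the chosen answer is correct and the predicted box overlaps the ground truth at IoU ≥ 0.5. Below is a minimal Python sketch of such a joint metric, assuming axis-aligned (x1, y1, x2, y2) boxes and exact answer matching; the paper's exact evaluation protocol may differ.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def acc_at_50_iou(predictions, ground_truth):
    """Joint metric sketch: a sample scores only if the answer matches
    AND the predicted box reaches IoU >= 0.5 with the gold box."""
    hits = sum(
        int(pred["answer"] == gold["answer"]
            and iou(pred["box"], gold["box"]) >= 0.5)
        for pred, gold in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)
```

Coupling the two conditions means a model cannot score by guessing the right answer without actually localizing the object, which is the capability the authors argue matters for robust VLA deployment.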