ESPIRE: A Diagnostic Benchmark for Embodied Spatial Reasoning of Vision-Language Models

arXiv cs.CV / 3/16/2026

Key Points

  • ESPIRE is introduced as a new diagnostic benchmark for embodied spatial reasoning in vision-language models (VLMs).
  • It provides a simulated world that physically grounds VLMs and evaluates them on spatial-reasoning-centric robotic tasks, linking evaluation to real-world deployment.
  • Tasks are decomposed into localization and execution and framed as generative problems, contrasting with traditional discriminative VQA approaches.
  • The benchmark enables fine-grained analysis from passive spatial reasoning to action-oriented reasoning, with coverage at both the instruction and environment levels.
  • The work uses ESPIRE to diagnose frontier VLMs and offers in-depth analysis of their spatial reasoning behaviors.

Abstract

A recent trend in vision-language models (VLMs) has been to enhance their spatial cognition for embodied domains. Despite progress, existing evaluations have been limited both in paradigm and in coverage, hindering rapid, iterative model development. To address these limitations, we propose ESPIRE, a diagnostic benchmark for embodied spatial reasoning. ESPIRE offers a simulated world that physically grounds VLMs and evaluates them on spatial-reasoning-centric robotic tasks, thus narrowing the gap between evaluation and real-world deployment. To adapt VLMs to robotic tasks, we decompose each task into localization and execution, and frame both as generative problems, in stark contrast to predominant discriminative evaluations (e.g., via visual question answering) that rely on distractors and discard execution. This decomposition further enables a fine-grained analysis beyond passive spatial reasoning toward reasoning to act. We systematically design ESPIRE both at the instruction level and at the environment level, ensuring broad coverage of spatial reasoning scenarios. We use ESPIRE to diagnose a range of frontier VLMs and provide in-depth analysis of their spatial reasoning behaviors.
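
To make the generative, two-stage framing concrete, the Python sketch below shows what a localization-then-execution evaluation loop could look like, as opposed to picking among pre-written answer options in a discriminative VQA setup. It is purely illustrative: the `EspireTask` schema, the IoU threshold, and the stand-in `localize`/`plan` functions are assumptions for this example, not the paper's actual protocol.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

@dataclass
class EspireTask:
    """Hypothetical task record in the spirit of ESPIRE (illustrative, not the official schema)."""
    instruction: str                              # e.g. "place the mug to the left of the plate"
    target_box: Box                               # ground truth for the localization stage
    goal_satisfied: Callable[[List[str]], bool]   # simulator stand-in scoring the execution stage

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union between two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def evaluate_generative(localize, plan, task: EspireTask) -> dict:
    """Two-stage generative scoring: the model must produce a region and an action
    sequence itself, rather than select one of several distractor options."""
    predicted_box = localize(task.instruction)        # stage 1: localization
    actions = plan(task.instruction, predicted_box)   # stage 2: execution
    return {
        "localization_ok": iou(predicted_box, task.target_box) >= 0.5,
        "execution_ok": task.goal_satisfied(actions),
    }

# Toy usage with stand-in model functions.
task = EspireTask(
    instruction="place the mug to the left of the plate",
    target_box=(0.10, 0.40, 0.25, 0.55),
    goal_satisfied=lambda acts: acts[-1] == "release",
)
result = evaluate_generative(
    localize=lambda _: (0.12, 0.42, 0.24, 0.56),
    plan=lambda *_: ["pick", "move_left", "release"],
    task=task,
)
print(result)  # {'localization_ok': True, 'execution_ok': True}
```

Because the model generates both the region and the action sequence, there are no distractor options to guess among, and the execution stage is graded by whether the (here, toy) goal predicate holds after rollout. This mirrors the separation that lets the benchmark report localization and execution failures independently.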