Exploring Spatial Intelligence from a Generative Perspective

arXiv cs.CV · April 23, 2026

Key Points

  • The paper investigates whether modern generative/unified multimodal models have generative spatial intelligence (GSI), meaning they can honor and manipulate 3D spatial constraints during image generation.
  • It proposes GSI-Bench, the first benchmark to measure GSI via spatially grounded image editing, combining a real-world dataset (GSI-Real) and a synthetic dataset (GSI-Syn).
  • GSI-Real is created using a 3D-prior-guided generation and filtering pipeline, while GSI-Syn offers controllable spatial operations with automated labeling.
  • The authors introduce a unified evaluation protocol to enable scalable, model-agnostic assessment of spatial compliance and image-editing fidelity.
  • Experiments show that fine-tuning unified multimodal models on GSI-Syn improves performance on both synthetic and real tasks and can even enhance downstream spatial understanding, indicating generative training can strengthen spatial reasoning.
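To make the evaluation protocol concrete, here is a minimal sketch of how a model-agnostic benchmark might aggregate the two axes the summary names, spatial compliance and editing fidelity, into a single score. The `EditResult` structure, the equal default weights, and the `gsi_score` function are illustrative assumptions, not the paper's actual metrics or formula.

```python
from dataclasses import dataclass

@dataclass
class EditResult:
    spatial_compliance: float  # in [0, 1]: does the edit respect the 3D constraint?
    editing_fidelity: float    # in [0, 1]: is the rest of the image preserved?

def gsi_score(results, w_compliance=0.5, w_fidelity=0.5):
    """Aggregate per-example scores into one benchmark number.

    A weighted mean over both axes; the real protocol's metrics and
    weighting are defined in the paper, not here.
    """
    if not results:
        return 0.0
    total = sum(
        w_compliance * r.spatial_compliance + w_fidelity * r.editing_fidelity
        for r in results
    )
    return total / (len(results) * (w_compliance + w_fidelity))

# Hypothetical scores for three edited images from two models:
model_a = [EditResult(0.9, 0.80), EditResult(0.7, 0.90), EditResult(0.8, 0.85)]
model_b = [EditResult(0.5, 0.95), EditResult(0.4, 0.90), EditResult(0.6, 0.92)]
print(gsi_score(model_a) > gsi_score(model_b))  # True: model A complies better
```

Scoring both axes jointly matters because a model can trivially maximize fidelity by leaving the image untouched; a combined score penalizes that degenerate behavior.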

Abstract

Spatial intelligence is essential for multimodal large language models, yet current benchmarks largely assess it only from an understanding perspective. We ask whether modern generative or unified multimodal models also possess generative spatial intelligence (GSI), the ability to respect and manipulate 3D spatial constraints during image generation, and whether such capability can be measured or improved. We introduce GSI-Bench, the first benchmark designed to quantify GSI through spatially grounded image editing. It consists of two complementary components: GSI-Real, a high-quality real-world dataset built via a 3D-prior-guided generation and filtering pipeline, and GSI-Syn, a large-scale synthetic benchmark with controllable spatial operations and fully automated labeling. Together with a unified evaluation protocol, GSI-Bench enables scalable, model-agnostic assessment of spatial compliance and editing fidelity. Experiments show that fine-tuning unified multimodal models on GSI-Syn yields substantial gains on both synthetic and real tasks and, strikingly, also improves downstream spatial understanding. This provides the first clear evidence that generative training can tangibly strengthen spatial reasoning, establishing a new pathway for advancing spatial intelligence in multimodal models.