Unleashing Spatial Reasoning in Multimodal Large Language Models via Textual Representation Guided Reasoning

arXiv cs.CL / 3/25/2026


Key Points

  • The paper argues that current multimodal large language models (MLLMs) often underperform on 3D spatial reasoning because they do not build structured abstractions of 3D scenes from video inputs.
  • It proposes TRACE, a prompting method that generates textual intermediate representations of allocentric (world-centered) 3D context from egocentric video, including camera trajectories, meta-context, and object entities.
  • The method guides MLLMs to reason over these text-based spatial traces, leading to more accurate answers to spatial questions.
  • Experiments on VSI-Bench and OST-Bench show consistent improvements over prior prompting approaches across multiple MLLM backbones and training/scale setups.
  • Ablation and bottleneck analyses are included to validate the design choices and clarify where 3D spatial reasoning limitations arise in MLLMs.
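To make the idea of a textual intermediate representation concrete, here is a minimal sketch of what a TRACE-style prompt might look like. The field names, coordinate format, and helper function are purely illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch of a TRACE-style textual scene representation.
# All field names and the layout below are illustrative assumptions;
# the paper's actual prompt format may differ.

def build_trace_prompt(meta_context, camera_trajectory, objects, question):
    """Compose an allocentric text representation of the scene as an
    intermediate reasoning trace, then append the spatial question."""
    lines = ["[Meta-context]", meta_context, "", "[Camera trajectory]"]
    # Camera poses in world coordinates, one line per sampled frame.
    for t, (x, y, yaw) in enumerate(camera_trajectory):
        lines.append(f"t={t}: position=({x:.1f}, {y:.1f}), yaw={yaw} deg")
    lines.append("")
    lines.append("[Object entities]")
    # Object positions expressed in the same world-centered frame.
    for name, (ox, oy) in objects.items():
        lines.append(f"- {name}: world position ({ox:.1f}, {oy:.1f})")
    lines.append("")
    lines.append(f"[Question] {question}")
    return "\n".join(lines)

prompt = build_trace_prompt(
    meta_context="Indoor apartment scan, single room, roughly 4m x 5m.",
    camera_trajectory=[(0.0, 0.0, 0), (1.0, 0.5, 45), (2.0, 1.5, 90)],
    objects={"sofa": (3.0, 1.0), "lamp": (0.5, 4.0)},
    question="Is the lamp to the left of the sofa from the final viewpoint?",
)
print(prompt)
```

The point of such a representation is that relative directions and distances become explicit strings the model can attend to, rather than implicit geometry it must recover from pixels.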

Abstract

Existing Multimodal Large Language Models (MLLMs) struggle with 3D spatial reasoning, as they fail to construct structured abstractions of the 3D environment depicted in video inputs. To bridge this gap, drawing inspiration from cognitive theories of allocentric spatial reasoning, we investigate how to enable MLLMs to model and reason over text-based spatial representations of video. Specifically, we introduce Textual Representation of Allocentric Context from Egocentric Video (TRACE), a prompting method that induces MLLMs to generate text-based representations of 3D environments as intermediate reasoning traces for more accurate spatial question answering. TRACE encodes meta-context, camera trajectories, and detailed object entities to support structured spatial reasoning over egocentric videos. Extensive experiments on VSI-Bench and OST-Bench demonstrate that TRACE yields notable and consistent improvements over prior prompting strategies across a diverse range of MLLM backbones, spanning different parameter scales and training schemas. We further present ablation studies to validate our design choices, along with detailed analyses that probe the bottlenecks of 3D spatial reasoning in MLLMs.