LAST: Leveraging Tools as Hints to Enhance Spatial Reasoning for Multimodal Large Language Models

arXiv cs.CV / 4/14/2026


Key Points

  • The paper argues that multimodal LLMs often mishandle complex geometric layouts due to hallucinations and imprecision, and suggests using specialized vision tools to supply structured spatial priors.
  • It introduces LAST, a tool-augmented spatial reasoning framework that wraps heterogeneous, parameter-rich tool calls into atomic instructions and reusable “spatial skills.”
  • LAST uses an extensible interactive sandbox (LAST-Box) that converts low-level tool outputs (e.g., segmentation masks, depth maps) into LLM-consumable multimodal hints such as annotated images and textual descriptions.
  • A three-stage progressive training strategy is proposed to help models learn to interpret tool outputs and then become proficient at invoking tools adaptively.
  • Experiments across four datasets report that LAST-7B delivers roughly 20% gains over its backbone and outperforms strong proprietary LLMs on complex spatial reasoning tasks.
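
The sandbox-and-hints idea above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's implementation: all names (`ToolSandbox`, `fake_depth_tool`, `estimate_depth`) are hypothetical, and a stub stands in for a real depth estimator. It shows the general pattern of registering tools as atomic instructions and post-processing their low-level outputs into textual hints an LLM can consume.

```python
# Hypothetical sketch of a LAST-Box-style sandbox (names are illustrative,
# not from the paper): atomic instructions wrap heterogeneous tool calls,
# and raw outputs are rendered as LLM-consumable textual spatial hints.
from typing import Callable, Dict


def fake_depth_tool(image: str) -> Dict[str, float]:
    # Stand-in for a real monocular depth estimator: returns a mean depth
    # (in meters) per named region instead of a dense depth map.
    return {"chair": 1.2, "table": 2.8}


class ToolSandbox:
    """Registers tools as atomic instructions and converts their
    low-level outputs into textual hints."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, **kwargs) -> str:
        raw = self._tools[name](**kwargs)
        # Post-process the low-level output into a spatial hint:
        # here, an ordering of objects by estimated distance.
        ranked = sorted(raw.items(), key=lambda kv: kv[1])
        order = " < ".join(f"{obj} ({d:.1f} m)" for obj, d in ranked)
        return f"Depth order (nearest first): {order}"


sandbox = ToolSandbox()
sandbox.register("estimate_depth", fake_depth_tool)
print(sandbox.invoke("estimate_depth", image="scene.png"))
# prints: Depth order (nearest first): chair (1.2 m) < table (2.8 m)
```

In the paper's fuller design, such hints would also include annotated images (e.g., masks overlaid on the input), and the model is trained over three stages to first interpret these hints and then invoke the tools adaptively.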

Abstract

Spatial reasoning is a cornerstone capability for intelligent systems to perceive and interact with the physical world. However, multimodal large language models (MLLMs) frequently suffer from hallucinations and imprecision when parsing complex geometric layouts. As data-driven scaling struggles to internalize structured geometric priors and spatial constraints, integrating mature, specialized vision models presents a compelling alternative. Despite its promise, applying this paradigm to spatial reasoning is hindered by two key challenges: the difficulty of invoking heterogeneous, parameter-rich tools, and the challenge of understanding and effectively leveraging their diverse low-level outputs (e.g., segmentation masks, depth maps) in high-level reasoning. To address these challenges, we propose LAST, a unified framework for tool-augmented spatial reasoning. LAST features an extensible interactive sandbox, termed LAST-Box, which abstracts heterogeneous tool invocations into atomic instructions and reusable spatial skills, returning multimodal hints (e.g., annotated images and textual descriptions) that can be directly consumed by LLMs. We further design a three-stage progressive training strategy that guides models from understanding tool outputs to proficient and adaptive tool invocation. Experiments on four datasets show that LAST-7B achieves around 20% performance gains over its backbone and outperforms strong proprietary closed-source LLMs, substantially enhancing reasoning on complex spatial tasks.