Beyond Medical Diagnostics: How Medical Multimodal Large Language Models Think in Space

arXiv cs.CV / 3/17/2026

Key Points

  • SpatialMed is introduced as the first comprehensive benchmark for evaluating 3D spatial intelligence in medical multimodal LLMs, comprising nearly 10K question-answer pairs across multiple organs and tumor types.
  • The authors propose an agentic pipeline that autonomously synthesizes spatial VQA data by orchestrating computational tools such as volume and distance calculators with multi-agent collaboration and expert radiologist validation; a sketch of such tools follows this list.
  • Evaluations across 14 state-of-the-art medical MLLMs reveal that current models lack robust 3D spatial reasoning capabilities for medical imaging.
  • The work highlights a critical gap in 3D spatial reasoning and underscores the need for new datasets and evaluation methods to drive progress in medical AI.
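
To make the "computational tools" concrete: the sketch below is a minimal, hypothetical version of two such tools, a volume calculator and a centroid-distance calculator over binary 3D segmentation masks. The function names, inputs, and voxel-spacing convention are illustrative assumptions, not the authors' actual implementation or API.

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary lesion mask in millilitres (voxel count x voxel volume)."""
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

def centroid_distance_mm(mask_a: np.ndarray, mask_b: np.ndarray,
                         spacing_mm: tuple[float, float, float]) -> float:
    """Euclidean distance between the centroids of two structures, in mm."""
    centroid_a = np.argwhere(mask_a).mean(axis=0) * np.asarray(spacing_mm)
    centroid_b = np.argwhere(mask_b).mean(axis=0) * np.asarray(spacing_mm)
    return float(np.linalg.norm(centroid_a - centroid_b))

# Toy example: a 64^3 grid with 1.0 x 0.8 x 0.8 mm voxels and two box-shaped structures.
spacing = (1.0, 0.8, 0.8)
tumor = np.zeros((64, 64, 64), dtype=bool)
tumor[20:30, 20:30, 20:30] = True
organ = np.zeros_like(tumor)
organ[40:60, 40:60, 40:60] = True
print(f"tumor volume: {lesion_volume_ml(tumor, spacing):.2f} mL")
print(f"centroid distance: {centroid_distance_mm(tumor, organ, spacing):.1f} mm")
```

Outputs of tools like these can serve as ground-truth answers for synthesized spatial questions (e.g., "What is the tumor's volume?"), which is what makes automated QA generation with downstream radiologist validation tractable.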

Abstract

Visual spatial intelligence is critical for medical image interpretation, yet remains largely unexplored in Multimodal Large Language Models (MLLMs) for 3D imaging. This gap persists due to a systemic lack of datasets featuring structured 3D spatial annotations beyond basic labels. In this study, we introduce an agentic pipeline that autonomously synthesizes spatial visual question-answering (VQA) data by orchestrating computational tools such as volume and distance calculators with multi-agent collaboration and expert radiologist validation. We present SpatialMed, the first comprehensive benchmark for evaluating 3D spatial intelligence in medical MLLMs, comprising nearly 10K question-answer pairs across multiple organs and tumor types. Our evaluations on 14 state-of-the-art MLLMs and extensive analyses reveal that current models lack robust spatial reasoning capabilities for medical imaging.
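
As context for what evaluating 14 MLLMs on roughly 10K QA pairs involves mechanically, here is a minimal, hypothetical scoring harness for a SpatialMed-style benchmark. The `SpatialVQAItem` schema, the `model_answer` callable, and the exact-match scoring are all illustrative assumptions; the paper's actual evaluation protocol and answer formats may differ.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpatialVQAItem:
    volume_path: str  # path to the 3D scan (e.g., a NIfTI file) -- assumed format
    question: str     # spatial question, e.g. "Which lobe contains the tumor?"
    answer: str       # expert-validated gold answer

def evaluate(items: list[SpatialVQAItem],
             model_answer: Callable[[str, str], str]) -> float:
    """Exact-match accuracy of a model's answers over spatial VQA items.

    `model_answer` is any callable mapping (volume_path, question) to a
    free-text prediction; exact matching is a simplification for illustration.
    """
    correct = sum(
        model_answer(item.volume_path, item.question).strip().lower()
        == item.answer.strip().lower()
        for item in items
    )
    return correct / len(items)
```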