Beyond a Single Frame: Multi-Frame Spatially Grounded Reasoning Across Volumetric MRI

arXiv cs.CV / 4/20/2026


Key Points

  • The paper argues that medical vision-language models (VLMs) often lack transparent, spatially grounded reasoning, and existing benchmarks typically rely on single 2D images instead of volumetric clinical scans.
  • It introduces SGMRI-VQA, a new 41,307-question benchmark for multi-frame, spatially grounded reasoning on volumetric MRI, built from expert radiologist annotations in the fastMRI+ dataset (brain and knee).
  • Each QA example includes clinician-aligned reasoning traces and frame-indexed bounding box coordinates, with tasks organized hierarchically across detection, localization, counting/classification, and captioning (a sketch of what such a record might look like follows this list).
  • Experiments on 10 VLMs show that supervised fine-tuning of Qwen3-VL-8B with bounding-box supervision consistently improves grounding performance over strong zero-shot baselines, suggesting that targeted spatial supervision yields more grounded clinical reasoning.
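
The paper's exact record format isn't reproduced in this summary, but a minimal sketch of how a frame-indexed, spatially grounded QA pair could be laid out is shown below. Every field name and value is hypothetical, chosen only to illustrate how a clinician-aligned reasoning trace and per-frame bounding boxes might sit together in one example; the actual SGMRI-VQA schema may differ.

```python
# Hypothetical sketch of a spatially grounded, multi-frame QA record.
# All field names and values are illustrative, not the paper's schema.
qa_record = {
    "study": "fastmri_brain_0001",   # hypothetical study identifier
    "task": "localization",          # detection | localization | counting/classification | captioning
    "question": "Where is the lesion, and across which frames does it extend?",
    "reasoning_trace": [             # clinician-aligned chain-of-thought steps
        "Scan the volume for abnormal signal.",
        "A hyperintense focus appears on frames 12-14.",
        "Bound the focus on each frame where it is visible.",
    ],
    "groundings": [                  # frame-indexed boxes in (x1, y1, x2, y2) pixel coordinates
        {"frame": 12, "bbox": [118, 96, 152, 130]},
        {"frame": 13, "bbox": [115, 94, 155, 133]},
        {"frame": 14, "bbox": [120, 99, 149, 127]},
    ],
    "answer": "A lesion in the left frontal region, spanning frames 12-14.",
}
```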

Abstract

Spatial reasoning and visual grounding are core capabilities for vision-language models (VLMs), yet most medical VLMs produce predictions without transparent reasoning or spatial evidence. Existing benchmarks also evaluate VLMs on isolated 2D images, overlooking the volumetric nature of clinical imaging, where findings can span multiple frames or appear on only a few slices. We introduce Spatially Grounded MRI Visual Question Answering (SGMRI-VQA), a 41,307-pair benchmark for multi-frame, spatially grounded reasoning on volumetric MRI. Built from expert radiologist annotations in the fastMRI+ dataset across brain and knee studies, each QA pair includes a clinician-aligned chain-of-thought trace with frame-indexed bounding box coordinates. Tasks are organized hierarchically across detection, localization, counting/classification, and captioning, requiring models to jointly reason about what is present, where it is, and across which frames it extends. We benchmark 10 VLMs and show that supervised fine-tuning of Qwen3-VL-8B with bounding box supervision consistently improves grounding performance over strong zero-shot baselines, indicating that targeted spatial supervision is an effective path toward grounded clinical reasoning.
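
The abstract doesn't spell out how grounding performance is scored. One plausible reading, given that boxes are frame-indexed, is a frame-matched intersection-over-union (IoU) criterion. The sketch below is an assumption rather than the paper's metric: it counts a reference box as hit when a predicted box on the same frame overlaps it with IoU at or above a threshold, using the record format sketched above.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def frame_grounding_accuracy(preds, refs, thresh=0.5):
    """Fraction of reference (frame, box) pairs matched by some prediction
    on the same frame with IoU >= thresh. Both `preds` and `refs` are lists
    of {"frame": int, "bbox": [x1, y1, x2, y2]} dicts, as sketched earlier.
    This is an assumed metric, not necessarily the one used in the paper."""
    hits = 0
    for ref in refs:
        same_frame = [p["bbox"] for p in preds if p["frame"] == ref["frame"]]
        if any(iou(ref["bbox"], p) >= thresh for p in same_frame):
            hits += 1
    return hits / len(refs) if refs else 0.0


# Example usage against the hypothetical record above:
#   acc = frame_grounding_accuracy(model_boxes, qa_record["groundings"])
```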