AI Navigate

GAP-MLLM: Geometry-Aligned Pre-training for Activating 3D Spatial Perception in Multimodal Large Language Models

arXiv cs.CV / 3/18/2026


Key Points

  • GAP-MLLM proposes Geometry-Aligned Pre-training to activate 3D geometric representations in multimodal LLMs, addressing limitations of image-only inputs.
  • The authors argue the remaining gap in 3D perception is due to misalignment in the training paradigm, not a lack of geometric priors.
  • It introduces a visual-prompted joint task forcing MLLMs to predict sparse pointmaps alongside semantic labels to enforce geometric awareness.
  • It includes a multi-level progressive fusion module with token-level gating to adaptively fuse geometric priors without suppressing semantic reasoning.
  • Experiments show improved geometric feature fusion and performance gains across 3D visual grounding, 3D dense captioning, and 3D video object detection.
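The token-level gating idea in the fusion module can be illustrated with a small sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the gate parameterization (a sigmoid over concatenated semantic and geometric token features) and the residual mixing form are assumed.

```python
import numpy as np

def token_level_gated_fusion(sem_tokens, geo_tokens, W_gate, b_gate):
    """Adaptively mix geometric features into semantic tokens, per token.

    Hypothetical sketch of a token-level gating mechanism; the exact
    parameterization in GAP-MLLM is not specified here and is assumed.
    sem_tokens, geo_tokens: (T, D) arrays; W_gate: (2D, 1); b_gate: scalar.
    """
    # Gate logit computed from each token's concatenated features
    concat = np.concatenate([sem_tokens, geo_tokens], axis=-1)   # (T, 2D)
    gate = 1.0 / (1.0 + np.exp(-(concat @ W_gate + b_gate)))     # (T, 1) in (0, 1)
    # Residual fusion: a near-zero gate leaves semantic reasoning untouched,
    # so geometric priors are added without suppressing the language stream
    return sem_tokens + gate * geo_tokens
```

The residual form matters: when the gate saturates near zero, the output reduces to the original semantic tokens, which matches the stated goal of fusing geometric priors "without suppressing semantic reasoning".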

Abstract

Multimodal Large Language Models (MLLMs) demonstrate exceptional semantic reasoning but struggle with 3D spatial perception when restricted to pure RGB inputs. Despite leveraging implicit geometric priors from 3D reconstruction models, image-based methods still exhibit a notable performance gap compared to methods using explicit 3D data. We argue that this gap does not arise from insufficient geometric priors, but from a misalignment in the training paradigm: text-dominated fine-tuning fails to activate geometric representations within MLLMs. Existing approaches typically resort to naive feature concatenation and optimize directly for downstream tasks without geometry-specific supervision, leading to suboptimal structural utilization. To address this limitation, we propose GAP-MLLM, a Geometry-Aligned Pre-training paradigm that explicitly activates structural perception before downstream adaptation. Specifically, we introduce a visual-prompted joint task that compels MLLMs to predict sparse pointmaps alongside semantic labels, thereby enforcing geometric awareness. Furthermore, we design a multi-level progressive fusion module with a token-level gating mechanism, enabling adaptive integration of geometric priors without suppressing semantic reasoning. Extensive experiments demonstrate that GAP-MLLM significantly enhances geometric feature fusion and consistently improves performance across 3D visual grounding, 3D dense captioning, and 3D video object detection tasks.
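The joint pre-training task pairs a geometric target (sparse pointmaps) with a semantic one (labels). A minimal sketch of what such a combined objective could look like, assuming an L1 pointmap term and a cross-entropy label term with a weighting factor `lam` (the paper's actual loss form and weighting are not given here):

```python
import numpy as np

def joint_pretraining_loss(pred_points, gt_points, sem_logits, gt_label, lam=1.0):
    """Combined geometric + semantic objective (assumed form, for illustration).

    pred_points, gt_points: (N, 3) sparse pointmap predictions and targets.
    sem_logits: (C,) class logits; gt_label: int ground-truth class index.
    """
    # Geometric term: mean L1 error on the predicted sparse pointmap
    geo = np.abs(pred_points - gt_points).mean()
    # Semantic term: cross-entropy over the label logits
    log_probs = sem_logits - np.log(np.exp(sem_logits).sum())
    sem = -log_probs[gt_label]
    # Supervising both jointly is what forces geometric awareness during
    # pre-training, rather than optimizing text targets alone
    return sem + lam * geo
```

The key design point is that geometry receives its own supervision signal instead of being implicitly carried by text-only fine-tuning, which is the misalignment the paper identifies.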