Cog3DMap: Multi-View Vision-Language Reasoning with 3D Cognitive Maps

arXiv cs.CV / 3/25/2026


Key Points

  • The paper argues that current multimodal LLM approaches struggle with precise spatial understanding because visual tokens are mainly semantic and do not provide explicit geometric grounding.
  • It introduces Cog3DMap, a framework that recurrently builds an explicit 3D cognitive map from multi-view images, where each token is grounded in 3D space and carries both semantic and geometric information.
  • Rather than asking the MLLM to implicitly reconstruct 3D structure from augmented cues, Cog3DMap lets the model reason directly over a spatially structured 3D map.
  • The method is reported to achieve state-of-the-art results on multiple spatial reasoning benchmarks.
  • The authors state that the code will be made publicly available, supporting reproducibility and downstream experimentation.
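To make the core idea concrete, here is a minimal, purely illustrative sketch of what a "3D cognitive map" with spatially grounded tokens could look like: a voxel grid whose cells each hold a semantic feature and an explicit 3D coordinate, updated recurrently from multiple views and then flattened into tokens. All names, shapes, and the averaging scheme are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch only (hypothetical, not Cog3DMap's real code):
# a 3D cognitive map as a voxel grid where each occupied cell stores a
# running-average semantic feature, and tokens expose both the 3D
# position and the feature, so a language model could reason over them.

FEAT_DIM = 4          # toy semantic feature size (assumed)
GRID = (4, 4, 4)      # toy voxel resolution (assumed)

def empty_map():
    """Per-voxel semantic features plus an observation count."""
    return {
        "feat": np.zeros(GRID + (FEAT_DIM,)),
        "count": np.zeros(GRID),
    }

def update_map(cog_map, points, feats):
    """Recurrent update: fuse one view's back-projected 3D points into
    the map via a running average of semantic features per voxel."""
    for p, f in zip(points, feats):
        idx = tuple(np.clip((p * np.array(GRID)).astype(int),
                            0, np.array(GRID) - 1))
        c = cog_map["count"][idx]
        cog_map["feat"][idx] = (cog_map["feat"][idx] * c + f) / (c + 1)
        cog_map["count"][idx] = c + 1
    return cog_map

def map_to_tokens(cog_map):
    """Flatten occupied voxels into tokens [x, y, z | semantic feature],
    so each token is explicitly grounded in 3D space."""
    tokens = []
    for idx in np.argwhere(cog_map["count"] > 0):
        coord = idx / np.array(GRID)       # normalized 3D position
        feat = cog_map["feat"][tuple(idx)]
        tokens.append(np.concatenate([coord, feat]))
    return np.stack(tokens)

# Two "views" observing points in a shared 3D scene, with toy features
# standing in for per-pixel embeddings lifted by a geometry model.
rng = np.random.default_rng(0)
m = empty_map()
for _ in range(2):
    pts = rng.random((16, 3))        # stand-in for back-projected pixels
    fts = rng.random((16, FEAT_DIM))
    m = update_map(m, pts, fts)

tokens = map_to_tokens(m)
print(tokens.shape)  # (num_occupied_voxels, 3 + FEAT_DIM)
```

The design point this sketch mirrors is the paper's contrast with cue-augmentation approaches: instead of hoping the model infers geometry from enriched 2D tokens, each token here already encodes where it lives in 3D.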

Abstract

Precise spatial understanding from multi-view images remains a fundamental challenge for Multimodal Large Language Models (MLLMs), as their visual representations are predominantly semantic and lack explicit geometric grounding. While existing approaches augment visual tokens with geometric cues from visual geometry models, the MLLM is still required to implicitly infer the underlying 3D structure of the scene from these augmented tokens, limiting its spatial reasoning capability. To address this issue, we introduce Cog3DMap, a framework that recurrently constructs an explicit 3D memory from multi-view images, where each token is grounded in 3D space and possesses both semantic and geometric information. By feeding these tokens into the MLLM, our framework enables direct reasoning over a spatially structured 3D map, achieving state-of-the-art performance on various spatial reasoning benchmarks. Code will be made publicly available.