GeoSense: Internalizing Geometric Necessity Perception for Multimodal Reasoning

arXiv cs.CV / 3/12/2026

Key Points

  • The paper GeoSense introduces an independent geometry input channel and alignment training to enable MLLMs to effectively use geometric features when 2D cues are insufficient.
  • It further endows the model with perceptual awareness through a spatial-aware supervised fine-tuning dataset that activates latent cues about the necessity of geometric information.
  • Experiments across multiple spatial reasoning benchmarks demonstrate significant spatial gains without compromising 2D visual reasoning capabilities.
  • The work aims to enable more robust, efficient, and self-aware multimodal intelligence.

Abstract

Advancing towards artificial superintelligence requires rich and intelligent perceptual capabilities. A critical frontier in this pursuit is overcoming the limited spatial understanding of Multimodal Large Language Models (MLLMs), where geometric information is essential. Existing methods often address this by rigidly injecting geometric signals into every input, ignoring whether they are actually needed and adding computational overhead. In contrast to this paradigm, our framework endows the model with an awareness of perceptual insufficiency, empowering it to autonomously engage geometric features in reasoning when 2D cues are deemed insufficient. To achieve this, we first introduce an independent geometry input channel to the model architecture and conduct alignment training, enabling the effective utilization of geometric features. Subsequently, to endow the model with perceptual awareness, we curate a dedicated spatial-aware supervised fine-tuning dataset. This serves to activate the model's latent internal cues, empowering it to autonomously determine the necessity of geometric information. Experiments across multiple spatial reasoning benchmarks validate this approach, demonstrating significant spatial gains without compromising 2D visual reasoning capabilities, offering a path toward more robust, efficient, and self-aware multimodal intelligence.
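The core idea, engaging the geometry channel only when 2D cues are judged insufficient, can be sketched as a simple conditional fusion step. This is an illustrative sketch, not the paper's implementation: the function name `fuse_features`, the `necessity_score`, and the threshold are all assumed for demonstration; in the actual model the necessity judgment would come from the fine-tuned MLLM itself, and the features would be token embeddings rather than plain lists.

```python
def fuse_features(visual_tokens, geo_tokens, necessity_score, threshold=0.5):
    """Hypothetical gated fusion: engage geometry only when needed.

    necessity_score is an assumed scalar in [0, 1] expressing how
    confident the model is that 2D cues alone suffice.
    """
    if necessity_score >= threshold:
        # 2D cues judged sufficient: skip the geometry channel,
        # avoiding the extra computation of always-on injection.
        return visual_tokens
    # 2D cues insufficient: append geometric tokens for reasoning.
    return visual_tokens + geo_tokens


# Usage: with a high score the geometry tokens are dropped;
# with a low score they are appended to the visual tokens.
out_2d_only = fuse_features(["v1", "v2"], ["g1"], necessity_score=0.9)
out_with_geo = fuse_features(["v1", "v2"], ["g1"], necessity_score=0.2)
```

The point of the design is that the gating is learned, not hard-coded: the spatial-aware fine-tuning data teaches the model when its 2D perception is insufficient, so the conditional branch above is driven by the model's own self-assessment rather than a fixed rule.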