Multiple Consistent 2D-3D Mappings for Robust Zero-Shot 3D Visual Grounding

arXiv cs.CV / 4/30/2026


Key Points

  • The paper introduces MCM-VG, a new framework for robust zero-shot 3D visual grounding that addresses issues caused by low-quality open-vocabulary 3D proposals.
  • MCM-VG improves reliability by enforcing multiple consistent 2D-3D mappings using three components: semantic alignment (LLM-driven query parsing and coarse-to-fine matching), instance rectification (VLM-guided 2D segmentations for reconstructing missing targets and accurate 3D geometry), and viewpoint distillation (clustering camera directions to reduce redundant multi-view reasoning).
  • The method formulates final target disambiguation as a multiple-choice reasoning task for vision-language models by pairing selected RGB frames with bird’s-eye-view maps as compact visual prompts.
  • Experiments on ScanRefer and Nr3D show state-of-the-art performance, achieving 62.0% Acc@0.25 and 53.6% Acc@0.5 on ScanRefer, with gains of 6.4% and 4.0% over prior baselines.
  • Overall, the work advances open-world embodied AI by enabling more precise and dependable zero-shot localization and reasoning in 3D environments.
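The viewpoint-distillation step mentioned above can be illustrated as a simple angular clustering of camera viewing directions, keeping one representative frame per cluster. This is a minimal sketch under our own assumptions (greedy clustering, a 30° threshold, and the function names below are illustrative, not details from the paper):

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def distill_viewpoints(directions, angle_thresh_deg=30.0):
    """Greedily cluster camera viewing directions by angle and
    return the indices of one representative frame per cluster."""
    cos_thresh = math.cos(math.radians(angle_thresh_deg))
    reps = []  # indices of cluster-representative frames
    for i, d in enumerate(directions):
        d = normalize(d)
        # Skip this frame if it is within the angular threshold of an
        # already-selected representative (i.e., a redundant viewpoint).
        if not any(
            sum(a * b for a, b in zip(d, normalize(directions[r]))) >= cos_thresh
            for r in reps
        ):
            reps.append(i)
    return reps

# Four camera directions: two nearly identical, two distinct.
views = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(distill_viewpoints(views))  # → [0, 2, 3]
```

The two near-parallel views collapse into one cluster, so only three frames survive, which is the kind of redundancy reduction the paper attributes to this module.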

Abstract

Zero-shot 3D Visual Grounding (3DVG) is a critical capability for open-world embodied AI. However, existing methods are fundamentally bottlenecked by the poor quality of open-vocabulary 3D proposals, suffering from inaccurate categories and imprecise geometries, as well as the spatial redundancy of exhaustive multi-view reasoning. To address these challenges, we propose MCM-VG, a novel framework that achieves robust zero-shot 3DVG by explicitly establishing Multiple Consistent 2D-3D Mappings. Instead of passively relying on noisy 3D segments, MCM-VG enforces 2D-3D consistency across three fundamental dimensions to achieve precise target localization and reliable reasoning. First, a Semantic Alignment module corrects category mismatches via LLM-driven query parsing and coarse-to-fine 2D-3D matching. Second, an Instance Rectification module leverages VLM-guided 2D segmentations to reconstruct missing targets, back-projecting these reliable visual priors to establish accurate 3D geometries. Finally, to eliminate spatial redundancy, a Viewpoint Distillation module clusters 3D camera directions to extract optimal frames. By pairing these optimal RGB frames with Bird's Eye View maps into concise visual prompt sets, we formulate the final target disambiguation as a multiple-choice reasoning task for Vision-Language Models. Extensive evaluations on ScanRefer and Nr3D benchmarks demonstrate that MCM-VG sets a new state-of-the-art for zero-shot 3D visual grounding. Remarkably, it achieves 62.0% and 53.6% in Acc@0.25 and Acc@0.5 on ScanRefer, outperforming previous baselines by substantial margins of 6.4% and 4.0%.
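The instance-rectification idea of lifting reliable 2D segmentations into 3D can be illustrated with a minimal pinhole-camera back-projection sketch. This assumes a per-pixel depth map and known intrinsics; the helper name and interfaces are ours, not the paper's implementation:

```python
def backproject_mask(mask, depth, fx, fy, cx, cy):
    """Lift the pixels of a 2D segmentation mask into 3D camera-frame
    points using the pinhole model: x = (u - cx) * z / fx, etc.

    mask  : 2D list of 0/1 values marking the segmented object
    depth : 2D list of per-pixel depths (same shape as mask)
    fx, fy, cx, cy : pinhole intrinsics (focal lengths, principal point)
    """
    points = []
    for v, row in enumerate(mask):
        for u, inside in enumerate(row):
            z = depth[v][u]
            if inside and z > 0:  # skip background and invalid depth
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points

# A single masked pixel at (u=0, v=0) with depth 2.0 and identity-like
# intrinsics lands on the optical axis at z = 2.
pts = backproject_mask([[1, 0], [0, 0]], [[2.0, 2.0], [2.0, 2.0]],
                       fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(pts)  # → [(0.0, 0.0, 2.0)]
```

Accumulating such points across the frames selected by viewpoint distillation would yield the "accurate 3D geometries" the abstract describes, without depending on the original noisy 3D proposals.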