SpatialAnt: Autonomous Zero-Shot Robot Navigation via Active Scene Reconstruction and Visual Anticipation

arXiv cs.RO / March 31, 2026


Key Points

  • SpatialAnt is proposed as a zero-shot vision-and-language navigation framework that targets real-world failure modes of existing methods that depend on high-quality human-crafted scene reconstructions.
  • The approach improves monocular-based self-reconstruction by adding a physical grounding strategy to recover absolute metric scale, reducing scale ambiguity in learned priors.
  • Instead of treating noisy self-reconstructed scenes as reliable spatial references, SpatialAnt uses visual anticipation to render future observations from noisy point clouds and perform counterfactual reasoning to reject paths that conflict with instructions.
  • Experiments in both simulated and real-world settings show substantial gains over prior zero-shot methods, reaching a 66% Success Rate (SR) on R2R-CE and 50.8% SR on RxR-CE.
  • A physical deployment on a Hello Robot validates practical effectiveness with a reported 52% Success Rate in challenging real-world environments.
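The anticipation step described above hinges on one geometric primitive: projecting an existing (noisy) point cloud into a candidate future camera pose to "render" what the agent would see there. The article gives no implementation details, so the following is a minimal, hypothetical sketch of that primitive with NumPy, using a plain pinhole z-buffer rasterization (function name and resolution are illustrative, not from the paper):

```python
import numpy as np

def render_anticipated_depth(points_world, cam_pose, K, hw=(48, 64)):
    """Project a (noisy) world-frame point cloud into a candidate future
    camera pose and rasterize a coarse depth image via a z-buffer.

    points_world : (N, 3) array of 3D points
    cam_pose     : (4, 4) world-to-camera homogeneous transform
    K            : (3, 3) pinhole intrinsics
    """
    h, w = hw
    # Transform points into the candidate camera's frame.
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (cam_pose @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]   # keep points in front of camera
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    z = pts_cam[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[valid], v[valid], z[valid]
    depth = np.full((h, w), np.inf)
    # Z-buffer: write far-to-near so nearer points overwrite farther ones.
    order = np.argsort(-z)
    for ui, vi, zi in zip(u[order], v[order], z[order]):
        depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0             # pixels with no point coverage
    return depth
```

An agent could render such an anticipated view for each candidate path and let an MLLM check the rendering against the instruction, pruning candidates that contradict it; that pruning logic is the counterfactual-reasoning part and is not shown here.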

Abstract

Vision-and-Language Navigation (VLN) has recently benefited from Multimodal Large Language Models (MLLMs), enabling zero-shot navigation. While recent exploration-based zero-shot methods have shown promising results by leveraging global scene priors, they rely on high-quality human-crafted scene reconstructions, which are impractical for real-world robot deployment. When encountering an unseen environment, a robot should build its own priors through pre-exploration. However, these self-built reconstructions are inevitably incomplete and noisy, which severely degrades methods that depend on high-quality scene reconstructions. To address these issues, we propose SpatialAnt, a zero-shot navigation framework designed to bridge the gap between imperfect self-reconstructions and robust execution. SpatialAnt introduces a physical grounding strategy to recover the absolute metric scale for monocular-based reconstructions. Furthermore, rather than treating the noisy self-reconstructed scenes as absolute spatial references, we propose a novel visual anticipation mechanism. This mechanism leverages the noisy point clouds to render future observations, enabling the agent to perform counterfactual reasoning and prune paths that contradict human instructions. Extensive experiments in both simulated and real-world environments demonstrate that SpatialAnt significantly outperforms existing zero-shot methods. We achieve a 66% Success Rate (SR) on R2R-CE and 50.8% SR on RxR-CE benchmarks. Physical deployment on a Hello Robot further confirms the efficiency and efficacy of our framework, achieving a 52% SR in challenging real-world settings.
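The abstract does not spell out what the "physical grounding strategy" for recovering absolute metric scale looks like. One common way to ground a scale-ambiguous monocular reconstruction is to exploit a known physical quantity such as the camera's mounting height above the floor; the sketch below illustrates that idea only as an assumed, hypothetical mechanism (the paper's actual strategy may differ), with NumPy and illustrative names:

```python
import numpy as np

def recover_metric_scale(depth_rel, K, true_cam_height, floor_rows=8):
    """Estimate a global scale factor for an up-to-scale monocular depth map
    by grounding against a known metric quantity: the camera's height above
    the floor (a hypothetical grounding signal, not necessarily the paper's).

    depth_rel       : (H, W) relative (scale-ambiguous) depth map
    K               : (3, 3) pinhole intrinsics
    true_cam_height : metric camera height above the floor, in meters
    floor_rows      : number of bottom image rows assumed to show the floor
    """
    h, w = depth_rel.shape
    fy, cy = K[1, 1], K[1, 2]
    # Pixel grid over the assumed floor region at the bottom of the image.
    vs, _ = np.mgrid[h - floor_rows:h, 0:w]
    z = depth_rel[h - floor_rows:, :]
    # Back-project: camera-frame y (pointing down) of each floor pixel.
    y_cam = (vs - cy) * z / fy
    # Robust estimate of camera height in the depth map's arbitrary units.
    est_height = np.median(y_cam)
    # Multiplying the reconstruction by this factor restores metric scale.
    return true_cam_height / est_height
```

Once such a factor is known, every depth value and point-cloud coordinate can be multiplied by it, turning the scale-ambiguous reconstruction into a metric one usable for navigation distances.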