DyGeoVLN: Infusing Dynamic Geometry Foundation Model into Vision-Language Navigation

arXiv cs.RO · March 24, 2026


Key Points

  • The paper introduces DyGeoVLN, a vision-language navigation framework designed to handle dynamic, real-world environments where prior methods assume static scenes and fail to generalize.
  • DyGeoVLN “infuses” a dynamic geometry foundation model into VLN via cross-branch feature fusion to support explicit 3D spatial representation and visual-semantic reasoning.
  • To improve long-horizon efficiency under motion and scene dynamics, it proposes a pose-free, adaptive-resolution token-pruning strategy that removes spatio-temporally redundant tokens and lowers inference cost.
  • Experiments reportedly achieve state-of-the-art results on multiple benchmarks and show strong robustness in real-world settings.
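The paper's summary describes "cross-branch feature fusion" between the dynamic geometry branch and the VLN branch but gives no implementation details. A common way to realize such fusion is cross-attention, where navigation-branch tokens query geometry-branch tokens; the sketch below illustrates that pattern in NumPy. The function name, shapes, and the residual-add design are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_branch_fusion(vln_tokens, geo_tokens):
    """Hypothetical cross-attention fusion: VLN-branch tokens attend to
    dynamic-geometry-branch tokens and absorb their 3D spatial context.

    vln_tokens: (N, D) queries from the navigation branch.
    geo_tokens: (M, D) keys/values from the geometry foundation model.
    Returns fused tokens of shape (N, D).
    """
    d_k = vln_tokens.shape[-1]
    attn = softmax(vln_tokens @ geo_tokens.T / np.sqrt(d_k))  # (N, M) attention weights
    return vln_tokens + attn @ geo_tokens  # residual add of geometry context
```

In a real system the two branches would use learned query/key/value projections and multiple heads; the single-matrix form above only shows where the geometry features enter the navigation stream.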

Abstract

Vision-Language Navigation (VLN) requires an agent to understand visual observations and language instructions to navigate in unseen environments. Most existing approaches rely on static scene assumptions and struggle to generalize to dynamic, real-world scenarios. To address this challenge, we propose DyGeoVLN, a dynamic geometry-aware VLN framework. Our method infuses a dynamic geometry foundation model into the VLN framework through cross-branch feature fusion to enable explicit 3D spatial representation and visual-semantic reasoning. To efficiently compress historical token information in long-horizon, dynamic navigation, we further introduce a novel pose-free, adaptive-resolution token-pruning strategy that removes spatio-temporally redundant tokens, reducing inference cost. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on multiple benchmarks and exhibits strong robustness in real-world environments.
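The abstract describes pruning spatio-temporally redundant historical tokens without relying on pose, but does not specify the criterion. One plausible pose-free heuristic is to drop a step's token when it is nearly identical (by cosine similarity) to the most recently kept token, so long static stretches collapse while scene changes are preserved. The function and the similarity threshold below are illustrative assumptions, not the paper's actual strategy.

```python
import numpy as np

def prune_redundant_tokens(history, sim_thresh=0.95):
    """Hypothetical pose-free pruning of a navigation history.

    history: (T, D) array of per-step feature tokens, ordered in time.
    Keeps a token only if its cosine similarity to the last kept token
    falls below sim_thresh (i.e. the observation has meaningfully changed).
    Returns (pruned_tokens, kept_indices).
    """
    kept = [0]  # always keep the first observation
    for t in range(1, len(history)):
        a, b = history[kept[-1]], history[t]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if cos < sim_thresh:
            kept.append(t)
    return history[kept], kept
```

For example, a history whose first two steps see the same scene keeps only one of those tokens, shortening the sequence the policy must attend over; an adaptive-resolution variant could additionally vary `sim_thresh` or token granularity with distance in time.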