AI Navigate

EmergeNav: Structured Embodied Inference for Zero-Shot Vision-and-Language Navigation in Continuous Environments

arXiv cs.CV / 3/19/2026

📰 News · Models & Research

Key Points

  • EmergeNav is a zero-shot framework for continuous vision-and-language navigation (VLN-CE) that uses structured embodied inference instead of relying on task-specific training or explicit maps.
  • The model introduces a Plan–Solve–Transition hierarchy for stage-structured execution, GIPE for goal-conditioned perceptual extraction, contrastive dual-memory reasoning for progress grounding, and Dual-FOV sensing for time-aligned local control and boundary verification.
  • It achieves strong zero-shot performance on VLN-CE, reporting 30.00 SR with Qwen3-VL-8B and 37.00 SR with Qwen3-VL-32B, using only open-source VLM backbones and no task-specific training.
  • The results suggest that explicit execution structure is a key ingredient for turning vision-language model priors into stable embodied navigation behavior, without relying on explicit maps, graph search, or waypoint predictors.
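The stage-structured execution described above can be sketched as a simple loop: plan the instruction into stages, solve each stage with local actions, and transition only after boundary verification. This is a minimal illustrative sketch, not the authors' implementation; the helpers `plan_stages`, `solve_step`, and `verify_transition` are hypothetical stand-ins for the VLM-backed components the paper describes.

```python
from dataclasses import dataclass, field

@dataclass
class NavState:
    stage_idx: int = 0                                # current stage in the plan
    trajectory: list = field(default_factory=list)    # actions emitted so far

def plan_stages(instruction: str) -> list[str]:
    # Plan: in EmergeNav this decomposition would come from the VLM;
    # here we split the instruction naively on commas.
    return [s.strip() for s in instruction.split(",") if s.strip()]

def solve_step(stage: str, observation: str) -> str:
    # Solve: placeholder local controller emitting one action per step.
    return f"move_toward({stage})"

def verify_transition(stage: str, observation: str) -> bool:
    # Transition: advance only when boundary verification succeeds
    # (toy check: the stage goal is mentioned in the current view).
    return stage in observation

def run_episode(instruction: str, observations: list[str]) -> list:
    stages = plan_stages(instruction)
    state = NavState()
    for obs in observations:
        if state.stage_idx >= len(stages):
            break  # all stages completed
        stage = stages[state.stage_idx]
        state.trajectory.append(solve_step(stage, obs))
        if verify_transition(stage, obs):
            state.stage_idx += 1
    return state.trajectory
```

The point of the structure is that stage transitions are gated by an explicit verification step rather than left to open-ended VLM reasoning, which is what the paper argues stabilizes long-horizon execution.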

Abstract

Zero-shot vision-and-language navigation in continuous environments (VLN-CE) remains challenging for modern vision-language models (VLMs). Although these models encode useful semantic priors, their open-ended reasoning does not directly translate into stable long-horizon embodied execution. We argue that the key bottleneck is not missing knowledge alone, but missing an execution structure for organizing instruction following, perceptual grounding, temporal progress, and stage verification. We propose EmergeNav, a zero-shot framework that formulates continuous VLN as structured embodied inference. EmergeNav combines a Plan–Solve–Transition hierarchy for stage-structured execution, GIPE for goal-conditioned perceptual extraction, contrastive dual-memory reasoning for progress grounding, and role-separated Dual-FOV sensing for time-aligned local control and boundary verification. On VLN-CE, EmergeNav achieves strong zero-shot performance using only open-source VLM backbones and no task-specific training, explicit maps, graph search, or waypoint predictors, reaching 30.00 SR with Qwen3-VL-8B and 37.00 SR with Qwen3-VL-32B. These results suggest that explicit execution structure is a key ingredient for turning VLM priors into stable embodied navigation behavior.
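The contrastive dual-memory reasoning mentioned in the abstract can be illustrated with a toy example: contrast the current view against a long-term memory of the trajectory so far, treating high similarity to past views as a sign of stalled progress. This is a hedged sketch under assumptions, not the paper's method; the token-overlap similarity, buffer sizes, and class name `DualMemory` are all illustrative choices.

```python
from collections import deque

def token_sim(a: str, b: str) -> float:
    # Jaccard similarity over word sets: a crude stand-in for the
    # VLM-based comparison the paper would use.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

class DualMemory:
    def __init__(self, short_len: int = 3):
        # Short-term memory: recent views, which would feed local
        # control prompts (retained but unused in this minimal sketch).
        self.short = deque(maxlen=short_len)
        # Long-term memory: every view along the trajectory.
        self.long = []

    def update(self, obs: str) -> float:
        # Progress score: contrast the new view against long-term memory.
        # High score = novel ground covered; low score = revisiting.
        sim = max((token_sim(obs, past) for past in self.long), default=0.0)
        self.long.append(obs)
        self.short.append(obs)
        return 1.0 - sim
```

A navigation agent could use a run of low scores as evidence that it is looping, triggering replanning or a stage re-check, which is the kind of progress grounding the abstract attributes to the dual-memory component.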