A Deployable Embodied Vision-Language Navigation System with Hierarchical Cognition and Context-Aware Exploration

arXiv cs.RO / 4/24/2026


Key Points

  • The paper addresses a core robotics challenge: enabling embodied vision-language navigation to perform strong reasoning while still meeting real-time constraints on computation, memory, energy, and hardware.
  • It introduces a deployable system that separates the pipeline into three asynchronous modules: real-time perception, memory integration for spatial-semantic aggregation, and a reasoning module for high-level decisions.
  • The method incrementally builds a hierarchical “cognitive memory graph,” which is decomposed into subgraphs so a vision-language model can reason effectively over accumulated scene information.
  • To improve both navigation efficiency and accuracy, the approach reformulates exploration as a context-aware Weighted Traveling Repairman Problem (WTRP) that reduces the weighted waiting time of viewpoints.
  • Experiments in simulation and on real-world robotic platforms show higher navigation success and efficiency than prior VLN methods while preserving real-time operation on resource-constrained devices.
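
To make the WTRP objective concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each candidate viewpoint gets a weight (e.g. a relevance score the reasoning module might assign), and the tour is chosen to minimize the sum of weight × arrival time. The brute-force solver below is only feasible for a handful of viewpoints; the point names (`weighted_latency`, `best_order`) and the toy data are illustrative assumptions.

```python
from itertools import permutations
import math

def weighted_latency(order, points, weights, start=(0.0, 0.0)):
    """Sum of w_i * arrival_time_i when visiting viewpoints in `order`.

    Arrival time is cumulative travel distance (unit speed assumed),
    which is the "weighted waiting time" a WTRP objective minimizes.
    """
    t, pos, total = 0.0, start, 0.0
    for i in order:
        t += math.dist(pos, points[i])
        pos = points[i]
        total += weights[i] * t
    return total

def best_order(points, weights, start=(0.0, 0.0)):
    """Exhaustive WTRP solve -- only practical for a few viewpoints."""
    idx = range(len(points))
    return min(permutations(idx),
               key=lambda o: weighted_latency(o, points, weights, start))

# Toy frontier viewpoints; weights stand in for context-aware scores
# (higher weight = more likely to contain the navigation goal).
pts = [(1.0, 0.0), (0.0, 3.0), (4.0, 4.0)]
w = [0.9, 0.5, 0.2]
order = best_order(pts, w)   # visits the high-weight, nearby viewpoint first
```

Note how the weighting changes the behavior versus a plain shortest tour: a high-weight viewpoint is visited early even when that lengthens the total path, because its waiting time is penalized most.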

Abstract

Bridging the gap between embodied intelligence and embedded deployment remains a key challenge in intelligent robotic systems, where perception, reasoning, and planning must operate under strict constraints on computation, memory, energy, and real-time execution. In vision-language navigation (VLN), existing approaches often face a fundamental trade-off between strong reasoning capabilities and efficient deployment on real-world platforms. In this paper, we present a deployable embodied VLN system that achieves both high efficiency and robust high-level reasoning on real-world robotic platforms. To achieve this, we decouple the system into three asynchronous modules: a real-time perception module for continuous environment sensing, a memory integration module for spatial-semantic aggregation, and a reasoning module for high-level decision making. We incrementally construct a cognitive memory graph to encode scene information, which is further decomposed into subgraphs to enable reasoning with a vision-language model (VLM). To further improve navigation efficiency and accuracy, we also leverage the cognitive memory graph to formulate the exploration problem as a context-aware Weighted Traveling Repairman Problem (WTRP), which minimizes the weighted waiting time of viewpoints. Extensive experiments in both simulation and on real-world robotic platforms demonstrate improved navigation success and efficiency over existing VLN approaches, while maintaining real-time performance on resource-constrained hardware.
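
The asynchronous decoupling described in the abstract can be sketched with a producer-consumer pattern: perception streams observations without waiting on reasoning, memory integration folds them into a graph, and the reasoning module reads consistent snapshots at its own (slower) cadence. This is a hypothetical illustration of the architecture, not the authors' code; the node/label scheme and function names are assumptions.

```python
import queue
import threading

obs_q = queue.Queue()          # perception -> memory integration channel
memory_graph = {}              # node id -> set of semantic labels
graph_lock = threading.Lock()  # guards snapshots against partial updates

def perception(n_frames=5):
    # Stand-in for continuous sensing: emit (node_id, label) observations.
    for i in range(n_frames):
        obs_q.put((i % 3, f"object_{i}"))
    obs_q.put(None)            # sentinel: observation stream ended

def memory_integration():
    # Aggregate observations into a spatial-semantic graph; perception
    # never blocks on this because the queue decouples the two rates.
    while True:
        obs = obs_q.get()
        if obs is None:
            break
        node, label = obs
        with graph_lock:
            memory_graph.setdefault(node, set()).add(label)

def reasoning_snapshot():
    # The reasoning module would hand a subgraph like this to a VLM,
    # rather than locking the live graph for the whole inference call.
    with graph_lock:
        return {n: sorted(labels) for n, labels in memory_graph.items()}

t_p = threading.Thread(target=perception)
t_m = threading.Thread(target=memory_integration)
t_p.start(); t_m.start()
t_p.join(); t_m.join()
snapshot = reasoning_snapshot()
```

The design point this illustrates is that only the cheap graph update sits behind the lock; the expensive VLM reasoning operates on an immutable snapshot, so sensing and mapping keep running in real time while a decision is being made.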