HTNav: A Hybrid Navigation Framework with Tiered Structure for Urban Aerial Vision-and-Language Navigation

arXiv cs.RO / 4/13/2026


Key Points

  • The paper introduces HTNav, a new hybrid vision-and-language navigation framework designed for urban aerial navigation in complex environments.
  • It combines imitation learning (IL) and reinforcement learning (RL) using a staged training strategy to keep the core navigation behavior stable while improving exploration.
  • HTNav uses a tiered decision-making mechanism to coordinate macro-level route planning with fine-grained action control.
  • It adds a map representation learning module to better capture spatial continuity when operating in open domains.
  • On the CityNav benchmark, the authors report state-of-the-art results across all scene levels and task difficulties, with improved navigation precision and robustness.
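The staged IL-to-RL strategy in the second point can be sketched as a loss-weight schedule: train purely on imitation first to stabilize the base policy, then ramp toward reinforcement learning to encourage exploration. The function name, epoch counts, and linear ramp below are illustrative assumptions, not details taken from the paper:

```python
def il_weight(epoch, il_epochs=5, blend_epochs=5):
    """Weight on the imitation (behaviour-cloning) loss at a given epoch.

    Hypothetical schedule: epochs [0, il_epochs) use pure imitation
    (weight 1.0) to keep the base navigation policy stable; over the
    next blend_epochs the weight ramps linearly to 0.0, handing
    control to the RL objective to improve exploration.
    """
    if epoch < il_epochs:
        return 1.0
    ramp = (epoch - il_epochs) / blend_epochs
    return max(0.0, 1.0 - ramp)


def total_loss(epoch, loss_il, loss_rl):
    """Blend the two objectives according to the staged schedule."""
    w = il_weight(epoch)
    return w * loss_il + (1.0 - w) * loss_rl
```

A real implementation would apply this weight inside the training loop; the point here is only the staging, which matches the summary's claim of keeping the core behavior stable before opening up exploration.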

Abstract

Inspired by the general Vision-and-Language Navigation (VLN) task, aerial VLN has attracted widespread attention owing to its significant practical value in applications such as logistics delivery and urban inspection. However, existing methods face several challenges in complex urban environments, including insufficient generalization to unseen scenes, suboptimal performance in long-range path planning, and inadequate understanding of spatial continuity. To address these challenges, we propose HTNav, a collaborative navigation framework that integrates Imitation Learning (IL) and Reinforcement Learning (RL) in a hybrid IL-RL scheme. The framework adopts a staged training mechanism to ensure the stability of the basic navigation strategy while enhancing its environmental exploration capability. A tiered decision-making mechanism enables collaborative interaction between macro-level path planning and fine-grained action control. Furthermore, a map representation learning module is introduced to deepen the framework's understanding of spatial continuity in open domains. On the CityNav benchmark, our method achieves state-of-the-art performance across all scene levels and task difficulties. Experimental results demonstrate that this framework significantly improves navigation precision and robustness in complex urban environments.
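The tiered decision-making described in the abstract can be illustrated with a toy two-level controller on a 2D grid: a macro tier proposes coarse waypoints along a route, and a micro tier issues unit actions toward the active waypoint. `macro_plan`, `micro_step`, and the straight-line route are hypothetical stand-ins for the paper's planner and action policy, shown only to make the macro/micro split concrete:

```python
def macro_plan(start, goal, stride=4):
    """Macro tier: coarse waypoints sampled along the straight route
    from start to goal (illustrative; a real planner would use the map)."""
    sx, sy = start
    gx, gy = goal
    steps = max(abs(gx - sx), abs(gy - sy))
    n = max(1, steps // stride)
    return [(sx + (gx - sx) * i // n, sy + (gy - sy) * i // n)
            for i in range(1, n + 1)]


def micro_step(pos, waypoint):
    """Micro tier: one axis-aligned unit action toward the active waypoint."""
    x, y = pos
    wx, wy = waypoint
    if x != wx:
        return (x + (1 if wx > x else -1), y)
    if y != wy:
        return (x, y + (1 if wy > y else -1))
    return pos


def navigate(start, goal):
    """Coordinate the two tiers: follow each macro waypoint with micro actions."""
    pos = start
    for wp in macro_plan(start, goal):
        while pos != wp:
            pos = micro_step(pos, wp)
    return pos
```

The design choice the abstract points at is exactly this separation of concerns: long-range route structure lives in the macro tier, while the micro tier only ever reasons about the next local move.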