Vision-and-Language Navigation for UAVs: Progress, Challenges, and a Research Roadmap

arXiv cs.RO / 4/16/2026


Key Points

  • The paper surveys Vision-and-Language Navigation for UAVs (UAV-VLN), framing it as a core embodied AI problem: interpreting high-level language instructions and completing long-horizon tasks in complex 3D environments.
  • It proposes a methodological taxonomy covering the evolution from early modular and deep-learning pipelines to modern agentic approaches using large foundation models, including VLMs, VLA models, and generative world-model integrations.
  • It reviews the research ecosystem—especially simulators, datasets, and evaluation metrics—aimed at standardizing progress and comparisons across studies.
  • It identifies key blockers for real-world deployment, including the simulation-to-reality gap, perception robustness in dynamic outdoor conditions, handling linguistic ambiguity during reasoning, and running large models efficiently on constrained onboard hardware.
  • It concludes with a research roadmap emphasizing future directions such as multi-agent swarm coordination and air–ground collaborative robotics.
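To make the "evaluation metrics" item above concrete, the sketch below computes Success weighted by Path Length (SPL), a metric widely used across VLN benchmarks. Note this is an illustrative example of the kind of metric the survey reviews, not a formula taken from the paper itself; the function name and episode tuple layout are this sketch's own conventions.

```python
def spl(episodes):
    """Success weighted by Path Length over a batch of navigation episodes.

    episodes: list of (success, shortest_path_len, agent_path_len) tuples,
    where success is a bool and the path lengths are in the same units.
    Each successful episode contributes shortest / max(taken, shortest),
    rewarding agents that succeed via near-optimal trajectories.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)


# Example batch: one efficient success, one failure, one optimal success.
batch = [
    (True, 10.0, 12.0),   # succeeded, but flew a slightly longer path
    (False, 8.0, 20.0),   # failed: contributes zero regardless of path
    (True, 5.0, 5.0),     # succeeded along the shortest possible path
]
print(f"SPL = {spl(batch):.3f}")
```

A plain success rate would score this batch 2/3; SPL additionally penalizes the detour in the first episode, which is why aerial VLN benchmarks often report both.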

Abstract

Vision-and-Language Navigation for Unmanned Aerial Vehicles (UAV-VLN) represents a pivotal challenge in embodied artificial intelligence, focused on enabling UAVs to interpret high-level human commands and execute long-horizon tasks in complex 3D environments. This paper provides a comprehensive and structured survey of the field, from its formal task definition to the current state of the art. We establish a methodological taxonomy that charts the technological evolution from early modular and deep learning approaches to contemporary agentic systems driven by large foundation models, including Vision-Language Models (VLMs), Vision-Language-Action (VLA) models, and the emerging integration of generative world models with VLA architectures for physically grounded reasoning. The survey systematically reviews the ecosystem of essential resources (simulators, datasets, and evaluation metrics) that facilitates standardized research. Furthermore, we conduct a critical analysis of the primary challenges impeding real-world deployment: the simulation-to-reality gap, robust perception in dynamic outdoor settings, reasoning with linguistic ambiguity, and the efficient deployment of large models on resource-constrained hardware. By synthesizing current benchmarks and limitations, this survey concludes by proposing a forward-looking research roadmap to guide future inquiry into key frontiers such as multi-agent swarm coordination and air-ground collaborative robotics.