Dream to Fly: Model-Based Reinforcement Learning for Vision-Based Drone Flight

arXiv cs.RO / 4/13/2026


Key Points

  • The paper proposes a model-based reinforcement learning (MBRL) approach for vision-only autonomous drone racing, mapping single-camera pixels directly to control commands to enable agile gate navigation.
  • By leveraging DreamerV3, the method aims to reduce sample inefficiency seen in model-free RL methods such as PPO and SAC, and to avoid reliance on heavy imitation-learning bootstrapping or handcrafted reward shaping.
  • Experiments demonstrate that a perception-aware behavior naturally emerges, where the drone actively steers its camera toward texture-rich gate regions without explicit reward terms for viewing direction.
  • The approach is validated in both simulation and real-world hardware-in-the-loop flight with rendered image observations, and is reported to run on real quadrotors at speeds up to about 9 m/s.
  • Overall, the work advances pixel-based autonomous flight and argues that MBRL is a promising route for improving real-world robotic learning performance in constrained, perception-driven tasks.

Abstract

Autonomous drone racing has emerged as a challenging robotic benchmark for testing the limits of learning, perception, planning, and control. Expert human pilots can fly a drone through a race track by mapping pixels from a single camera directly to control commands. Recent works in autonomous drone racing that attempt direct pixel-to-command control policies have relied either on intermediate representations that simplify the observation space or on extensive bootstrapping with Imitation Learning (IL). This paper leverages DreamerV3 to train visuomotor policies capable of agile flight through a racetrack using only pixels as observations. In contrast to model-free methods such as PPO or SAC, which are sample-inefficient and struggle in this setting, our approach acquires drone racing skills from pixels. Notably, a perception-aware behaviour of actively steering the camera toward texture-rich gate regions emerges without the need for handcrafted reward terms for the viewing direction. Our experiments show, in both simulation and real-world flight using a hardware-in-the-loop setup with rendered image observations, how the proposed approach can be deployed on real quadrotors at speeds of up to 9 m/s. These results advance the state of pixel-based autonomous flight and demonstrate that model-based reinforcement learning (MBRL) offers a promising path for real-world robotics research.
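The core idea behind Dreamer-style MBRL, as described above, is to learn a world model from real experience and then improve the policy on rollouts imagined inside that model, which is where the sample-efficiency gain over model-free methods like PPO or SAC comes from. The toy sketch below illustrates that loop on a hypothetical 1-D problem (the drone's lateral offset from a gate); the dynamics, variable names, and linear model are illustrative assumptions for this summary, not the paper's actual implementation, which learns a latent world model from pixels.

```python
import numpy as np

# Toy sketch of the Dreamer-style model-based RL loop:
#   1) collect real transitions,
#   2) fit a dynamics ("world") model to them,
#   3) improve the policy on imagined rollouts inside the model.
# The 1-D dynamics below (lateral offset from a gate) are an
# illustrative assumption, not the paper's actual system.

rng = np.random.default_rng(0)

def real_env_step(s, a):
    """True dynamics, unknown to the agent."""
    return 0.9 * s + 0.5 * a

# 1) Collect a small batch of real transitions with random actions.
states = rng.uniform(-1, 1, size=100)
actions = rng.uniform(-1, 1, size=100)
next_states = real_env_step(states, actions)

# 2) Fit a linear world model s' ~ w_s * s + w_a * a by least squares.
X = np.stack([states, actions], axis=1)
w_s, w_a = np.linalg.lstsq(X, next_states, rcond=None)[0]

# 3) Improve the policy purely in imagination: choose the feedback
#    gain k (action a = -k * s) that best recenters the drone,
#    evaluated with the learned model instead of real flight.
def imagined_return(k, horizon=10):
    s, cost = 1.0, 0.0
    for _ in range(horizon):
        a = -k * s
        s = w_s * s + w_a * a  # imagined step in the world model
        cost += s ** 2         # penalty for being off-center
    return -cost

gains = np.linspace(0.0, 3.0, 61)
best_k = gains[np.argmax([imagined_return(k) for k in gains])]
print(round(best_k, 2))  # → 1.8, cancelling the 0.9 drift via 0.5-gain actions
```

Only step 1 touches the real environment; steps 2 and 3 reuse those 100 transitions indefinitely, which is the sample-efficiency argument the paper makes for MBRL over model-free baselines.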