Learning Vision-Based Omnidirectional Navigation: A Teacher-Student Approach Using Monocular Depth Estimation

arXiv cs.RO / 4/30/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper addresses the limitation of 2D LiDAR for obstacle avoidance by proposing a vision-based omnidirectional navigation approach that can perceive obstacles above or below the scan plane.
  • It uses a teacher-student framework: a teacher policy trained with PPO in NVIDIA Isaac Lab leverages privileged 2D LiDAR data, and a student policy is distilled from it to operate on monocular depth maps alone (a training sketch follows the abstract below).
  • The student's depth maps are predicted from four RGB cameras by a fine-tuned Depth Anything V2 model, eliminating the need for LiDAR sensors at inference time.
  • The system runs fully onboard an NVIDIA Jetson AGX Orin mounted on a DJI RoboMaster, with an end-to-end pipeline spanning depth estimation, policy execution, and motor control (see the inference sketch after this list).
  • Experiments show higher success rates in simulation (82–96.5% vs 50–89% for the 2D LiDAR teacher) and improved real-world performance, especially for challenging 3D obstacle geometries outside the LiDAR scan plane.
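
To make the onboard inference loop concrete, here is a minimal sketch of one control tick: four RGB frames go through a Depth Anything V2 estimator, and the stacked depth maps feed a student policy that emits a velocity command. The public Depth-Anything-V2-Small checkpoint on Hugging Face stands in for the paper's fine-tuned model, and StudentPolicy, the input resolution, and the goal encoding are illustrative assumptions rather than the authors' architecture.

```python
# Sketch of one control tick on the robot, under the assumptions above.
import torch
import torch.nn as nn
from PIL import Image
from transformers import pipeline

# Public checkpoint as a stand-in for the paper's fine-tuned MDE model.
mde = pipeline("depth-estimation",
               model="depth-anything/Depth-Anything-V2-Small-hf")

class StudentPolicy(nn.Module):
    """Hypothetical student head: 4 depth maps + goal -> (vx, vy, wz)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(), nn.Flatten())
        self.head = nn.Sequential(nn.LazyLinear(128), nn.ReLU(),
                                  nn.Linear(128, 3))

    def forward(self, depth, goal):
        return self.head(torch.cat([self.encoder(depth), goal], dim=-1))

def step(policy, frames, goal):
    """Four RGB frames -> per-camera depth maps -> one velocity command."""
    with torch.no_grad():
        depths = torch.stack([
            nn.functional.interpolate(
                mde(f)["predicted_depth"].squeeze()[None, None],
                size=(64, 64), mode="bilinear")[0, 0]
            for f in frames])                       # (4, 64, 64)
        return policy(depths[None], goal[None])[0]  # (vx, vy, wz)

policy = StudentPolicy().eval()
frames = [Image.new("RGB", (640, 480)) for _ in range(4)]  # camera stand-ins
cmd = step(policy, frames, torch.tensor([2.0, 0.0]))       # assumed 2D goal
```

On the real robot this loop would close with the RoboMaster's velocity interface and Jetson-side camera drivers, which are omitted here.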

Abstract

Reliable obstacle avoidance in industrial settings demands 3D scene understanding, but widely used 2D LiDAR sensors perceive only a single horizontal slice of the environment, missing critical obstacles above or below the scan plane. We present a teacher-student framework for vision-based mobile robot navigation that eliminates the need for LiDAR sensors. A teacher policy trained via Proximal Policy Optimization (PPO) in NVIDIA Isaac Lab leverages privileged 2D LiDAR observations that account for the full robot footprint to learn robust navigation. The learned behavior is distilled into a student policy that relies solely on monocular depth maps predicted from four RGB cameras by a fine-tuned Depth Anything V2 model. The complete inference pipeline, comprising monocular depth estimation (MDE), policy execution, and motor control, runs entirely onboard an NVIDIA Jetson AGX Orin mounted on a DJI RoboMaster platform, requiring no external computation. In simulation, the student achieves success rates of 82–96.5%, consistently outperforming the standard 2D LiDAR teacher (50–89%). In real-world experiments, the MDE-based student outperforms the 2D LiDAR teacher when navigating around obstacles with complex 3D geometries, such as overhanging structures and low-profile objects, that fall outside the single scan plane of a 2D LiDAR.
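
The distillation step the abstract describes can be sketched as simple behavior cloning: a frozen teacher consuming privileged LiDAR scans labels actions, and the vision student is regressed onto them. The MSE objective, network shapes, and observation layout below are assumptions for illustration; the paper's exact distillation procedure is not specified in the abstract.

```python
# Behavior-cloning distillation sketch under the assumptions above.
import torch
import torch.nn as nn

# Frozen stand-in for the PPO teacher: 360-beam scan + 2D goal -> action.
teacher = nn.Sequential(nn.Linear(362, 128), nn.ReLU(), nn.Linear(128, 3))
teacher.requires_grad_(False)  # trained beforehand, frozen for distillation

class Student(nn.Module):
    """Depth-only student: 4 depth maps + 2D goal -> (vx, vy, wz)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4 * 64 * 64 + 2, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, depth, goal):
        return self.net(torch.cat([depth.flatten(1), goal], dim=-1))

student = Student()
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_batch(lidar, depth, goal):
    """One supervised step: the student imitates the privileged teacher."""
    with torch.no_grad():
        target = teacher(torch.cat([lidar, goal], dim=-1))  # action labels
    loss = nn.functional.mse_loss(student(depth, goal), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Synthetic batch standing in for simulator rollouts.
lidar = torch.rand(32, 360)        # privileged scan, available only in sim
depth = torch.rand(32, 4, 64, 64)  # four predicted depth maps
goal = torch.rand(32, 2)
print(distill_batch(lidar, depth, goal))
```

Freezing the teacher and supervising the student on its actions is the standard privileged-distillation recipe; whether the authors use pure behavior cloning, DAgger-style rollouts, or auxiliary losses is not stated in the abstract.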