Real-Time Monocular Scene Analysis for UAV in Outdoor Environments

arXiv cs.CV / 3/17/2026

Key Points

  • Co-SemDepth is a real-time monocular depth estimation and semantic mapping architecture for UAVs in low-altitude outdoor environments, leveraging a new TopAir synthetic dataset to address limited annotated data.
  • The study finds Co-SemDepth excels in depth estimation while TaskPrompter offers strong semantic segmentation, indicating complementary strengths under synthetic-to-real evaluation.
  • It investigates synthetic-to-real domain adaptation using style-transfer techniques, concluding that diffusion-based style transfer narrows the domain gap for aerial imagery more effectively than Cycle-GANs.
  • The work extends to marine-domain experiments with MidSea data, reporting good generalization on real SMD data and highlighting remaining challenges on MIT data.

Abstract

In this thesis, we leverage monocular cameras on aerial robots to predict depth and semantic maps in low-altitude unstructured environments. We propose a joint deep-learning architecture, named Co-SemDepth, that performs the two tasks accurately and rapidly, and we validate its effectiveness on a variety of datasets. Training neural networks requires an abundance of annotated data, and in the UAV field the availability of such data is limited. To help fill this gap, we introduce a new synthetic dataset, TopAir, which contains images captured with a nadir view in outdoor environments at different altitudes. While using synthetic data for training is convenient, it raises issues when shifting to the real domain for testing. We conduct an extensive analytical study to assess the effect of several factors on synthetic-to-real generalization, comparing the Co-SemDepth and TaskPrompter models. The results reveal superior generalization performance for Co-SemDepth in depth estimation and for TaskPrompter in semantic segmentation. Our analysis also allows us to determine which training datasets lead to better generalization. Moreover, to help attenuate the gap between the synthetic and real domains, we explore image style-transfer techniques on aerial images to convert from the synthetic to the realistic style, employing Cycle-GAN and diffusion models. The results reveal that diffusion models perform better at synthetic-to-real style transfer. Finally, we focus on the marine domain and address its challenges. Co-SemDepth is trained on a collected synthetic marine dataset, called MidSea, and tested on both synthetic and real data. The results show good generalization when Co-SemDepth is tested on real data from the SMD dataset, while further enhancement is needed on the MIT dataset.
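The core idea of a joint architecture like Co-SemDepth is that one shared encoder feeds two task-specific decoder heads, so depth and semantic maps are predicted in a single forward pass. The toy PyTorch sketch below illustrates that shared-encoder/two-head pattern only; it is not the actual Co-SemDepth architecture (whose layer details are not given in the abstract), and all layer sizes and the class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointDepthSeg(nn.Module):
    """Toy shared-encoder, two-head network: one head regresses a
    per-pixel depth map, the other predicts per-pixel class logits.
    Illustrative sketch only, not the published Co-SemDepth model."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Shared encoder: downsamples 64x64 -> 16x16 feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: upsamples back to input resolution; Softplus
        # keeps predicted depths non-negative.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1), nn.Softplus(),
        )
        # Semantic head: same upsampling path, ending in class logits.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)               # one pass through the shared backbone
        return self.depth_head(feats), self.seg_head(feats)

# One forward pass yields both outputs from a single monocular image.
model = JointDepthSeg(num_classes=6)
image = torch.randn(1, 3, 64, 64)             # dummy RGB frame
depth, seg_logits = model(image)              # (1,1,64,64) and (1,6,64,64)
```

Sharing the encoder is what makes joint prediction fast enough for real-time use on a UAV: the expensive feature extraction is amortized across both tasks, and only the lightweight heads are task-specific.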