LIE: LiDAR-only HD Map Construction with Intensity Enhancement via Online Knowledge Distillation

arXiv cs.CV / 5/5/2026


Key Points

  • The paper presents LIE, a LiDAR-only method for constructing HD semantic maps for autonomous driving that addresses LiDAR's lack of dense semantic cues.
  • It uses an online knowledge distillation framework where a teacher branch fuses student LiDAR features with corresponding 2D intensity map tiles to provide dense supervision for map-element segmentation.
  • Experiments on nuScenes show LIE outperforms single-modality baselines and achieves an 8.2% higher mIoU than the best camera-based state-of-the-art model.
  • The method is reported to be robust at long ranges and in challenging weather and lighting, and it adapts to Argoverse2 with only 10% fine-tuning while beating camera-based models trained on the full dataset.
  • The authors state that the source code will be made available via the provided project link.

Abstract

Online High-Definition (HD) map construction is a key component of autonomous driving. Recent methods rely on multi-view camera images for cost-effective HD map segmentation, but cameras lack the depth information needed for accurate scene geometry. In contrast, LiDAR provides precise 3D measurements but lacks dense semantic cues. In this work, we propose LIE, a LiDAR-only semantic map construction method that employs Knowledge Distillation (KD) to compensate for the lack of dense semantic and texture cues. Specifically, the teacher branch fuses the student's LiDAR features with the corresponding 2D intensity map tile to provide dense supervision for segmenting map elements via an online distillation scheme. Experimental results show that our method outperforms all single-modality approaches, achieving an 8.2% higher mIoU than the state-of-the-art camera-based model on nuScenes. LIE is robust over long ranges and under challenging weather and lighting, and efficiently adapts to Argoverse2 with only 10% fine-tuning, surpassing camera-based models trained on the full dataset. The source code will be available at https://iv.ee.hm.edu/lie/.
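To make the online distillation scheme concrete, the following is a minimal toy sketch of how such an objective could be structured: a student branch segments map elements from LiDAR BEV features alone, a teacher branch sees those same features fused with intensity-map-tile features, and both are trained jointly while the teacher's soft predictions supervise the student. All names, shapes, and loss weights here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # per-cell segmentation loss against hard labels
    p = softmax(logits).reshape(-1, logits.shape[-1])
    return -np.log(p[np.arange(labels.size), labels.ravel()] + 1e-9).mean()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # soft-label distillation: KL between temperature-scaled distributions
    ps = softmax(student_logits / T)
    pt = softmax(teacher_logits / T)
    return (pt * (np.log(pt + 1e-9) - np.log(ps + 1e-9))).sum(-1).mean() * T * T

rng = np.random.default_rng(0)
H, W, C = 4, 4, 3                            # tiny BEV grid, 3 map classes (toy sizes)
lidar_feat = rng.normal(size=(H, W, 8))      # student's LiDAR BEV features
intensity_feat = rng.normal(size=(H, W, 8))  # features from the 2D intensity map tile
labels = rng.integers(0, C, size=(H, W))     # ground-truth map-element labels

W_student = rng.normal(size=(8, C)) * 0.1    # student segmentation head (LiDAR only)
W_teacher = rng.normal(size=(16, C)) * 0.1   # teacher head operates on the fused input

student_logits = lidar_feat @ W_student
fused = np.concatenate([lidar_feat, intensity_feat], axis=-1)  # teacher fuses both
teacher_logits = fused @ W_teacher

# Online KD: teacher and student are optimized together in one pass;
# the teacher additionally provides dense soft supervision to the student.
loss = (cross_entropy(student_logits, labels)
        + cross_entropy(teacher_logits, labels)
        + kd_loss(student_logits, teacher_logits))
print(float(loss))
```

The "online" aspect is that the teacher is not a frozen pretrained network: it consumes the student's current features each step, so both branches improve together and the dense intensity-derived supervision tracks the student's training state.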