PanDA: Unsupervised Domain Adaptation for Multimodal 3D Panoptic Segmentation in Autonomous Driving

arXiv cs.CV / 4/22/2026


Key Points

  • The paper introduces PanDA, described as the first unsupervised domain adaptation (UDA) framework tailored to multimodal 3D panoptic segmentation for autonomous driving.
  • It addresses two key weaknesses of prior approaches: dependence on strong LiDAR–RGB cross-modal complementarity, which breaks down under domain shifts, and pseudo-labeling that keeps only high-confidence regions, producing fragmented masks that harm panoptic coverage.
  • PanDA improves robustness to single-sensor degradation via an asymmetric multimodal augmentation that selectively drops regions from one modality to simulate real-world domain shifts (see the sketch after this list).
  • It also enhances pseudo-label completeness and trustworthiness with a dual-expert refinement module that extracts domain-invariant priors from both 2D and 3D modalities.
  • Experiments across shifts in time, weather, location, and sensor conditions show that PanDA substantially outperforms existing UDA baselines for 3D semantic segmentation.
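
A minimal sketch of what such an asymmetric augmentation could look like, assuming PyTorch tensors for the camera image and LiDAR points. The function name, the patch-based image dropout, and the azimuth-sector point dropout are illustrative assumptions, not the paper's exact recipe:

```python
import math
import torch


def asymmetric_region_drop(image, points, drop_prob=0.5,
                           num_patches=4, patch_frac=0.2):
    """Degrade at most one modality per sample to mimic single-sensor failure.

    image:  (C, H, W) RGB tensor.
    points: (N, 4) LiDAR points as (x, y, z, intensity).
    """
    if torch.rand(1).item() > drop_prob:
        return image, points  # leave the sample clean

    if torch.rand(1).item() < 0.5:
        # Camera degradation: zero out random rectangular patches
        # (a stand-in for glare, occlusion, or low light).
        _, h, w = image.shape
        ph, pw = int(h * patch_frac), int(w * patch_frac)
        image = image.clone()
        for _ in range(num_patches):
            y = torch.randint(0, h - ph + 1, (1,)).item()
            x = torch.randint(0, w - pw + 1, (1,)).item()
            image[:, y:y + ph, x:x + pw] = 0.0
    else:
        # LiDAR degradation: drop every point inside a random azimuth sector
        # (a stand-in for partial beam loss or weather-induced dropout).
        azimuth = torch.atan2(points[:, 1], points[:, 0])
        start = (torch.rand(1).item() * 2 - 1) * math.pi
        width = patch_frac * 2 * math.pi
        keep = ~((azimuth >= start) & (azimuth < start + width))
        points = points[keep]
    return image, points
```

The key property is asymmetry: only one modality is corrupted at a time, so the network is pushed to produce useful predictions even when cross-modal complementarity is unavailable.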

Abstract

This paper presents the first study on Unsupervised Domain Adaptation (UDA) for multimodal 3D panoptic segmentation (mm-3DPS), aiming to improve generalization under domain shifts commonly encountered in real-world autonomous driving. A straightforward solution is to employ a pseudo-labeling strategy, which is widely used in UDA to generate supervision for unlabeled target data, combined with an mm-3DPS backbone. However, existing supervised mm-3DPS methods rely heavily on strong cross-modal complementarity between LiDAR and RGB inputs, making them fragile under domain shifts where one modality degrades (e.g., poor lighting or adverse weather). Moreover, conventional pseudo-labeling typically retains only high-confidence regions, leading to fragmented masks and incomplete object supervision, issues that are particularly detrimental to panoptic segmentation. To address these challenges, we propose PanDA, the first UDA framework specifically designed for multimodal 3D panoptic segmentation. To improve robustness against single-sensor degradation, we introduce an asymmetric multimodal augmentation that selectively drops regions to simulate domain shifts and encourage robust representation learning. To enhance pseudo-label completeness and reliability, we further develop a dual-expert pseudo-label refinement module that extracts domain-invariant priors from both 2D and 3D modalities. Extensive experiments across diverse domain shifts, spanning time, weather, location, and sensor variations, show that PanDA significantly surpasses state-of-the-art UDA baselines for 3D semantic segmentation.
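
As a companion illustration, here is a minimal sketch of how a dual-expert refinement step could combine per-point class probabilities from a 2D (image) expert, projected onto the LiDAR points, with those of a 3D (point cloud) expert. The thresholds and the agreement rule below are illustrative assumptions, not the paper's actual module:

```python
import torch


def refine_pseudo_labels(probs_2d, probs_3d, tau_high=0.9, tau_agree=0.5):
    """Return per-point pseudo-labels, with -1 marking points left unlabeled.

    probs_2d, probs_3d: (N, num_classes) softmax outputs for the same N points.
    """
    conf_2d, label_2d = probs_2d.max(dim=1)
    conf_3d, label_3d = probs_3d.max(dim=1)

    # Naive pseudo-labeling: keep only high-confidence 3D predictions.
    # Under domain shift this leaves fragmented, incomplete masks.
    labels = torch.full_like(label_3d, -1)
    confident = conf_3d >= tau_high
    labels[confident] = label_3d[confident]

    # Refinement: where the naive mask is empty, accept points on which both
    # experts agree at moderate confidence, recovering object interiors that
    # either expert alone labels too timidly.
    agree = (label_2d == label_3d) & (conf_2d >= tau_agree) & (conf_3d >= tau_agree)
    fill = (labels == -1) & agree
    labels[fill] = label_3d[fill]
    return labels
```

Cross-expert agreement acts as a cheap domain-invariant prior: two models with different failure modes rarely make the same mistake on the same point, so their consensus can be trusted at a lower confidence bar than either model alone.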
