Feasibility of Indoor Frame-Wise Lidar Semantic Segmentation via Distillation from Visual Foundation Model

arXiv cs.CV / 4/22/2026


Key Points

  • The paper addresses the high cost of frame-wise ground truth for training lidar semantic segmentation models in indoor environments by leveraging Visual Foundation Models (VFMs).
  • It proposes a frame-wise 2D-to-3D distillation pipeline that couples each lidar scan with a camera image processed by a VFM to generate pseudo supervision for lidar segmentation.
  • The authors evaluate feasibility using indoor SLAM datasets with pseudo-labels for downstream assessment, and also validate with a small manually annotated lidar dataset because no comparable indoor lidar semantic datasets exist.
  • Experimental results indicate the distilled lidar model can reach up to 56% mIoU with pseudo-label evaluation and about 36% mIoU using real manual labels, supporting the feasibility of cross-modal distillation without manual annotation at scale.

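The pipeline's core step, coupling each lidar scan with a VFM-segmented camera image, amounts to projecting lidar points into the image plane and reading off per-pixel semantic labels as per-point pseudo-labels. A minimal sketch of that lifting step is below; the function name, a pinhole camera with no lens distortion, and the nearest-pixel sampling are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lift_vfm_labels_to_lidar(points, T_cam_lidar, K, vfm_seg, ignore_label=-1):
    """Assign per-point pseudo-labels by projecting lidar points into a
    VFM-segmented camera image (pinhole model, no distortion assumed).

    points:      (N, 3) lidar points in the lidar frame
    T_cam_lidar: (4, 4) extrinsic transform, lidar frame -> camera frame
    K:           (3, 3) camera intrinsic matrix
    vfm_seg:     (H, W) integer semantic map produced by the VFM
    Returns an (N,) label array; points behind the camera or outside
    the image get `ignore_label`.
    """
    H, W = vfm_seg.shape
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    z = pts_cam[:, 2]
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / np.maximum(z, 1e-9)).astype(int)
    v = np.round(uv[:, 1] / np.maximum(z, 1e-9)).astype(int)
    # Keep only points in front of the camera that land inside the image.
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(points.shape[0], ignore_label, dtype=vfm_seg.dtype)
    labels[valid] = vfm_seg[v[valid], u[valid]]
    return labels
```

The resulting pseudo-labels would then supervise a 3D segmentation network on the raw scans, which is what lets the approach sidestep manual frame-wise annotation.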
Abstract

Frame-wise semantic segmentation of indoor lidar scans is a fundamental step toward higher-level 3D scene understanding and mapping applications. However, acquiring frame-wise ground truth for training deep learning models is costly and time-consuming. For imagery, this challenge is largely addressed by Visual Foundation Models (VFMs), which segment image frames. The same VFMs may be used to train a lidar scan frame segmentation model via a 2D-to-3D distillation pipeline. The success of such distillation has been shown for autonomous driving scenes, but not yet for indoor scenes. Here, we study the feasibility of repeating this success for indoor scenes in a frame-wise distillation manner, by coupling each lidar scan with a VFM-processed camera image. The evaluation uses indoor SLAM datasets, where pseudo-labels serve as the downstream evaluation reference. A small manually annotated lidar dataset is also provided for validation, as no other frame-wise indoor lidar datasets with semantic labels exist. Results show that the distilled model achieves up to 56% mIoU under pseudo-label evaluation and around 36% mIoU under real-label evaluation, demonstrating the feasibility of cross-modal distillation for indoor lidar semantic segmentation without manual annotations.