Multi-modal panoramic 3D outdoor datasets for place categorization

arXiv cs.RO / 4/16/2026


Key Points

  • The paper introduces two publicly available multi-modal panoramic 3D outdoor (MPO) datasets for semantic place categorization across six scene categories: forest, coast, residential area, urban area, indoor parking lot, and outdoor parking lot.
  • The dense dataset contains 650 static panoramic scans captured with a FARO laser scanner, each a 3D point cloud of about 9,000,000 points carrying color and reflectance information, with synchronized color images.
  • The sparse dataset contains 34,200 real-time panoramic scans captured with a Velodyne laser scanner while driving a car, each a 3D reflectance point cloud of about 70,000 points.
  • Experiments compare multiple semantic place categorization approaches and report best accuracies of 96.42% for dense data and 89.67% for sparse data.
  • Data collection was performed in Fukuoka, Japan, and the authors provide dataset access links for researchers to benchmark and build on.
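The scale gap between the two datasets (roughly 9,000,000 points per dense scan versus 70,000 per sparse scan) usually means dense scans must be thinned before processing. As a minimal sketch of one common approach, voxel-grid downsampling, here is an illustrative example; the actual MPO file format and point layout are not specified here, so the per-row layout `(x, y, z, r, g, b, reflectance)` is an assumption, and the data below is synthetic:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Keep one point per voxel cell to thin a dense scan.

    Assumes rows of (x, y, z, r, g, b, reflectance); only the first
    three columns (coordinates) determine the voxel a point falls in.
    """
    coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # np.unique over voxel indices keeps the first point seen per voxel
    _, keep = np.unique(coords, axis=0, return_index=True)
    return points[np.sort(keep)]

# Tiny synthetic stand-in for a real MPO panoramic scan
rng = np.random.default_rng(0)
scan = rng.uniform(0.0, 1.0, size=(1000, 7))
thinned = voxel_downsample(scan, voxel_size=0.5)
```

With a coarser `voxel_size` the reduction is stronger; in practice the value is chosen to match the spatial resolution the downstream categorizer needs.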

Abstract

We present two multi-modal panoramic 3D outdoor (MPO) datasets for semantic place categorization with six categories: forest, coast, residential area, urban area and indoor/outdoor parking lot. The first dataset consists of 650 static panoramic scans of dense (9,000,000 points) 3D color and reflectance point clouds obtained using a FARO laser scanner with synchronized color images. The second dataset consists of 34,200 real-time panoramic scans of sparse (70,000 points) 3D reflectance point clouds obtained using a Velodyne laser scanner while driving a car. The datasets were obtained in the city of Fukuoka, Japan and are publicly available in [1], [2]. In addition, we compare several approaches for semantic place categorization with best results of 96.42% (dense) and 89.67% (sparse).
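To make "semantic place categorization" concrete, here is an illustrative baseline, not the approaches compared in the paper: summarize each scan by a reflectance histogram and classify with nearest centroids. The `(N, 4)` scan layout `(x, y, z, reflectance)` and the synthetic data are assumptions for the sketch:

```python
import numpy as np

def reflectance_histogram(scan, bins=16):
    """Feature vector: normalized histogram of the reflectance column."""
    hist, _ = np.histogram(scan[:, 3], bins=bins, range=(0.0, 1.0), density=True)
    return hist

def nearest_centroid_fit(features, labels):
    """Mean feature vector per class."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(classes, centroids, feature):
    """Label of the closest class centroid in feature space."""
    return classes[np.argmin(np.linalg.norm(centroids - feature, axis=1))]

def make_scan(alpha, beta, n=5000, rng=None):
    """Synthetic scan: random coordinates plus Beta-distributed reflectance."""
    xyz = rng.uniform(-10.0, 10.0, (n, 3))
    refl = rng.beta(alpha, beta, (n, 1))  # reflectance in [0, 1]
    return np.hstack([xyz, refl])

# Two synthetic "place categories" with different reflectance statistics
rng = np.random.default_rng(1)
train_scans = [make_scan(2, 8, rng=rng) for _ in range(5)] + \
              [make_scan(8, 2, rng=rng) for _ in range(5)]
y = np.array([0] * 5 + [1] * 5)
X = np.stack([reflectance_histogram(s) for s in train_scans])

classes, centroids = nearest_centroid_fit(X, y)
test_scan = make_scan(8, 2, rng=rng)
pred = nearest_centroid_predict(classes, centroids, reflectance_histogram(test_scan))
```

Real methods on MPO exploit far richer cues (3D geometry, color, learned features), which is how accuracies like 96.42% on dense data become reachable; this sketch only shows the feature-then-classify structure of the task.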