BEV-SLD: Self-Supervised Scene Landmark Detection for Global Localization with LiDAR Bird's-Eye View Images
arXiv cs.CV / 3/19/2026
📰 News · Models & Research
Key Points
- BEV-SLD introduces a self-supervised LiDAR global localization method that uses bird's-eye-view images to discover scene-specific landmarks at a prescribed spatial density.
- It uses a consistency loss to align learnable global landmark coordinates with per-frame heatmaps, yielding stable landmark detections across the scene.
- The method achieves robust localization across campus, industrial, and forest environments and performs competitively with state-of-the-art methods.
- By focusing on scene-specific landmarks rather than scene-agnostic cues, BEV-SLD aims to improve robustness and accuracy for LiDAR-based localization in varied environments.
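The core training signal described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes learnable landmarks are stored as 2-D map-frame coordinates, that a known pose maps them into BEV pixel coordinates, and that each landmark has one detection heatmap whose differentiable peak (soft-argmax) should agree with the projected landmark. All function and variable names are hypothetical.

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Differentiable expected (x, y) peak location of a 2-D heatmap."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())   # softmax over all pixels
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(p * xs).sum(), (p * ys).sum()])

def consistency_loss(global_landmarks, pose, heatmaps):
    """Mean squared error between global landmarks projected into the
    current BEV frame and the soft-argmax peaks of per-frame heatmaps.

    global_landmarks: (K, 2) learnable map-frame coordinates (assumption).
    pose:             (R, t), 2x2 rotation and 2-vector, map -> BEV pixels.
    heatmaps:         (K, H, W), one detection heatmap per landmark.
    """
    R, t = pose
    proj = global_landmarks @ R.T + t                     # into BEV pixels
    peaks = np.stack([soft_argmax_2d(hm) for hm in heatmaps])
    return np.mean(np.sum((proj - peaks) ** 2, axis=1))
```

With sharp heatmap peaks coinciding with the projected landmarks, the loss is near zero; during training, gradients would flow both into the heatmap detector and into the global landmark coordinates, which is how the alignment becomes self-supervised.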