BEV-SLD: Self-Supervised Scene Landmark Detection for Global Localization with LiDAR Bird's-Eye View Images
arXiv cs.CV / 3/19/2026
Key Points
- BEV-SLD introduces a self-supervised LiDAR global localization method that uses bird's-eye-view images to discover scene-specific landmarks at a prescribed spatial density.
- It uses a consistency loss to align learnable global landmark coordinates with per-frame heatmaps, yielding stable landmark detections across the scene.
- The method localizes robustly across campus, industrial, and forest environments and compares favorably with state-of-the-art methods.
- By focusing on scene-specific landmarks rather than scene-agnostic cues, BEV-SLD aims to improve robustness and accuracy for LiDAR-based localization in varied environments.
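The consistency objective described in the key points can be sketched roughly as follows: project the learnable global landmark coordinates into each frame's BEV pixel grid using the frame pose, extract a differentiable peak from each predicted heatmap, and penalize the distance between the two. Note that the projection convention, soft-argmax peak extraction, and all function and parameter names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Differentiable peak location (x, y) of one landmark heatmap of shape (H, W)."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())   # softmax over all pixels, numerically stable
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(p * xs).sum(), (p * ys).sum()])

def world_to_bev(landmarks_xy, pose_r, pose_t, px_per_m, bev_size):
    """Project global 2-D landmarks into a frame's BEV pixel grid.

    pose_r / pose_t: rotation and translation of the sensor in the world frame
    (assumed convention); px_per_m: BEV resolution; bev_size: image side length.
    """
    local = (landmarks_xy - pose_t) @ pose_r          # world -> sensor frame
    return local * px_per_m + bev_size / 2.0          # sensor metres -> BEV pixels

def consistency_loss(global_landmarks, heatmaps, pose_r, pose_t, px_per_m, bev_size):
    """Mean squared pixel distance between projected global landmark
    coordinates and the per-frame heatmap peaks."""
    proj = world_to_bev(global_landmarks, pose_r, pose_t, px_per_m, bev_size)
    peaks = np.stack([soft_argmax_2d(h) for h in heatmaps])
    return np.mean(np.sum((proj - peaks) ** 2, axis=1))
```

Under this reading, gradients flow both into the heatmap network (through the soft-argmax) and into the global landmark coordinates (through the projection), which is what would pull the two representations into agreement across frames.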