
Spectral-Geometric Neural Fields for Pose-Free LiDAR View Synthesis

arXiv cs.CV / 3/16/2026


Key Points

  • SG-NLF presents a pose-free LiDAR NeRF framework that fuses spectral information with geometric consistency to address LiDAR sparsity and textureless regions.
  • The method uses a hybrid representation with spectral priors to reconstruct smoother geometry and a confidence-aware graph for global pose alignment during optimization.
  • An adversarial learning strategy enforces cross-frame consistency to boost reconstruction quality, especially in challenging low-frequency scenarios.
  • Experimental results show significant improvements over prior state-of-the-art methods, with reconstruction quality and pose accuracy gains of over 35.8% and 68.8%, respectively.

Abstract

Neural Radiance Fields (NeRF) have shown remarkable success in image novel view synthesis (NVS), inspiring extensions to LiDAR NVS. However, most methods rely heavily on accurate camera poses for scene reconstruction. The sparsity and textureless nature of LiDAR data also present distinct challenges, leading to geometric holes and discontinuous surfaces. To address these issues, we propose SG-NLF, a pose-free LiDAR NeRF framework that integrates spectral information with geometric consistency. Specifically, we design a hybrid representation based on spectral priors to reconstruct smooth geometry. For pose optimization, we construct a confidence-aware graph based on feature compatibility to achieve global alignment. In addition, an adversarial learning strategy is introduced to enforce cross-frame consistency, thereby enhancing reconstruction quality. Comprehensive experiments demonstrate the effectiveness of our framework, especially in challenging low-frequency scenarios. Compared to previous state-of-the-art methods, SG-NLF improves reconstruction quality and pose accuracy by over 35.8% and 68.8%, respectively. Our work offers a new perspective on LiDAR view synthesis.
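The paper does not release details of the confidence-aware pose graph, but the general idea of global alignment from pairwise constraints weighted by confidence can be sketched as a weighted least-squares problem. The snippet below is a minimal illustration on 1-D translations only (SG-NLF operates on full 6-DoF LiDAR poses); the confidence scores standing in for "feature compatibility" are hypothetical.

```python
import numpy as np

def align_poses(relative, n_frames):
    """Globally align per-frame translations from pairwise relative
    estimates, weighting each constraint by a confidence score.

    relative: list of (i, j, t_ij, conf), meaning pose[j] - pose[i] ~ t_ij
              with confidence conf (hypothetical stand-in for SG-NLF's
              feature-compatibility weights).
    Returns absolute 1-D translations with pose[0] fixed at 0 (gauge).
    """
    m = len(relative)
    A = np.zeros((m + 1, n_frames))   # one row per constraint + gauge row
    b = np.zeros(m + 1)
    w = np.ones(m + 1)
    for k, (i, j, t_ij, conf) in enumerate(relative):
        A[k, i], A[k, j] = -1.0, 1.0  # encodes pose[j] - pose[i] = t_ij
        b[k] = t_ij
        w[k] = conf
    A[-1, 0] = 1.0                    # anchor the first pose at zero
    w[-1] = 1e6                       # near-hard gauge constraint
    sw = np.sqrt(w)                   # weighted least squares via scaling
    sol, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return sol
```

With two confident unit-step edges and one low-confidence outlier edge, the low-confidence constraint barely perturbs the aligned trajectory, which is the qualitative behavior a confidence-aware graph is meant to provide.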