Towards Practical Lossless Neural Compression for LiDAR Point Clouds

arXiv cs.CV / 3/27/2026


Key Points

  • The paper tackles inefficient context modeling in lossless compression of sparse, high-precision LiDAR point clouds by introducing a compact, predictive coding framework aimed at higher speed and performance.
  • It proposes a Geometry Re-Densification module that iteratively densifies sparse geometry, extracts features at a dense scale, then sparsifies those features to keep prediction lightweight while avoiding expensive computation on extremely sparse details.
  • It adds a Cross-scale Feature Propagation module that uses occupancy cues across multiple resolutions to guide hierarchical feature sharing, reducing redundant feature extraction.
  • The authors introduce an integer-only inference pipeline for bit-exact, cross-platform consistency, preventing the “entropy-coding collapse” seen in some neural compression approaches while further accelerating coding.
  • Experimental results report competitive compression performance at real-time speed; code is slated for release upon acceptance, and a repository link is already provided.

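The densify → extract → sparsify loop described above can be illustrated with a toy sketch. This is not the paper's implementation: all names are made up, the "feature" is just a local occupancy count, and the grid wraps at its borders (a simplification of real padding). It only shows the control flow: features are computed on a densified support, then gathered back at the original sparse voxels so the prediction head stays small.

```python
import numpy as np

def densify(occupancy: np.ndarray) -> np.ndarray:
    """1-voxel 3D dilation of a binary occupancy grid (toy densification)."""
    dense = occupancy.copy()
    for axis in range(3):
        for shift in (-1, 1):
            dense |= np.roll(occupancy, shift, axis=axis)  # wraps at borders
    return dense

def extract_features(dense: np.ndarray) -> np.ndarray:
    """Toy feature at dense scale: occupancy count in a 3x3x3 window."""
    feat = np.zeros(dense.shape, dtype=np.int32)
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                feat += np.roll(dense, (dz, dy, dx), axis=(0, 1, 2))
    return feat

def redensify_pipeline(occupancy: np.ndarray) -> np.ndarray:
    dense = densify(occupancy)           # 1) densify sparse geometry
    feat = extract_features(dense)       # 2) extract features at dense scale
    sparse_idx = np.argwhere(occupancy)  # 3) sparsify: keep occupied voxels only
    return feat[tuple(sparse_idx.T)]     # features fed to a lightweight predictor
```

The point of the pattern is that the (comparatively expensive) feature extraction runs on a regular, densified support instead of on extremely sparse high-precision detail, while the output stays sparse.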
Abstract

LiDAR point clouds are fundamental to various applications, yet the extreme sparsity of high-precision geometric details hinders efficient context modeling, thereby limiting the compression speed and performance of existing methods. To address this challenge, we propose a compact representation for efficient predictive lossless coding. Our framework comprises two lightweight modules. First, the Geometry Re-Densification Module iteratively densifies encoded sparse geometry, extracts features at a dense scale, and then sparsifies the features for predictive coding. This module avoids costly computation on highly sparse details while maintaining a lightweight prediction head. Second, the Cross-scale Feature Propagation Module leverages occupancy cues from multiple resolution levels to guide hierarchical feature propagation, enabling information sharing across scales and reducing redundant feature extraction. Additionally, we introduce an integer-only inference pipeline to enable bit-exact cross-platform consistency, which avoids the entropy-coding collapse observed in existing neural compression methods and further accelerates coding. Experiments demonstrate competitive compression performance at real-time speed. Code will be released upon acceptance; a repository is available at https://github.com/pengpeng-yu/FastPCC.
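To see why integer-only inference matters for entropy coding, consider a minimal sketch (an illustrative scheme, not the paper's pipeline; `SCALE` and the frequency mapping are assumptions). Floating-point layers can round differently across platforms, so the encoder and decoder may derive slightly different symbol probabilities and the arithmetic/range coder desynchronizes. If weights and activations are pre-quantized and all arithmetic stays in integers, both sides reproduce identical frequency tables bit for bit.

```python
import numpy as np

SCALE = 256  # fixed-point scale; an assumed hyperparameter

def int_linear(x_q: np.ndarray, w_q: np.ndarray, b_q: np.ndarray) -> np.ndarray:
    """Quantized linear layer: int64 accumulation, then deterministic rescale."""
    acc = x_q.astype(np.int64) @ w_q.astype(np.int64).T + b_q
    return acc // SCALE  # floor division: no float rounding, platform-exact

def int_freqs(logits_q: np.ndarray, total: int = 1 << 16) -> np.ndarray:
    """Map integer logits to integer symbol frequencies summing to `total`,
    as an entropy coder would consume them. Purely integer, hence bit-exact."""
    pos = logits_q.astype(np.int64) - logits_q.min() + 1   # strictly positive
    freqs = pos * (total - len(pos)) // pos.sum() + 1      # every symbol >= 1
    freqs[0] += total - freqs.sum()                        # absorb remainder
    return freqs
```

Because every operation here (integer matmul, floor division, integer sums) is exactly specified, an encoder on one machine and a decoder on another compute identical `freqs`, which is the property that prevents the desynchronization referred to as entropy-coding collapse.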