PROBE: Probabilistic Occupancy BEV Encoding with Analytical Translation Robustness for 3D Place Recognition

arXiv cs.RO / 5/6/2026


Key Points

  • The PROBE method introduces a learning-free LiDAR 3D place recognition descriptor that represents BEV grid cell occupancy as a Bernoulli random variable.
  • Instead of using discrete point-cloud perturbations, PROBE analytically marginalizes continuous Cartesian translation uncertainty using a polar Jacobian to produce distance-adaptive angular uncertainty efficiently.
  • Similarity scoring combines a Bernoulli-KL Jaccard term with exponential uncertainty gating, plus an FFT-based height cosine similarity that handles rotation alignment.
  • Experiments across four datasets with four different LiDAR types show PROBE achieves the best accuracy among handcrafted descriptors in multi-session evaluation and strong performance in single-session settings versus both handcrafted and supervised baselines.
  • The work emphasizes a sensor-independent physical parameter (expected translational uncertainty in meters) to improve cross-sensor generalization and reduce dataset-specific tuning, with code released publicly.
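
The distance-adaptive uncertainty in the second bullet admits a very short sketch. The function name, zero-range handling, and clipping behaviour below are illustrative assumptions, not the paper's implementation; only the relation σ_θ = σ_t / r comes from the source.

```python
import numpy as np

def angular_uncertainty(radii, sigma_t, sigma_max=np.pi):
    """Distance-adaptive angular std dev: sigma_theta = sigma_t / r.

    A Cartesian translation uncertainty of sigma_t metres maps, via the
    polar Jacobian, to an angular uncertainty that shrinks with range r.
    Clipping at sigma_max and the epsilon guard are assumptions.
    """
    radii = np.asarray(radii, dtype=float)
    # Guard against division by zero at the sensor origin.
    sigma_theta = sigma_t / np.maximum(radii, 1e-9)
    return np.minimum(sigma_theta, sigma_max)

# Nearby rings get a wide angular spread, distant rings a narrow one.
rings = np.array([1.0, 5.0, 20.0, 80.0])   # ring radii in metres
print(angular_uncertainty(rings, sigma_t=2.0))
```

Because the spread is computed per ring rather than per cell, the whole table of angular uncertainties costs O(R·S) for R rings and S sectors, matching the complexity quoted in the abstract.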

Abstract

We present PROBE (PRobabilistic Occupancy BEV Encoding), a learning-free LiDAR place recognition descriptor that models each BEV cell's occupancy as a Bernoulli random variable. Rather than relying on discrete point-cloud perturbations, PROBE analytically marginalizes over continuous Cartesian translations via the polar Jacobian, yielding a distance-adaptive angular uncertainty σ_θ = σ_t / r in O(R·S) time. The primary parameter σ_t represents the expected translational uncertainty in meters, a sensor-independent physical quantity that enhances cross-sensor generalization while reducing the need for extensive per-dataset tuning. Pairwise similarity combines a Bernoulli-KL Jaccard with exponential uncertainty gating and FFT-based height cosine similarity for rotation alignment. Evaluated on four datasets spanning four diverse LiDAR types, PROBE achieves the highest accuracy among handcrafted descriptors in multi-session evaluation and competitive single-session performance relative to both handcrafted and supervised baselines. The source code and supplementary materials are available at https://sites.google.com/view/probe-pr.
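
The FFT-based rotation alignment mentioned in the abstract can be sketched as a circular cross-correlation over the angular axis of a polar (rings × sectors) descriptor. The descriptor layout, aggregation over rings, and lack of normalisation below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def best_yaw_shift(desc_a, desc_b):
    """Return the sector shift k such that rolling desc_a by k columns
    best matches desc_b, plus the corresponding correlation score.

    A minimal sketch of FFT-based rotation alignment: the circular
    cross-correlation along the sector axis is computed for every ring
    at once via IFFT(FFT(b) * conj(FFT(a))), then summed over rings.
    """
    corr = np.fft.ifft(np.fft.fft(desc_b, axis=1) *
                       np.conj(np.fft.fft(desc_a, axis=1)), axis=1).real
    score = corr.sum(axis=0)          # aggregate evidence over rings
    return int(np.argmax(score)), float(score.max())

# Toy check: desc_b is desc_a rotated by 3 sectors.
a = np.random.default_rng(0).random((4, 16))
b = np.roll(a, 3, axis=1)
shift, _ = best_yaw_shift(a, b)
print(shift)  # → 3
```

The FFT reduces the cost of testing all S candidate rotations from O(S²) to O(S log S) per ring; the recovered shift can then seed a cosine-similarity comparison of the aligned height descriptors.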