Predictive Photometric Uncertainty in Gaussian Splatting for Novel View Synthesis

arXiv cs.CV · March 25, 2026

Key Points

  • The paper proposes a plug-and-play framework to estimate pixel-wise, view-dependent predictive uncertainty for 3D Gaussian Splatting, aiming to make it reliable for autonomous and safety-critical use cases.
  • It introduces a post-hoc method that models uncertainty via Bayesian-regularized linear least-squares optimization over reconstruction residuals, extracting an uncertainty channel without changing the underlying scene representation.
  • The approach is architecture-agnostic and is designed not to degrade baseline rendering fidelity while still providing uncertainty outputs per primitive.
  • Experiments show that the resulting reliability signal improves state-of-the-art performance on downstream tasks including active view selection, pose-agnostic scene change detection, and pose-agnostic anomaly detection.
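The paper's exact formulation is not reproduced here, but the core idea named above — Bayesian-regularized linear least squares over reconstruction residuals, yielding a predictive variance alongside the fit — can be sketched with standard Bayesian linear regression. Everything below is a hypothetical illustration: the feature matrix `Phi` (per-primitive features), the residual targets `r`, and the precision hyperparameters `alpha`/`beta` are assumptions, not the authors' implementation.

```python
import numpy as np

def bayesian_ridge_fit(Phi, r, alpha=1.0, beta=25.0):
    """Closed-form Bayesian linear regression (MAP / ridge with a posterior).

    alpha: precision of the zero-mean Gaussian prior on the weights.
    beta:  precision of the observation noise.
    Returns the posterior mean m and covariance S of the weights.
    """
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ r
    return m, S

def predictive_uncertainty(phi_new, m, S, beta=25.0):
    """Predictive mean and variance for a new feature vector phi_new.

    Variance = noise term (1/beta) + model term phi^T S phi; the model term
    grows where the data constrain the fit poorly -- this plays the role of
    the per-primitive "uncertainty channel" described above.
    """
    mean = phi_new @ m
    var = 1.0 / beta + phi_new @ S @ phi_new
    return mean, var

# Toy usage: 1D features with a bias column, synthetic residuals.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
Phi = np.stack([np.ones_like(x), x], axis=1)
r = 0.5 * x + rng.normal(scale=0.2, size=50)
m, S = bayesian_ridge_fit(Phi, r)
mu, var = predictive_uncertainty(np.array([1.0, 0.3]), m, S)
```

Because the fit is a closed-form linear solve, it can be run post hoc on a frozen scene representation — consistent with the plug-and-play, architecture-agnostic framing above — though the actual residual features used by the paper are not specified in this summary.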

Abstract

Recent advances in 3D Gaussian Splatting have enabled impressive photorealistic novel view synthesis. However, to transition from a pure rendering engine to a reliable spatial map for autonomous agents and safety-critical applications, knowing where the representation is uncertain is as important as the rendering fidelity itself. We bridge this critical gap by introducing a lightweight, plug-and-play framework for pixel-wise, view-dependent predictive uncertainty estimation. Our post-hoc method formulates uncertainty as a Bayesian-regularized linear least-squares optimization over reconstruction residuals. This architecture-agnostic approach extracts a per-primitive uncertainty channel without modifying the underlying scene representation or degrading baseline visual fidelity. Crucially, we demonstrate that providing this actionable reliability signal successfully translates 3D Gaussian Splatting into a trustworthy spatial map, further improving state-of-the-art performance across three critical downstream perception tasks: active view selection, pose-agnostic scene change detection, and pose-agnostic anomaly detection.