PDF-GS: Progressive Distractor Filtering for Robust 3D Gaussian Splatting

arXiv cs.CV · April 15, 2026


Key Points

  • The paper argues that standard 3D Gaussian Splatting (3DGS) training is vulnerable to “distractors” (inconsistent multi-view signals) that violate the usual multi-view consistency assumption and produce visual artifacts.
  • It introduces PDF-GS (Progressive Distractor Filtering), a multi-phase optimization framework that progressively filters distractors using discrepancy cues before running a final reconstruction phase for fine, view-consistent details.
  • The method leverages 3DGS’s inherent ability to suppress inconsistent signals, and amplifies it with iterative refinement to produce robust, high-fidelity, distractor-free reconstructions.
  • PDF-GS reports consistent performance improvements over baselines across diverse datasets and challenging real-world conditions.
  • The authors claim the approach is lightweight and adaptable to existing 3DGS pipelines without architectural changes or extra inference overhead, and they release code publicly.
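The filtering idea in the bullets above can be illustrated with a small sketch. The paper's exact discrepancy cue and schedule are not specified here, so the snippet below is a hypothetical version: during the filtering phases it masks out pixels whose photometric residual (rendered vs. observed) falls above a progressively tightening inlier quantile, and in the final reconstruction phase it trains on the full, purified signal. Function names, the quantile schedule, and the residual choice are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def distractor_mask(rendered, observed, keep_quantile):
    """Flag pixels whose photometric residual exceeds the inlier quantile.

    Hypothetical discrepancy cue: per-pixel mean absolute color error.
    Returns a boolean H x W mask where True means "keep this pixel".
    """
    residual = np.abs(rendered - observed).mean(axis=-1)  # H x W
    threshold = np.quantile(residual, keep_quantile)
    return residual <= threshold

def progressive_filtering_loss(rendered, observed, phase, num_filter_phases=3):
    """Masked L1 loss with a tightening keep-quantile across filtering phases.

    Phases 0..num_filter_phases-1 filter distractors with shrinking quantiles
    (illustrative schedule: 95% -> 90% -> 85%); later phases use all pixels,
    mimicking the final reconstruction phase on the purified representation.
    """
    if phase < num_filter_phases:
        keep_quantile = 0.95 - 0.05 * phase  # assumed schedule
        mask = distractor_mask(rendered, observed, keep_quantile)
    else:
        mask = np.ones(rendered.shape[:2], dtype=bool)  # reconstruction phase
    diff = np.abs(rendered - observed).mean(axis=-1)
    return (diff * mask).sum() / max(mask.sum(), 1)
```

With a single inconsistent pixel (e.g., a transient occluder in one view), the masked loss during a filtering phase ignores it, while the final-phase loss sees the full image, which is consistent with the progressive filter-then-reconstruct behavior described above.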

Abstract

Recent advances in 3D Gaussian Splatting (3DGS) have enabled impressive real-time photorealistic rendering. However, conventional training pipelines inherently assume full multi-view consistency among input images, which makes them sensitive to distractors that violate this assumption and cause visual artifacts. In this work, we revisit an underexplored aspect of 3DGS: its inherent ability to suppress inconsistent signals. Building on this insight, we propose PDF-GS (Progressive Distractor Filtering for Robust 3D Gaussian Splatting), a framework that amplifies this self-filtering property through a progressive multi-phase optimization. The progressive filtering phases gradually remove distractors by exploiting discrepancy cues, while the following reconstruction phase restores fine-grained, view-consistent details from the purified Gaussian representation. Through this iterative refinement, PDF-GS achieves robust, high-fidelity, and distractor-free reconstructions, consistently outperforming baselines across diverse datasets and challenging real-world conditions. Moreover, our approach is lightweight and easily adaptable to existing 3DGS frameworks, requiring no architectural changes or additional inference overhead, and achieves new state-of-the-art performance. The code is publicly available at https://github.com/kangrnin/PDF-GS.