Fast and Robust Deformable 3D Gaussian Splatting

arXiv cs.CV / 3/24/2026


Key Points

  • The paper introduces FRoG, a framework for fast and robust deformable 3D Gaussian Splatting aimed at high-quality dynamic scene reconstruction.
  • FRoG speeds up rendering by combining per-Gaussian embeddings with a coarse-to-fine temporal embedding scheme, fusing the temporal embeddings early in the deformation pipeline.
  • To address sensitivity to sparse initial point clouds, it proposes a depth- and error-guided sampling strategy that adds Gaussians near low-deviation canonical positions to reduce the optimization load on the deformation field.
  • The method mitigates local optima issues in dim scenes by modulating opacity variations, which improves color fidelity.
  • Experiments reportedly confirm that FRoG achieves accelerated rendering while preserving state-of-the-art visual quality for both static and dynamic regions.
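The deformation-field design described in the bullets above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's architecture: the dimensions, the coarse time-code table, and the tiny random-weight MLP are all assumptions. The idea shown is that each Gaussian keeps canonical attributes plus a learned embedding, a temporal embedding for time `t` is fused with it once ("early fusion"), and a shared MLP then predicts per-Gaussian position offsets.

```python
import numpy as np

rng = np.random.default_rng(0)

N, D_G, D_T, D_H = 5, 8, 4, 16   # Gaussians, embedding dims, hidden width

# Canonical field: positions plus a learned per-Gaussian embedding.
canonical_xyz = rng.normal(size=(N, 3))
gaussian_emb = rng.normal(size=(N, D_G))

# Coarse temporal embedding table, linearly interpolated at query time
# t in [0, 1]; a coarse-to-fine scheme would blend in finer tables as
# training progresses.
T_COARSE = 6
time_codes = rng.normal(size=(T_COARSE, D_T))

def temporal_embedding(t: float) -> np.ndarray:
    """Linearly interpolate the coarse time-code table at time t."""
    x = t * (T_COARSE - 1)
    i = min(int(x), T_COARSE - 2)
    w = x - i
    return (1 - w) * time_codes[i] + w * time_codes[i + 1]

# Tiny deformation MLP with random weights (illustrative only).
W1 = rng.normal(size=(D_G + D_T, D_H)) * 0.1
W2 = rng.normal(size=(D_H, 3)) * 0.1

def deform(t: float) -> np.ndarray:
    """Early fusion: concatenate the temporal embedding onto every
    per-Gaussian embedding once, then run one shared MLP pass that
    outputs xyz offsets from the canonical positions."""
    z_t = temporal_embedding(t)
    fused = np.concatenate([gaussian_emb, np.tile(z_t, (N, 1))], axis=1)
    delta = np.maximum(fused @ W1, 0.0) @ W2   # ReLU MLP
    return canonical_xyz + delta

print(deform(0.5).shape)  # (5, 3)
```

Fusing the temporal embedding before the MLP means the time-dependent input is built once per frame rather than per layer, which is one plausible source of the reported speedup.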

Abstract

3D Gaussian Splatting has demonstrated remarkable real-time rendering capabilities and superior visual quality in novel view synthesis for static scenes. Building upon these advantages, researchers have progressively extended 3D Gaussians to dynamic scene reconstruction. Deformation field-based methods have emerged as a promising approach among various techniques. These methods maintain 3D Gaussian attributes in a canonical field and employ the deformation field to transform this field across temporal sequences. Nevertheless, these approaches frequently encounter challenges such as suboptimal rendering speeds, significant dependence on initial point clouds, and vulnerability to local optima in dim scenes. To overcome these limitations, we present FRoG, an efficient and robust framework for high-quality dynamic scene reconstruction. FRoG integrates per-Gaussian embedding with a coarse-to-fine temporal embedding strategy, accelerating rendering through the early fusion of temporal embeddings. Moreover, to enhance robustness against sparse initializations, we introduce a novel depth- and error-guided sampling strategy. This strategy populates the canonical field with new 3D Gaussians at low-deviation initial positions, significantly reducing the optimization burden on the deformation field and improving detail reconstruction in both static and dynamic regions. Furthermore, by modulating opacity variations, we mitigate the local optima problem in dim scenes, improving color fidelity. Comprehensive experimental results validate that our method achieves accelerated rendering speeds while maintaining state-of-the-art visual quality.
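The depth- and error-guided sampling strategy from the abstract can be illustrated with a short numpy sketch. This is a guess at the mechanism, not the paper's implementation: the threshold, intrinsics, and buffer names are invented. The idea shown is to select pixels with high rendering error and back-project them through the rendered depth map, so new Gaussians are spawned near plausible 3D positions instead of being left for the deformation field to drag into place.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W = 4, 4
fx = fy = 2.0                       # toy pinhole intrinsics
cx, cy = (W - 1) / 2, (H - 1) / 2

# Stand-ins for real render buffers: per-pixel error and rendered depth.
error = rng.uniform(size=(H, W))
depth = rng.uniform(1.0, 3.0, size=(H, W))

def sample_new_gaussians(error, depth, err_thresh=0.7):
    """Pick high-error pixels and back-project them through the depth
    map, yielding camera-frame positions for new canonical Gaussians."""
    vs, us = np.nonzero(error > err_thresh)
    z = depth[vs, us]
    x = (us - cx) / fx * z          # pinhole back-projection
    y = (vs - cy) / fy * z
    return np.stack([x, y, z], axis=1)

new_xyz = sample_new_gaussians(error, depth)
print(new_xyz.shape)                # (K, 3) for K high-error pixels
```

Seeding Gaussians at depth-consistent positions keeps their canonical placement close to the surface, which is what lets the deformation field spend its capacity on motion rather than on correcting bad initializations.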