Relaxed Rigidity with Ray-based Grouping for Dynamic Gaussian Splatting

arXiv cs.CV / 3/27/2026


Key Points

  • The paper addresses a core problem in dynamic 3D Gaussian Splatting: predicted motion of Gaussians often fails to match physically plausible dynamics, especially on monocular video datasets.
  • It proposes a view-space ray grouping strategy that clusters the Gaussians intersected by the same camera ray, retaining only those whose alpha-blending weights exceed a threshold.
  • By applying constraints across these ray-based groups, the method preserves consistent spatial distribution and stabilizes local geometry over time in 4D scenes.
  • The approach aims to remove dependence on external temporal priors such as optical flow or 2D tracking by enforcing coherence internally through the grouped-geometry constraints.
  • Experiments on difficult monocular datasets show improved temporal consistency and reconstruction quality, with the method outperforming existing approaches when integrated into two baseline models.
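The ray-grouping step in the bullets above can be sketched in a few lines. This is an illustrative toy implementation, not the paper's code: the function name, the weight-matrix layout, and the default threshold are all assumptions for illustration.

```python
import numpy as np

def group_gaussians_by_ray(weights, alpha_thresh=0.05):
    """Toy sketch of view-space ray grouping (hypothetical API, not the
    paper's implementation). `weights[r, g]` holds the alpha-blending
    weight that Gaussian g contributes along camera ray r.

    For each ray, the Gaussians whose blending weight exceeds the
    threshold form one group; rays with fewer than two surviving
    Gaussians yield no group.
    """
    groups = []
    for ray_w in weights:                       # one row per camera ray
        members = np.nonzero(ray_w > alpha_thresh)[0]
        if members.size >= 2:                   # a group needs >= 2 members
            groups.append(members)
    return groups

# Two rays over four Gaussians: each ray keeps its two dominant Gaussians.
weights = np.array([[0.6, 0.3, 0.01, 0.0],
                    [0.02, 0.0, 0.5, 0.4]])
groups = group_gaussians_by_ray(weights)        # → [array([0, 1]), array([2, 3])]
```

In a real splatting pipeline these per-ray weights would come out of the rasterizer's compositing pass rather than a dense matrix, but the thresholded-membership logic is the same.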

Abstract

The reconstruction of dynamic 3D scenes using 3D Gaussian Splatting has shown significant promise. A key challenge, however, remains in modeling realistic motion, as most methods fail to align the motion of Gaussians with real-world physical dynamics. This misalignment is particularly problematic for monocular video datasets, where failing to maintain coherent motion undermines local geometric structure, ultimately leading to degraded reconstruction quality. Consequently, many state-of-the-art approaches rely heavily on external priors, such as optical flow or 2D tracks, to enforce temporal coherence. In this work, we propose a novel method to explicitly preserve the local geometric structure of Gaussians across time in 4D scenes. Our core idea is to introduce a view-space ray grouping strategy that clusters Gaussians intersected by the same ray, considering only those whose α-blending weights exceed a threshold. We then apply constraints to these groups to maintain a consistent spatial distribution, effectively preserving their local geometry. This approach enforces a more physically plausible motion model by ensuring that local geometry remains stable over time, eliminating the reliance on external guidance. We demonstrate the efficacy of our method by integrating it into two distinct baseline models. Extensive experiments on challenging monocular datasets show that our approach significantly outperforms existing methods, achieving superior temporal consistency and reconstruction quality.
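The abstract's group constraint ("maintain a consistent spatial distribution" within each ray group) can be illustrated with a relaxed-rigidity penalty on pairwise distances. The exact loss is not given in this summary, so the form below is an assumed stand-in: it penalizes how much each group's internal pairwise distances at time t deviate from a reference frame, which keeps local geometry stable without any external optical-flow or tracking prior.

```python
import numpy as np

def group_rigidity_loss(pos_t, pos_ref, groups):
    """Illustrative relaxed-rigidity penalty (assumed form, not the
    paper's exact loss). `pos_t` and `pos_ref` are (N, 3) arrays of
    Gaussian centers at time t and at a reference frame; `groups` is a
    list of index arrays, one per ray group. For each group, the loss
    is the mean absolute change of pairwise distances, so any rigid
    motion (rotation + translation) of a group costs nothing, while
    distortions of its local geometry are penalized."""
    loss = 0.0
    for g in groups:
        # Pairwise distance matrices within the group via broadcasting.
        d_t = np.linalg.norm(pos_t[g][:, None] - pos_t[g][None, :], axis=-1)
        d_r = np.linalg.norm(pos_ref[g][:, None] - pos_ref[g][None, :], axis=-1)
        loss += np.abs(d_t - d_r).mean()
    return loss / max(len(groups), 1)

pos_ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
group = [np.array([0, 1, 2])]
# A pure translation preserves all pairwise distances: zero penalty.
translated = pos_ref + 1.0
# Stretching one point distorts the group's geometry: positive penalty.
stretched = pos_ref.copy(); stretched[2] = [0., 2., 0.]
```

In a full training loop this term would be differentiated with respect to the deformed positions (e.g. in PyTorch) and added to the photometric loss; the NumPy version above just shows the geometry being measured.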