Gaussians on a Diet: High-Quality Memory-Bounded 3D Gaussian Splatting Training

arXiv cs.CV / 4/23/2026

📰 News · Models & Research

Key Points

  • The paper addresses a major limitation of 3D Gaussian Splatting: very high memory usage during training caused by uncontrolled densification of Gaussian primitives.
  • It proposes a memory-bounded training framework that keeps memory usage near-constant by iteratively alternating pruning of low-impact Gaussians with strategic growth of new primitives.
  • An adaptive “Gaussian compensation” mechanism is used to preserve or improve rendering quality while limiting peak memory spikes early in training.
  • Experiments on multiple real-world datasets under strict memory constraints show substantial gains over existing state-of-the-art approaches.
  • The method is demonstrated on the NVIDIA Jetson AGX Xavier, enabling memory-efficient 3DGS training with up to 80% lower peak training memory while maintaining similar visual quality.
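The alternating prune-and-grow cycle described in the key points can be sketched as a single budget-constrained step. This is an illustrative sketch only, not the paper's implementation: the function name `memory_bounded_step`, the use of opacity as the "low-impact" score, the positional-gradient magnitude as the growth signal, and the `prune_frac` parameter are all assumptions made here for clarity.

```python
import numpy as np

def memory_bounded_step(opacities, grads, budget, prune_frac=0.05):
    """One prune-then-grow cycle under a fixed Gaussian budget (sketch).

    opacities : per-Gaussian opacity, used here as a proxy for rendering impact
    grads     : per-Gaussian positional-gradient magnitude (densification signal)
    budget    : maximum number of Gaussians allowed to exist at any time
    Returns (keep, grow_from): indices of surviving Gaussians, and indices of
    the Gaussians whose regions should receive new primitives.
    """
    n = len(opacities)
    # Prune the lowest-impact Gaussians (smallest opacity).
    n_prune = int(prune_frac * n)
    keep = np.argsort(opacities)[n_prune:]
    # Grow only as many new primitives as the freed budget allows,
    # prioritising regions with large reconstruction gradients, so the
    # total count never exceeds the budget (near-constant memory).
    n_grow = max(0, min(budget - len(keep), n_prune))
    grow_from = keep[np.argsort(grads[keep])[::-1][:n_grow]]
    return keep, grow_from
```

Because pruning always precedes growth and growth is capped by the remaining budget, the total primitive count stays bounded throughout training, which is the property that avoids the early-training memory spikes the paper targets.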

Abstract

3D Gaussian Splatting (3DGS) has revolutionized novel view synthesis, delivering high-quality rendering through continuous aggregations of millions of 3D Gaussian primitives. However, it suffers from a substantial memory footprint, particularly during training, where uncontrolled densification poses a critical bottleneck for deployment on memory-constrained edge devices. While existing methods prune redundant Gaussians post-training, they fail to address the peak memory spikes caused by the abrupt growth of Gaussians early in training. To address this training-time memory problem, we propose a systematic memory-bounded training framework that dynamically optimizes Gaussians through iterative growth and pruning. Specifically, the framework alternates between incremental pruning of low-impact Gaussians and strategic growth of new primitives with an adaptive Gaussian compensation mechanism, maintaining near-constant low memory usage while progressively refining rendering fidelity. We comprehensively evaluate the proposed framework on various real-world datasets under strict memory constraints, showing significant improvements over existing state-of-the-art methods. In particular, our method enables memory-efficient 3DGS training on the NVIDIA Jetson AGX Xavier, achieving similar visual quality with up to 80% lower peak training memory consumption than the original 3DGS.