MesonGS++: Post-training Compression of 3D Gaussian Splatting with Hyperparameter Searching

arXiv cs.CV · April 30, 2026


Key Points

  • The paper introduces MesonGS++, a size-aware post-training codec designed to compress 3D Gaussian Splatting (3DGS) while controlling the final output size more effectively than prior methods.
  • MesonGS++ improves compression by combining importance-based pruning, octree geometry coding, attribute transformations, selective vector quantization for higher-order spherical harmonics, and group-wise mixed-precision quantization with entropy coding.
  • Instead of treating many hyperparameters as tightly coupled, it focuses on reserve ratio and bit-width allocation as the main rate–distortion knobs, optimizing them under a target storage budget via discrete sampling and 0–1 integer linear programming.
  • To speed up the search, the authors propose a linear size estimator and a CUDA parallel quantization operator.
  • Experiments on multiple scenes show MesonGS++ exceeds 34× compression while preserving rendering quality, can even surpass the PSNR of vanilla 3DGS at 20× compression on the Stump scene without any retraining, and the implementation is released on GitHub.
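As a rough illustration of the search described above (not the authors' implementation), choosing a reserve ratio and bit-width under a size budget can be framed as a 0–1 selection over discretely sampled candidate configurations. The sketch below brute-forces that selection with a toy linear size model and a toy distortion proxy; all numbers, function names, and the distortion formula are illustrative assumptions, not values from the paper:

```python
from itertools import product

# Hypothetical discrete candidates (illustrative values, not from the paper).
reserve_ratios = [0.3, 0.5, 0.7]   # fraction of Gaussians kept after pruning
bit_widths = [4, 6, 8]             # quantization bit-width per attribute
n_gaussians = 1_000_000            # toy scene size
params_per_gaussian = 48           # toy attribute count per Gaussian
budget_bytes = 20_000_000          # target storage budget

def est_size(ratio, bits):
    """Toy linear size estimate: kept parameters times bit-width, in bytes."""
    return n_gaussians * ratio * params_per_gaussian * bits / 8

def distortion_proxy(ratio, bits):
    """Toy distortion: decreases as more Gaussians and more bits are kept."""
    return (1 - ratio) + 1.0 / bits

# Discrete sampling + feasibility check stands in for the paper's 0-1 ILP:
# each (ratio, bits) pair is a binary choice, at most one is selected.
best = None
for ratio, bits in product(reserve_ratios, bit_widths):
    if est_size(ratio, bits) <= budget_bytes:       # size-budget constraint
        d = distortion_proxy(ratio, bits)
        if best is None or d < best[0]:
            best = (d, ratio, bits)

print(best)  # lowest-distortion configuration that fits the budget
```

A real ILP solver would optimize all groups' bit-widths jointly in one model; the exhaustive loop here is only practical because the toy candidate grid is tiny.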

Abstract

3D Gaussian Splatting (3DGS) achieves high-quality novel view synthesis with real-time rendering, but its storage cost remains prohibitive for practical deployment. Existing post-training compression methods still rely on many coupled hyperparameters across pruning, transformation, quantization, and entropy coding, making it difficult to control the final compressed size and fully exploit the rate–distortion trade-off. We propose MesonGS++, a size-aware post-training codec for 3D Gaussian compression. On the codec side, MesonGS++ combines joint importance-based pruning, octree geometry coding, attribute transformation, selective vector quantization for higher-degree spherical harmonics, and group-wise mixed-precision quantization with entropy coding. On the configuration side, it treats the reserve ratio and bit-width allocation as the dominant rate–distortion knobs and jointly optimizes them under a target storage budget via discrete sampling and 0–1 integer linear programming. We further propose a linear size estimator and a CUDA parallel quantization operator to accelerate the hyperparameter searching process. Extensive experiments show that MesonGS++ achieves over 34× compression while preserving rendering fidelity, outperforming state-of-the-art post-training methods and accurately meeting target size budgets. Remarkably, without any training, MesonGS++ can even surpass the PSNR of vanilla 3DGS at a 20× compression rate on the Stump scene. Our code is available at https://github.com/mmlab-sigs/mesongs_plus
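The abstract's "linear size estimator" implies the compressed size can be predicted as a function that is linear in the number of kept Gaussians and in each group's bit-width, which is what makes the budget constraint tractable for an ILP. A minimal sketch under that assumption, with hypothetical attribute groups and coefficients (the group names, entropy factor, and header size are made up for illustration):

```python
# Hypothetical attribute groups: (parameters per Gaussian, assigned bit-width).
# The entropy factor < 1 crudely models average savings from entropy coding;
# all numbers are illustrative assumptions, not taken from the paper.
groups = {
    "position": (3, 16),
    "scale_rotation": (7, 8),
    "opacity_sh0": (4, 8),
    "sh_rest": (45, 4),
}

def estimate_size_bytes(n_kept, entropy_factor=0.8, header_bytes=1024):
    """Linear model: header + factor * sum over groups of n_kept*params*bits/8."""
    bits_per_gaussian = sum(params * bw for params, bw in groups.values())
    return header_bytes + entropy_factor * n_kept * bits_per_gaussian / 8

print(estimate_size_bytes(500_000))  # estimated bytes for 500k kept Gaussians
```

Because the estimate is linear in `n_kept` and in each bit-width, evaluating it for every sampled configuration is cheap, which is presumably why it accelerates the hyperparameter search compared with actually encoding each candidate.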