GlobalSplat: Efficient Feed-Forward 3D Gaussian Splatting via Global Scene Tokens

arXiv cs.CV / 4/17/2026

📰 News · Models & Research

Key Points

  • GlobalSplat targets a core bottleneck in 3D Gaussian Splatting: efficient primitive allocation that balances compact representation size, fast reconstruction, and high rendering fidelity.
  • The paper argues that prior feed-forward methods rely on local, alignment-driven heuristics (often pixel- or voxel-aligned), which introduce redundancy and make global consistency fragile as more views are added.
  • GlobalSplat uses an “align first, decode later” design by learning a compact global latent scene representation that encodes multi-view inputs and resolves cross-view correspondences before any explicit 3D geometry is decoded.
  • A coarse-to-fine training curriculum that gradually increases decoded capacity prevents representation “bloat,” and the method avoids depending on pretrained pixel-prediction backbones or reusing features from dense baselines.
  • Experiments on RealEstate10K and ACID show competitive novel-view synthesis with as few as 16K Gaussians (about a 4 MB footprint) and inference of roughly 78 ms in a single forward pass, faster than baseline approaches.
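The ~4 MB figure quoted above is consistent with a back-of-envelope estimate. The sketch below assumes the standard 3DGS per-primitive layout (59 fp32 values: position, scale, rotation quaternion, opacity, and degree-3 spherical-harmonics color); the paper's summary does not specify the exact parameterization, so this layout is an assumption.

```python
# Back-of-envelope storage estimate for a 16K-Gaussian scene.
# Assumed standard 3DGS layout (not stated in the summary):
# 3 position + 3 scale + 4 rotation quaternion + 1 opacity
# + 48 SH color coefficients (degree 3) = 59 floats per Gaussian.
FLOATS_PER_GAUSSIAN = 3 + 3 + 4 + 1 + 48  # 59
BYTES_PER_FLOAT = 4                        # fp32
num_gaussians = 16_000

size_bytes = num_gaussians * FLOATS_PER_GAUSSIAN * BYTES_PER_FLOAT
print(f"{size_bytes / 1e6:.1f} MB")  # → 3.8 MB, in line with the reported ~4 MB
```

Dense pixel-aligned pipelines, by contrast, emit one (or more) Gaussians per input pixel, so a few 256×256 views already produce hundreds of thousands of primitives.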

Abstract

The efficient spatial allocation of primitives serves as the foundation of 3D Gaussian Splatting, as it directly dictates the synergy between representation compactness, reconstruction speed, and rendering fidelity. Previous solutions, whether based on iterative optimization or feed-forward inference, suffer from significant trade-offs between these goals, mainly due to their reliance on local, heuristic-driven allocation strategies that lack global scene awareness. Specifically, current feed-forward methods are largely pixel-aligned or voxel-aligned. By unprojecting pixels into dense, view-aligned primitives, they bake redundancy into the 3D asset. As more input views are added, the representation size increases and global consistency becomes fragile. To this end, we introduce GlobalSplat, a framework built on the principle of align first, decode later. Our approach learns a compact, global, latent scene representation that encodes multi-view input and resolves cross-view correspondences before decoding any explicit 3D geometry. Crucially, this formulation enables compact, globally consistent reconstructions without relying on pretrained pixel-prediction backbones or reusing latent features from dense baselines. Utilizing a coarse-to-fine training curriculum that gradually increases decoded capacity, GlobalSplat natively prevents representation bloat. On RealEstate10K and ACID, our model achieves competitive novel-view synthesis performance while using as few as 16K Gaussians, significantly fewer than required by dense pipelines, yielding a light 4 MB footprint. Further, GlobalSplat enables significantly faster inference than the baselines, completing a single forward pass in under 78 milliseconds. The project page is available at https://r-itk.github.io/globalsplat/
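The “align first, decode later” principle can be sketched in a few lines: a fixed budget of learned global tokens cross-attends over features from all input views jointly (the alignment step), and only then does a decoder map each token to a batch of Gaussian parameters. Everything below is illustrative: the sizes, the single cross-attention step, the linear decoder, and the 59-parameter Gaussian layout are assumptions standing in for the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes: V views, N patch tokens per view, feature dim D,
# K global scene tokens, G Gaussians decoded per token.
V, N, D, K, G = 4, 256, 64, 32, 500

# 1) Encode: per-view patch features (stand-in for an image encoder).
view_feats = rng.standard_normal((V * N, D))

# 2) Align first: learned global tokens cross-attend over ALL views at
#    once, so cross-view correspondence is resolved in latent space
#    before any explicit 3D geometry exists.
global_tokens = rng.standard_normal((K, D))
attn = softmax(global_tokens @ view_feats.T / np.sqrt(D))  # (K, V*N)
global_tokens = attn @ view_feats                          # (K, D)

# 3) Decode later: each token emits G Gaussians (59 params each:
#    position, scale, rotation, opacity, SH color).
W_dec = rng.standard_normal((D, G * 59)) * 0.02
gaussians = (global_tokens @ W_dec).reshape(K * G, 59)

print(gaussians.shape)  # (16000, 59): the ~16K-Gaussian regime
```

Because the primitive count is K × G regardless of how many views are encoded, the representation size stays fixed as views are added, which is the structural property the abstract contrasts with pixel-aligned unprojection.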