AdvSplat: Adversarial Attacks on Feed-Forward Gaussian Splatting Models

arXiv cs.CV / 3/26/2026


Key Points

  • The paper studies adversarial manipulation risks for feed-forward 3D Gaussian Splatting (feed-forward 3DGS), which enables fast, few-view 3D reconstruction after large-scale pretraining without per-scene optimization.
  • It introduces AdvSplat as the first systematic investigation of adversarial attacks against this model family, using white-box methods to identify core vulnerabilities.
  • The authors propose two query-efficient black-box attack algorithms that craft imperceptible pixel-space perturbations through a frequency-domain parameterization: one based on gradient estimation and one gradient-free.
  • Experiments across multiple datasets show the attacks can significantly disrupt reconstruction outputs despite perturbations being visually subtle, highlighting an urgent robustness/security gap.
  • The work aims to raise community awareness of adversarial threats as feed-forward 3DGS moves closer to potential commercial deployment.
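The frequency-domain parameterization mentioned above can be sketched concretely. The following is an illustrative reconstruction, not the paper's actual implementation: the perturbation is synthesized from a small grid of low-frequency DCT-style cosine coefficients (so the attack searches a low-dimensional space rather than per-pixel values) and clamped to an L-infinity budget `eps` to keep it visually subtle. The grid size `k` and the budget are assumed values chosen for the example.

```python
import numpy as np

def synthesize_perturbation(coeffs: np.ndarray, height: int, width: int,
                            eps: float = 8.0 / 255.0) -> np.ndarray:
    """Map a k x k grid of low-frequency cosine coefficients to a
    full-resolution pixel-space perturbation, then clamp it to an
    L-infinity ball of radius eps.

    Optimizing k*k coefficients instead of height*width pixels is what
    makes black-box attacks query-efficient.
    """
    k = coeffs.shape[0]
    y = (np.arange(height)[:, None] + 0.5) / height   # pixel centers, rows
    x = (np.arange(width)[None, :] + 0.5) / width     # pixel centers, cols
    delta = np.zeros((height, width))
    for u in range(k):            # low vertical frequencies only
        for v in range(k):        # low horizontal frequencies only
            basis = np.cos(np.pi * u * y) * np.cos(np.pi * v * x)
            delta += coeffs[u, v] * basis
    return np.clip(delta, -eps, eps)

# Example: a 4x4 coefficient grid drives a 64x64 perturbation.
rng = np.random.default_rng(0)
coeffs = 0.05 * rng.standard_normal((4, 4))
delta = synthesize_perturbation(coeffs, 64, 64)
```

Applying such a perturbation to an input view would then be `np.clip(image + delta, 0.0, 1.0)`. A low-frequency basis yields smooth perturbations, which is one common way to keep black-box perturbations both low-dimensional and hard to notice.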

Abstract

3D Gaussian Splatting (3DGS) is increasingly recognized as a powerful paradigm for real-time, high-fidelity 3D reconstruction. However, its per-scene optimization pipeline limits scalability and generalization and prevents efficient inference. Recently, feed-forward 3DGS models have emerged that address these limitations, enabling fast reconstruction from a few input views after large-scale pretraining, without scene-specific optimization. Despite these advantages and strong potential for commercial deployment, the models' reliance on neural network backbones also amplifies the risk of adversarial manipulation. In this paper, we introduce AdvSplat, the first systematic study of adversarial attacks on feed-forward 3DGS. We first employ white-box attacks to reveal fundamental vulnerabilities of this model family. We then develop two improved, practically relevant, query-efficient black-box algorithms that optimize pixel-space perturbations via a frequency-domain parameterization: one based on gradient estimation and the other gradient-free, requiring no access to model internals. Extensive experiments across multiple datasets demonstrate that AdvSplat can significantly disrupt reconstruction results by injecting imperceptible perturbations into the input images. Our findings surface an overlooked yet urgent problem in this domain, and we hope to draw the community's attention to this emerging security and robustness challenge.
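The two black-box strategies the abstract names can be sketched generically. This is a hedged illustration, not the paper's algorithm: (a) an antithetic, NES-style gradient estimate over the low-dimensional coefficient vector, and (b) a gradient-free greedy coordinate search. The victim model is stood in for by a `loss_fn` that, in practice, would query the reconstruction pipeline and score the damage to its output; all function names and hyperparameters here are assumptions for the sketch.

```python
import numpy as np

def estimate_gradient(loss_fn, coeffs, sigma=0.05, n_samples=20, rng=None):
    """NES-style gradient estimate: probe loss_fn with antithetic Gaussian
    perturbations of the (low-dimensional) coefficient vector.
    Costs 2 * n_samples queries and never touches model internals."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(coeffs)
    for _ in range(n_samples):
        noise = rng.standard_normal(coeffs.shape)
        grad += noise * (loss_fn(coeffs + sigma * noise)
                         - loss_fn(coeffs - sigma * noise))
    return grad / (2.0 * sigma * n_samples)

def coordinate_search(loss_fn, shape, steps=200, step_size=0.05, rng=None):
    """Gradient-free variant: greedily nudge one random coefficient at a
    time, keeping a move only if it increases the attack loss
    (one query per trial)."""
    rng = rng or np.random.default_rng()
    coeffs = np.zeros(shape)
    best = loss_fn(coeffs)
    for _ in range(steps):
        i, j = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        for sign in (1.0, -1.0):
            cand = coeffs.copy()
            cand[i, j] += sign * step_size
            val = loss_fn(cand)
            if val > best:
                coeffs, best = cand, val
                break
    return coeffs, best

# Toy stand-in for "reconstruction damage": peaks when all coeffs reach 0.3.
toy_loss = lambda c: -float(np.sum((c - 0.3) ** 2))
rng = np.random.default_rng(1)

# (a) Gradient-estimation attack: ascent steps on the estimated gradient.
c = np.zeros((4, 4))
for _ in range(50):
    c += 0.05 * estimate_gradient(toy_loss, c, rng=rng)

# (b) Gradient-free attack on the same toy objective.
c2, best = coordinate_search(toy_loss, (4, 4), rng=rng)
```

Both loops maximize a scalar attack loss using only forward queries, which is the defining constraint of the black-box setting; the gradient-free variant trades estimation accuracy for one query per trial instead of two per sample.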