Immunizing 3D Gaussian Generative Models Against Unauthorized Fine-Tuning via Attribute-Space Traps

arXiv cs.CV / 4/14/2026


Key Points

  • The paper highlights a security risk for public 3D generative models with pre-trained weights: adversaries can fine-tune them to extract specialized knowledge and potentially infringe IP.
  • Unlike prior defenses focused on 2D or language models, the work argues that 3D Gaussian representations expose structural parameters directly to gradient-based fine-tuning, requiring specialized protection.
  • It proposes GaussLock, a lightweight parameter-space “immunization” method that combines authorized distillation with attribute-aware trap losses targeting position, scale, rotation, opacity, and color.
  • The trap losses are designed to systematically degrade the model’s underlying structural integrity (e.g., collapsing spatial distributions and suppressing primitive visibility), disrupting unauthorized reconstructions.
  • Experiments on large-scale Gaussian models indicate that GaussLock largely neutralizes unauthorized fine-tuning: it worsens unauthorized reconstruction quality (higher LPIPS, lower PSNR) while preserving performance on authorized fine-tuning tasks.
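To make the dual-objective idea concrete, here is a minimal PyTorch sketch of what attribute-aware trap terms combined with an authorized-distillation loss could look like. All function names, loss forms, and the weight `lam` are illustrative assumptions, not the paper's actual formulation; the real GaussLock losses may differ substantially.

```python
import torch

def attribute_trap_loss(pos, scale, rot, opacity):
    """Illustrative trap terms over 3D Gaussian attributes (assumed shapes:
    pos (N,3) centers, scale (N,3) per-axis extents, rot (N,4) quaternions,
    opacity (N,1)). Minimizing this degrades the scene rather than improving it."""
    # Collapse the spatial distribution: shrink the variance of the centers.
    l_pos = pos.var(dim=0).sum()
    # Distort geometric shape: reward extreme per-axis anisotropy so
    # primitives degenerate toward needle-like shapes.
    aniso = scale.max(dim=1).values / scale.min(dim=1).values.clamp_min(1e-8)
    l_scale = -torch.log(aniso).mean()
    # Align rotational axes: pull all quaternions toward a shared value.
    l_rot = rot.var(dim=0).sum()
    # Suppress primitive visibility: drive opacities toward zero.
    l_opacity = opacity.mean()
    return l_pos + l_scale + l_rot + l_opacity

def immunization_objective(student_render, teacher_render, gaussians, lam=0.1):
    """Joint objective: a distillation term preserves fidelity on authorized
    tasks while the trap term is embedded in parameter space. `lam` is a
    made-up trade-off weight."""
    l_distill = torch.nn.functional.mse_loss(student_render, teacher_render)
    return l_distill + lam * attribute_trap_loss(*gaussians)
```

In this sketch, both terms are differentiable with respect to the Gaussian attributes, so a single optimizer step can jointly pursue the authorized-distillation and trap objectives, matching the paper's description of dual-objective optimization at a high level.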

Abstract

Recent large-scale generative models enable high-quality 3D synthesis. However, the public accessibility of pre-trained weights introduces a critical vulnerability. Adversaries can fine-tune these models to steal specialized knowledge acquired during pre-training, leading to intellectual property infringement. Unlike defenses for 2D images and language models, 3D generators require specialized protection due to their explicit Gaussian representations, which expose fundamental structural parameters directly to gradient-based optimization. We propose GaussLock, the first approach designed to defend 3D generative models against fine-tuning attacks. GaussLock is a lightweight parameter-space immunization framework that integrates authorized distillation with attribute-aware trap losses targeting position, scale, rotation, opacity, and color. Specifically, these traps systematically collapse spatial distributions, distort geometric shapes, align rotational axes, and suppress primitive visibility to fundamentally destroy structural integrity. By jointly optimizing these dual objectives, the distillation process preserves fidelity on authorized tasks while the embedded traps actively disrupt unauthorized reconstructions. Experiments on large-scale Gaussian models demonstrate that GaussLock effectively neutralizes unauthorized fine-tuning attacks. It substantially degrades the quality of unauthorized reconstructions, evidenced by significantly higher LPIPS and lower PSNR, while effectively maintaining performance on authorized fine-tuning.