Immunizing 3D Gaussian Generative Models Against Unauthorized Fine-Tuning via Attribute-Space Traps
arXiv cs.CV / 4/14/2026
Key Points
- The paper highlights a security risk for publicly released pre-trained 3D generative models: adversaries can fine-tune them to extract specialized knowledge and potentially infringe the owner's intellectual property.
- Whereas prior defenses target 2D image or language models, the work argues that 3D Gaussian representations expose structural parameters directly to gradient-based fine-tuning and therefore require specialized protection.
- It proposes GaussLock, a lightweight parameter-space “immunization” method that combines authorized distillation with attribute-aware trap losses targeting position, scale, rotation, opacity, and color (see the sketch after this list).
- Under unauthorized fine-tuning, the trap losses systematically degrade the structural integrity of the generated Gaussians (e.g., collapsing spatial distributions and suppressing primitive visibility), disrupting unauthorized reconstructions.
- Experiments on large-scale Gaussian models indicate that GaussLock substantially neutralizes unauthorized fine-tuning, degrading unauthorized reconstruction quality (higher LPIPS, lower PSNR) while preserving performance on authorized fine-tuning tasks.
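
The key points name five attribute-aware trap terms and an authorized-distillation term but leave their exact forms unspecified. The sketch below is one plausible PyTorch instantiation, not the paper's actual formulation: the tensor layout, the function names `trap_losses` and `immunization_loss`, the specific penalty chosen for each attribute, and the way the two loss families are combined are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def trap_losses(gauss):
    """Attribute-aware trap terms, one per Gaussian attribute.

    `gauss` is assumed (hypothetically) to be a dict of per-primitive tensors:
      positions (N, 3), scales (N, 3), rotations (N, 4) unit quaternions,
      opacities (N, 1) in [0, 1], colors (N, 3) in [0, 1].
    Minimizing these terms drives the output toward degenerate structure:
    collapsed positions, vanishing scales and opacities, identical
    rotations, and washed-out colors.
    """
    pos, scale = gauss["positions"], gauss["scales"]
    rot, opac, col = gauss["rotations"], gauss["opacities"], gauss["colors"]
    return {
        # Collapse the spatial distribution toward its centroid.
        "position": pos.var(dim=0).sum(),
        # Shrink primitives so they cover no area when rasterized.
        "scale": scale.pow(2).mean(),
        # Align all rotations with the (normalized) mean quaternion;
        # abs() treats q and -q as the same rotation.
        "rotation": (1.0 - F.cosine_similarity(
            rot, rot.mean(dim=0, keepdim=True)).abs()).mean(),
        # Suppress primitive visibility.
        "opacity": opac.mean(),
        # Pull colors toward uniform gray.
        "color": (col - 0.5).pow(2).mean(),
    }

def immunization_loss(model, authorized_batch, trap_batch, teacher, weights):
    """Combined objective: distill on authorized data, plant traps elsewhere.

    `model` / `teacher` are assumed to map a batch to a dict holding a
    rendered image under "render" plus the attribute tensors above.
    """
    # Authorized distillation: match a frozen teacher so behavior on
    # sanctioned fine-tuning tasks is preserved.
    student = model(authorized_batch)
    with torch.no_grad():
        reference = teacher(authorized_batch)
    loss = F.mse_loss(student["render"], reference["render"])
    # Attribute-space traps evaluated on out-of-scope (trap) inputs.
    for name, term in trap_losses(model(trap_batch)).items():
        loss = loss + weights.get(name, 1.0) * term
    return loss
```

Note how minimizing the position variance collapses the spatial distribution while driving opacities toward zero suppresses primitive visibility, the two degradation modes called out above. How the paper actually couples these traps to unauthorized fine-tuning in parameter space is not recoverable from this summary.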
