R-PGA: Robust Physical Adversarial Camouflage Generation via Relightable 3D Gaussian Splatting
arXiv cs.AI / 3/30/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that existing physical adversarial camouflage attacks on autonomous driving are brittle because of domain gaps from oversimplified simulation (e.g., CARLA) and because average-case optimization leaves a rugged, configuration-sensitive loss landscape.
- It proposes R-PGA, a new attack framework that uses relightable 3D Gaussian Splatting (3DGS) for more photo-realistic reconstruction and separates intrinsic material attributes from lighting to handle changing radiometric conditions.
- To better model complex scenes, it uses a hybrid rendering pipeline: relightable 3DGS for the foreground and a pre-trained image translation model to generate plausible relighted backgrounds consistent with the relighted foreground.
- For optimization robustness, it introduces Hard Physical Configuration Mining (HPCM), which searches for worst-case physical configurations and suppresses the corresponding loss peaks, flattening the loss landscape and improving adversarial effectiveness under viewpoint and illumination shifts.
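The HPCM idea in the last point is essentially min-max optimization: sample candidate physical configurations (viewpoint, illumination), keep the ones where the attack currently fails worst, and descend on those. The sketch below is a toy illustration of that loop, not the paper's implementation; `detection_score`, the scalar "texture" parameter, and all hyperparameters are invented stand-ins for the real renderer and detector.

```python
import random

def detection_score(theta, cfg):
    # Hypothetical stand-in for the detector's confidence on the
    # camouflaged object rendered under physical configuration `cfg`
    # (here cfg = (viewpoint, light), both scalars in [-1, 1]).
    # Lower is better for the attacker.
    view, light = cfg
    return (theta - view) ** 2 + 0.5 * (theta - light) ** 2

def grad_theta(theta, cfg):
    # Analytic gradient of the toy score w.r.t. the texture parameter.
    view, light = cfg
    return 2.0 * (theta - view) + 1.0 * (theta - light)

def hpcm_attack(steps=200, pool=32, worst_k=4, lr=0.05, seed=0):
    rng = random.Random(seed)
    theta = 5.0  # toy "camouflage texture" parameter being optimized
    for _ in range(steps):
        # Mine hard configurations: sample a pool, keep the worst-case ones.
        cfgs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pool)]
        hard = sorted(cfgs, key=lambda c: detection_score(theta, c),
                      reverse=True)[:worst_k]
        # Suppress the corresponding loss peaks: descend on the hard set only,
        # rather than on the average over all sampled configurations.
        g = sum(grad_theta(theta, c) for c in hard) / worst_k
        theta -= lr * g
    return theta
```

Descending only on the mined hard set targets the peaks of the configuration-sensitive loss landscape, which is what distinguishes HPCM-style training from plain average-case Expectation-over-Transformation sampling.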