Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving
arXiv cs.CV / 4/28/2026
Key Points
- The paper introduces AdvAD, a transferable physical-world adversarial patch attack aimed at object detection systems used in autonomous driving.
- Unlike prior approaches that attack a single detector model, AdvAD jointly optimizes patches across multiple detection models to exploit vulnerabilities common across different architectures.
- The method adaptively balances each model’s influence during optimization and incorporates constraints to improve robustness to real physical variations.
- Experiments in both digital simulations and physical real-world tests show that AdvAD attacks more reliably and transfers significantly better to unseen detectors than existing state-of-the-art attacks.
- Overall, the work highlights a more practical and scalable threat model for adversarial patch attacks against safety-critical AD perception pipelines.
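The core idea in the bullets above — jointly optimizing one patch against several detectors while adaptively re-weighting each model's influence — can be illustrated with a toy sketch. This is not the paper's actual method: the "detectors" below are hypothetical linear scorers standing in for deep networks, and the softmax re-weighting is an assumed stand-in for whatever adaptive balancing scheme AdvAD uses. It only shows the ensemble-attack structure: the hardest-to-fool model gets the largest share of each update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two detector models: each maps a flattened
# 64-value patch to a scalar "objectness" score via a linear function.
# (Real detectors are deep networks; this keeps the sketch self-contained.)
W1 = rng.normal(size=64)
W2 = rng.normal(size=64)

def scores(patch):
    """Objectness score of the patch under each toy detector."""
    return np.array([W1 @ patch, W2 @ patch])

patch = rng.normal(size=64) * 0.1  # initial patch pixels

lr = 0.05
for step in range(200):
    s = scores(patch)
    # Adaptive balancing (assumed scheme): softmax over current scores,
    # so the detector that is currently hardest to fool (highest score)
    # dominates this update step.
    w = np.exp(s - s.max())
    w = w / w.sum()
    # Gradient of the weighted objectness w.r.t. the patch; for linear
    # scorers this is just the weighted sum of the weight vectors.
    grad = w[0] * W1 + w[1] * W2
    patch -= lr * grad
    # Box constraint, loosely analogous to keeping the patch physically
    # printable / within valid pixel range.
    patch = np.clip(patch, -1.0, 1.0)

print(scores(patch))  # both detectors' objectness driven down
```

Attacking a single model instead would correspond to fixing `w = [1, 0]`; the adaptive ensemble weighting is what targets vulnerabilities shared across architectures, which is where the transferability claimed in the key points comes from.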