Physical Adversarial Attacks on AI Surveillance Systems: Detection, Tracking, and Visible–Infrared Evasion
arXiv cs.CV / 4/9/2026
Key Points
- The paper argues that physical adversarial attacks should be evaluated in surveillance-like settings where detection, multi-object tracking, and visible–infrared sensing interact over time.
- It explains why per-frame RGB results can be misleading for real systems, especially for night-time or dual-modal (visible + thermal) deployments.
- The review emphasizes key technical dimensions—temporal persistence, sensing modality, realism of the physical attack carrier, and system-level attack objectives—organized into a four-part taxonomy.
- It discusses how recent work on multi-object tracking evasion, dual-modal visible–infrared attacks, and controllable clothing illustrates a shift in how the field should interpret robustness.
- It highlights unresolved evaluation gaps such as robustness to distance and camera-pipeline variation, the need for identity-level metrics, and testing that accounts for activation-aware threats.