Adversarial Flow Matching for Imperceptible Attacks on End-to-End Autonomous Driving
arXiv cs.CV / 5/5/2026
Key Points
- The paper argues that end-to-end autonomous driving (E2E AD) systems—whether monolithic VLA-style or modular—may share a vulnerability in their Transformer backbones, allowing visually imperceptible perturbations to trigger dangerous behaviors.
- It introduces Adversarial Flow Matching (AFM), a gray-box adversarial attack method that generates adversarial examples efficiently in a single step using a neural average velocity field.
- AFM is designed to produce attacks that are both effective and visually subtle by jointly perturbing the model’s generative latent space and its neural average velocity field.
- Experiments show AFM strongly degrades performance of both VLA and modular AD agents across scenarios while achieving state-of-the-art visual imperceptibility compared with existing baselines.
- The adversarial examples also transfer robustly across models, so AFM effectively approximates a black-box threat: the only prior knowledge it requires is that the target contains a Transformer-based module.
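The single-step generation the key points describe follows from the flow-matching setup: instead of integrating an instantaneous velocity field over many ODE steps, an *average* velocity field lets the attacker jump from the clean input to the adversarial one in a single Euler-style update. The sketch below illustrates that idea only; the function names, the toy surrogate "network", and the perturbation budget are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import numpy as np

def average_velocity(z, eps=0.03):
    # Stand-in for a learned neural average velocity field u_theta(z).
    # Here: a fixed bounded map that nudges the latent toward an
    # adversarial direction while keeping the perturbation norm small.
    direction = np.tanh(z)  # bounded toy surrogate for the network output
    return eps * direction / (np.linalg.norm(direction) + 1e-8)

def one_step_attack(x):
    # Flow matching normally integrates dz/dt = v(z, t) from t=0 to t=1
    # over many small steps; with an average velocity field the whole
    # trajectory collapses into one update: x_adv = x + u(x) * (1 - 0).
    return x + average_velocity(x)

x = np.random.default_rng(0).standard_normal((3, 4)).astype(np.float32)
x_adv = one_step_attack(x)
print(float(np.abs(x_adv - x).max()))  # perturbation stays within budget
```

In the real method the velocity field is a trained network and the perturbation is optimized jointly in the generative latent space, but the one-step structure is the point: attack cost is a single forward pass rather than an iterative ODE solve.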