Fluently Lying: Adversarial Robustness Can Be Substrate-Dependent
arXiv cs.CV / 4/2/2026
Key Points
- The study challenges a common assumption in adversarial monitoring and defense for object detectors: that when mAP drops under attack, the number of detections drops roughly in proportion.
- It reports a “Quality Corruption” (QC) failure mode on a spiking neural network (SNN) object detector (EMS-YOLO), in which standard PGD attacks reduce mAP from 0.528 to 0.042 while the model retains over 70% of its detections.
- QC appears on only one of the four SNN architectures tested (under both ℓ∞ and ℓ2 threat models), indicating that adversarial failure modes can be highly substrate- and model-dependent.
- The authors find that five standard defense components fail to detect or mitigate QC on the affected model, suggesting that defenses may be tuned to a coupling assumption that does not hold across substrates.
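The QC pattern described above can be checked with two numbers per model: how far mAP collapses under attack, and what fraction of clean-image detections survive. A minimal sketch, assuming illustrative thresholds and helper names that are not from the paper:

```python
# Illustrative sketch (not the authors' code): flag the "Quality Corruption"
# pattern, where mAP collapses under attack while most detections survive.
# The threshold values below are assumptions for illustration only.

def detection_retention(n_clean: int, n_attacked: int) -> float:
    """Fraction of clean-image detections still emitted under attack."""
    return n_attacked / n_clean if n_clean else 0.0

def is_quality_corruption(map_clean: float, map_attacked: float,
                          retention: float,
                          map_collapse: float = 0.25,
                          retention_floor: float = 0.70) -> bool:
    """True when mAP falls below `map_collapse` of its clean value
    while detection retention stays at or above `retention_floor`."""
    map_ratio = map_attacked / map_clean if map_clean else 0.0
    return map_ratio < map_collapse and retention >= retention_floor

# Numbers reported in the summary: mAP 0.528 -> 0.042, >70% detections kept.
# The detection counts here are made up to match the reported retention.
retention = detection_retention(n_clean=1000, n_attacked=720)
print(is_quality_corruption(0.528, 0.042, retention))  # -> True
```

A defense that monitors only detection count would see ~72% retention here and conclude the model is healthy, which is exactly the decoupling the paper highlights.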