Almost for Free: Crafting Adversarial Examples with Convolutional Image Filters
arXiv cs.LG / 5/5/2026
Key Points
- The paper introduces a gradient-free method for generating adversarial examples: adversarial image filters inspired by explainable ML and classic edge detection.
- The learned 3×3 (and related) convolutional filters mount untargeted attacks that transfer across different neural networks and require only a single forward pass to produce an adversarial image.
- In experiments, the 3×3 filters achieve attack success rates of roughly 30%–80% across multiple models, demonstrating practical attack strength.
- Compared with generative-model-based adversarial crafting, the approach cuts parameter counts by about five orders of magnitude, making it far more efficient.
- Analyzing the learned filter parameters, the authors find structure and transferability patterns resembling features of traditional image filters, which reinforces concerns about neural networks' fragility to malicious perturbations.
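To make the mechanism concrete, here is a minimal sketch of what "attacking with a 3×3 filter in a single pass" looks like: a fixed kernel convolved over an image, with no gradient queries to the target model. The kernel values below are purely illustrative (an identity kernel blended with a Laplacian-style edge term, strength `eps`), not the learned parameters from the paper.

```python
# Hypothetical sketch of a 3x3 "adversarial filter" attack step.
# The kernel here is illustrative, not the paper's learned filter.

def apply_filter(image, kernel):
    """Convolve a 2D grayscale image (list of lists, values 0-255)
    with a 3x3 kernel, clipping results back to [0, 255]."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate-pad edges
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + 1][dx + 1] * image[yy][xx]
            out[y][x] = max(0, min(255, round(acc)))
    return out

# Identity kernel plus a scaled edge-enhancing term; eps controls
# perturbation strength (value chosen for illustration only).
eps = 0.15
kernel = [
    [-eps,         -eps, -eps],
    [-eps, 1 + 8 * eps,  -eps],
    [-eps,         -eps, -eps],
]

flat = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
edge = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]

print(apply_filter(flat, kernel)[1][1])  # → 50 (flat regions pass through)
print(apply_filter(edge, kernel)[1][1])  # → 220 (edges are amplified)
```

This illustrates why such filters are "almost free": applying one is a single convolution pass with only nine parameters, versus the millions of parameters a generative attack model would carry. A uniform region passes through unchanged while high-frequency structure is amplified, echoing the classic edge-detection heritage the authors cite.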