Physically-Induced Atmospheric Adversarial Perturbations: Enhancing Transferability and Robustness in Remote Sensing Image Classification
arXiv cs.CV / 4/17/2026
Key Points
- The paper introduces FogFool, a physically plausible adversarial attack for remote sensing (RS) image classification that uses fog-like perturbations rather than simple pixel-wise changes.
- FogFool generates irregular, natural-looking fog patterns by iteratively optimizing Perlin-noise-based perturbations, aiming for visually consistent adversarial examples that still mislead models (a minimal generation sketch follows this list).
- Experiments on two benchmark RS datasets show that FogFool improves attack effectiveness in white-box settings and achieves strong black-box transferability, with a reported TASR of up to 83.74%.
- The approach also demonstrates robustness against common preprocessing defenses such as JPEG compression and filtering, suggesting the perturbations would persist in real-world pipelines (see the defense round-trip sketch below).
- Visual and diagnostic analyses (e.g., confusion matrices and CAM) indicate the perturbations cause a universal shift in model attention, which helps explain why they transfer across architectures (a Grad-CAM sketch closes this post).
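
To make the generation step concrete, here is a minimal sketch of how fog-like perturbations might be built from multi-octave (Perlin-style) noise and tuned against a classifier. Everything here is an illustrative assumption: `fractal_noise`, `apply_fog`, and the random-search loop are stand-ins for the paper's iterative optimization, not its actual method.

```python
import numpy as np

def _bilinear_upsample(grid, h, w):
    """Bilinearly interpolate a coarse noise grid up to (h, w)."""
    gh, gw = grid.shape
    ys = np.linspace(0, gh - 1, h)
    xs = np.linspace(0, gw - 1, w)
    y0 = np.minimum(ys.astype(int), gh - 2)
    x0 = np.minimum(xs.astype(int), gw - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    tl = grid[np.ix_(y0, x0)]
    tr = grid[np.ix_(y0, x0 + 1)]
    bl = grid[np.ix_(y0 + 1, x0)]
    br = grid[np.ix_(y0 + 1, x0 + 1)]
    return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
            + bl * fy * (1 - fx) + br * fy * fx)

def fractal_noise(h, w, octaves=4, persistence=0.5, seed=0):
    """Multi-octave value noise, a simple stand-in for Perlin noise."""
    rng = np.random.default_rng(seed)
    out, amplitude, total = np.zeros((h, w)), 1.0, 0.0
    for o in range(octaves):
        cells = 2 ** (o + 2)  # grid resolution doubles each octave
        grid = rng.random((cells + 1, cells + 1))
        out += amplitude * _bilinear_upsample(grid, h, w)
        total += amplitude
        amplitude *= persistence
    return out / total  # normalized to roughly [0, 1]

def apply_fog(img, density_map, airlight=1.0):
    """Blend fog via the standard atmospheric scattering model
    I_fog = I * t + A * (1 - t), with transmission t = 1 - density."""
    t = 1.0 - density_map[..., None]       # per-pixel transmission
    return img * t + airlight * (1.0 - t)  # img in [0, 1], shape HxWx3

def fogfool_random_search(img, loss_fn, steps=200, max_density=0.6):
    """Random-search stand-in for the paper's iterative optimization:
    keep whichever noise seed maximizes the classifier's loss."""
    h, w = img.shape[:2]
    best_loss, best_adv = -np.inf, img
    for step in range(steps):
        density = max_density * fractal_noise(h, w, seed=step)
        adv = np.clip(apply_fog(img, density), 0.0, 1.0)
        loss = loss_fn(adv)  # e.g., cross-entropy of the target model
        if loss > best_loss:
            best_loss, best_adv = loss, adv
    return best_adv
```

The random-search loop is deliberately simple; the paper optimizes the noise iteratively, which a gradient-based or evolutionary search over the same noise parameters would approximate more closely.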
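
The defense-robustness claim maps onto a simple round-trip test: push the adversarial example through each preprocessing defense and check whether the attack still succeeds. In the sketch below, the JPEG quality, the median-filter size, and the `predict` callable are all assumptions, not the paper's evaluation settings.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def jpeg_roundtrip(img_uint8, quality=75):
    """Re-encode as JPEG, a common input-transformation defense."""
    buf = io.BytesIO()
    Image.fromarray(img_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

def median_filter(img_uint8, size=3):
    """Median filtering, another standard preprocessing defense."""
    return np.asarray(
        Image.fromarray(img_uint8).filter(ImageFilter.MedianFilter(size)))

def survives_defenses(adv_uint8, predict, true_label):
    """Return True if the attack still succeeds after every defense.
    `predict` maps an HxWx3 uint8 array to a class index (assumed API)."""
    for defended in (jpeg_roundtrip(adv_uint8), median_filter(adv_uint8)):
        if predict(defended) == true_label:
            return False  # this defense restored the correct prediction
    return True
```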
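
For the attention-shift diagnosis, below is a minimal Grad-CAM sketch; torchvision's ResNet-50 and the `layer4` target are stand-in assumptions, since the paper does not necessarily use this backbone or this exact CAM variant. Comparing the heatmaps of a clean image and its fogged counterpart visualizes where the model's attention moves.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def grad_cam(model, layer, x, class_idx=None):
    """Grad-CAM heatmap for one image tensor x of shape (1, 3, H, W)."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(x)
    idx = int(logits.argmax(1)) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, idx].backward()
    h1.remove()
    h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted channel sum
    cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

model = resnet50(weights="IMAGENET1K_V2").eval()
# Compare grad_cam(model, model.layer4, clean_img) against
# grad_cam(model, model.layer4, foggy_img) to see the attention shift.
```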