Physically-Induced Atmospheric Adversarial Perturbations: Enhancing Transferability and Robustness in Remote Sensing Image Classification

arXiv cs.CV / 17 Apr 2026


Key Points

  • The paper introduces FogFool, a physically plausible adversarial attack for remote sensing (RS) image classification that uses fog-like perturbations rather than simple pixel-wise changes.
  • FogFool generates irregular, natural-looking fog patterns by iteratively optimizing Perlin-noise-based atmospheric perturbations, yielding adversarial examples that remain visually consistent with real scenes while still misleading models (a minimal sketch of such a fog pattern follows this list).
  • Experiments on two benchmark RS datasets show that FogFool improves attack effectiveness in white-box settings and achieves strong black-box transferability, reporting up to an 83.74% transfer attack success rate (TASR).
  • The approach also demonstrates robustness against common preprocessing defenses such as JPEG compression and filtering, suggesting real-world persistence.
  • Visual and diagnostic analyses (e.g., confusion matrices and CAM) indicate the perturbations cause a universal shift in model attention, helping explain why they transfer across architectures.
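
The paper does not publish code, but the fog maps described above can be sketched compactly. The following NumPy example is a minimal sketch under assumptions, not FogFool's actual implementation: it builds a [0, 1] fog density map by summing octaves of 2D Perlin noise. The function names (`perlin2d`, `fog_pattern`) and all hyperparameters (base resolution, octave count, persistence) are illustrative choices.

```python
import numpy as np

def fade(t):
    # Perlin's quintic smoothstep: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin2d(shape, res, rng):
    # Classic 2D Perlin noise: random unit gradients on a (res+1) x (res+1)
    # lattice, corner dot products blended by the fade curve.
    h, w = shape
    angles = rng.uniform(0.0, 2.0 * np.pi, (res + 1, res + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    ys = np.linspace(0.0, res, h, endpoint=False)
    xs = np.linspace(0.0, res, w, endpoint=False)
    yi, xi = ys.astype(int), xs.astype(int)   # lattice cell indices
    yf = (ys - yi)[:, None]                   # fractional offsets, (h, 1)
    xf = (xs - xi)[None, :]                   # fractional offsets, (1, w)

    def corner_dot(dy, dx):
        # dot product of the corner gradient with the offset from that corner
        g = grads[yi[:, None] + dy, xi[None, :] + dx]   # (h, w, 2)
        return g[..., 0] * (xf - dx) + g[..., 1] * (yf - dy)

    u, v = fade(xf), fade(yf)
    top = corner_dot(0, 0) * (1 - u) + corner_dot(0, 1) * u
    bot = corner_dot(1, 0) * (1 - u) + corner_dot(1, 1) * u
    return top * (1 - v) + bot * v

def fog_pattern(shape, base_res=4, octaves=4, persistence=0.5, seed=0):
    # Fractal (fBm) fog: sum octaves at doubling frequency and geometrically
    # decaying amplitude, then normalize to a [0, 1] density map.
    rng = np.random.default_rng(seed)
    fog, amp, res = np.zeros(shape), 1.0, base_res
    for _ in range(octaves):
        fog += amp * perlin2d(shape, res, rng)
        amp *= persistence
        res *= 2
    return (fog - fog.min()) / (fog.max() - fog.min() + 1e-8)

density = fog_pattern((256, 256), seed=42)   # [0, 1] fog density map
```

Summing octaves at doubling frequency with decaying amplitude is what produces the irregular, cloud-like structure that makes the perturbation resemble genuine fog rather than high-frequency noise, consistent with the mid-to-low-frequency argument in the abstract below.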

Abstract

Adversarial attacks pose a severe threat to the reliability of deep learning models in remote sensing (RS) image classification. Most existing methods rely on direct pixel-wise perturbations, failing to exploit the inherent atmospheric characteristics of RS imagery or to survive real-world image degradations. In this paper, we propose FogFool, a physically plausible adversarial framework that generates fog-based perturbations by iteratively optimizing atmospheric patterns based on Perlin noise. By modeling fog formations with natural, irregular structures, FogFool produces adversarial examples that are not only visually consistent with authentic RS scenes but also deceptive. By leveraging the spatial coherence and mid-to-low-frequency nature of atmospheric phenomena, FogFool embeds adversarial information into structural features shared across diverse architectures. Extensive experiments on two benchmark RS datasets demonstrate that FogFool achieves superior performance: it not only excels in white-box settings but also exhibits exceptional black-box transferability (reaching 83.74% TASR) and robustness against common preprocessing-based defenses such as JPEG compression and filtering. Detailed analyses, including confusion matrices and Class Activation Map (CAM) visualizations, reveal that our atmospheric-driven perturbations induce a universal shift in model attention. These results indicate that FogFool represents a practical, stealthy, and highly persistent threat to RS classification systems, providing a robust benchmark for evaluating model reliability in complex environments.
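
The abstract describes iterative optimization of the fog pattern but not its exact parameterization or loss. Purely as an illustration, here is a minimal PyTorch sketch of one plausible reading: composite a fixed Perlin density map onto the image with the standard atmospheric scattering model, I' = I * t + A * (1 - t) with transmission t = exp(-beta * density), and ascend the classifier's cross-entropy loss. The function name `fogfool_attack`, the scalar-beta parameterization, and every hyperparameter are assumptions; the paper's optimizer presumably adjusts richer noise parameters than a single density scalar.

```python
import torch
import torch.nn.functional as F

def fogfool_attack(model, image, label, density, steps=50, lr=0.05,
                   airlight=0.9):
    # Hypothetical sketch, not the paper's implementation: optimize a global
    # fog density scale `beta` so the fogged image is misclassified.
    # image: (1, 3, H, W) in [0, 1]; label: (1,) long tensor;
    # density: (H, W) Perlin fog map in [0, 1] (e.g. fog_pattern above).
    dens = torch.as_tensor(density, dtype=image.dtype, device=image.device)
    beta = torch.ones((), device=image.device, requires_grad=True)
    opt = torch.optim.Adam([beta], lr=lr)
    for _ in range(steps):
        t = torch.exp(-beta.clamp(0.0, 5.0) * dens)    # transmission map
        fogged = image * t + airlight * (1.0 - t)      # scattering model
        loss = -F.cross_entropy(model(fogged), label)  # ascend CE loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        t = torch.exp(-beta.clamp(0.0, 5.0) * dens)
        return (image * t + airlight * (1.0 - t)).clamp(0.0, 1.0)
```

In a full attack one would jointly optimize or re-sample the per-octave noise parameters rather than a single scalar, which is what would give the irregular, image-specific fog shapes the paper reports; the clamp on beta keeps the fog physically plausible rather than saturating the scene.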