Efficient Preemptive Robustification with Image Sharpening

arXiv cs.CV / 3/27/2026


Key Points

  • The paper argues that deep neural networks often depend on high-dimensional non-robust representations, leaving them vulnerable to imperceptible adversarial perturbations even during transfer attacks.
  • It reviews prior defenses (training-time and post-attack) and focuses on a pre-attack paradigm called preemptive robustification, which modifies benign inputs before an attacker perturbs them.
  • The authors propose an efficient robustification method using image sharpening, motivated by findings that higher texture intensity correlates with robustness.
  • They claim the approach is the first to be surrogate-free, optimization-free, generator-free, and human-interpretable, avoiding earlier limitations like surrogate reliance, high computational overhead, and poor interpretability.
  • Experiments show sharpening provides strong robustness improvements with low compute cost, particularly for transfer scenarios.
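The paper does not specify its exact sharpening operator here, but the idea of "robustifying by amplifying texture" maps naturally onto classic unsharp masking: subtract a blurred copy from the image and add the high-frequency residual back. The sketch below is illustrative only; the 3x3 box blur and the `amount` strength knob are stand-in assumptions, not the authors' settings.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding (a simple stand-in for a Gaussian blur)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # Accumulate each shifted copy of the image, then average.
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def sharpen(img, amount=1.0):
    """Unsharp masking: boost the high-frequency (texture) component.

    sharpened = img + amount * (img - blurred)

    `amount` is a hypothetical strength parameter; the paper's actual
    sharpening operator and hyperparameters may differ.
    """
    img = img.astype(float)
    high_freq = img - box_blur(img)
    return np.clip(img + amount * high_freq, 0.0, 255.0)
```

Because the transform is a fixed, local filter, it needs no surrogate classifier, no iterative optimization, and no trained generator, which is what makes this defense cheap at inference time and its effect on the image directly interpretable.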

Abstract

Despite their great success, deep neural networks rely on high-dimensional, non-robust representations, making them vulnerable to imperceptible perturbations, even in transfer scenarios. To address this, both training-time defenses (e.g., adversarial training and robust architecture design) and post-attack defenses (e.g., input purification and adversarial detection) have been extensively studied. Recently, a limited body of work has preliminarily explored a pre-attack defense paradigm, termed preemptive robustification, which introduces subtle modifications to benign samples prior to attack to proactively resist adversarial perturbations. Unfortunately, their practical applicability remains questionable due to several limitations, including (1) reliance on well-trained classifiers as surrogates to provide robustness priors, (2) substantial computational overhead arising from iterative optimization or trained generators for robustification, and (3) limited interpretability of the optimization- or generation-based robustification processes. Inspired by recent studies revealing a positive correlation between texture intensity and the robustness of benign samples, we show that image sharpening alone can efficiently robustify images. To the best of our knowledge, this is the first surrogate-free, optimization-free, generator-free, and human-interpretable robustification approach. Extensive experiments demonstrate that sharpening yields remarkable robustness gains with low computational cost, especially in transfer scenarios.