AI Navigate

Complementary Text-Guided Attention for Zero-Shot Adversarial Robustness

arXiv cs.CV / 3/20/2026

💬 Opinion · Models & Research

Key Points

  • The authors observe that adversarial perturbations induce shifts in text-guided attention in CLIP-like models, motivating robustness improvements.
  • They propose Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR) with a Local Attention Refinement Module and a Global Attention Constraint Module to improve robustness while preserving clean accuracy.
  • They further introduce Complementary Text-Guided Attention (Comp-TGA), which combines class-prompt guided attention with reversed attention from the non-class prompt to better capture foreground details.
  • Experimental results show 9.58% and 11.95% improvements in zero-shot robust accuracy over state-of-the-art methods for TGA-ZSR and Comp-TGA, respectively, across 16 datasets.

Abstract

Owing to their impressive zero-shot capabilities, pre-trained vision-language models (e.g., CLIP) have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we observe that adversarial perturbations induce shifts in text-guided attention. Building on this observation, we propose a simple yet effective strategy: Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR). This framework incorporates two components: a Local Attention Refinement Module and a Global Attention Constraint Module. Our goal is to maintain the generalization of the CLIP model while enhancing its adversarial robustness. The Global Attention Constraint Module acquires text-guided attention from both the target and original models using clean examples; its objective is to preserve model performance on clean samples while improving overall robustness. However, we observe that the method occasionally focuses on irrelevant or spurious features, which can lead to suboptimal performance and undermine its robustness in certain scenarios. To overcome this limitation, we further propose a novel approach called Complementary Text-Guided Attention (Comp-TGA). This method integrates two types of foreground attention: attention guided by the class prompt and reversed attention driven by the non-class prompt. These complementary attention mechanisms allow the model to capture a more comprehensive and accurate representation of the foreground. Experiments validate that TGA-ZSR and Comp-TGA yield 9.58% and 11.95% improvements, respectively, in zero-shot robust accuracy over current state-of-the-art techniques across 16 datasets.
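The core idea of Comp-TGA — blending attention guided by the class prompt with reversed attention from the non-class prompt — can be sketched in a few lines. Note this is an illustrative simplification, not the paper's implementation: the function names (`attn_map`, `comp_tga_attention`), the mixing weight `alpha`, and the `1 - a` reversal of the non-class attention are all assumptions; the actual modules operate inside CLIP's attention layers.

```python
import numpy as np

def attn_map(patch_feats, text_feat):
    """Attention over image patches: softmax of patch-text cosine similarity."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    sim = p @ t                      # cosine similarity per patch
    e = np.exp(sim - sim.max())      # numerically stable softmax
    return e / e.sum()

def comp_tga_attention(patch_feats, class_feat, nonclass_feat, alpha=0.5):
    """Blend class-prompt attention with reversed non-class-prompt attention.

    The reversal (1 - a) emphasizes patches the non-class prompt attends
    to least, which should also be foreground; `alpha` balances the two
    complementary views (assumed form, for illustration only).
    """
    a_cls = attn_map(patch_feats, class_feat)
    a_non = attn_map(patch_feats, nonclass_feat)
    a_rev = 1.0 - a_non
    a_rev /= a_rev.sum()             # renormalize to a distribution
    return alpha * a_cls + (1.0 - alpha) * a_rev
```

Because both inputs are normalized distributions over patches, the convex combination is itself a valid attention map, so it can drop in wherever a single text-guided attention map was used.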