Complementary Text-Guided Attention for Zero-Shot Adversarial Robustness
arXiv cs.CV / 3/20/2026
💬 Opinion · Models & Research
Key Points
- The authors observe that adversarial perturbations shift text-guided attention in CLIP-like models, motivating attention-based robustness defenses.
- They propose Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR) with a Local Attention Refinement Module and a Global Attention Constraint Module to improve robustness while preserving clean accuracy.
- They further introduce Complementary Text-Guided Attention (Comp-TGA), which combines class-prompt guided attention with reversed attention from the non-class prompt to better capture foreground details.
- Experimental results show zero-shot robust accuracy improvements of 9.58% for TGA-ZSR and 11.95% for Comp-TGA across 16 datasets.
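
The complementary attention idea in the last two points can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the fusion rule (reversing the non-class attention as `1 - attn` and averaging it with the class-prompt attention) and the min-max renormalization are assumptions for clarity; the actual TGA-ZSR/Comp-TGA modules operate inside a CLIP-like model.

```python
import numpy as np

def complementary_attention(attn_class: np.ndarray,
                            attn_nonclass: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of complementary text-guided attention.

    attn_class    : attention map guided by the class prompt, values in [0, 1]
    attn_nonclass : attention map guided by the non-class prompt, values in [0, 1]
    """
    # Reversed attention: regions the non-class prompt ignores are
    # assumed to correlate with the foreground of interest.
    reversed_nonclass = 1.0 - attn_nonclass
    # Assumed fusion rule: simple average of the two cues.
    combined = 0.5 * (attn_class + reversed_nonclass)
    # Min-max renormalization back into [0, 1].
    lo, hi = combined.min(), combined.max()
    return (combined - lo) / (hi - lo + 1e-8)
```

Averaging is just one plausible fusion choice; a weighted sum or elementwise maximum would be equally easy to swap in under this sketch.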
Related Articles
When AI Grows Up: Identity, Memory, and What Persists Across Versions
Dev.to
OpenAI is throwing everything into building a fully automated researcher
MIT Technology Review
Kimi just published a paper replacing residual connections in transformers. results look legit
Reddit r/LocalLLaMA
Summary of Optimization Targets in Machine Learning (Also Useful for the JDLA E-Certification Exam)
Qiita
14 Best Self-Hosted Claude Alternatives for AI and Coding in 2026
Dev.to