PASTA: A Patch-Agnostic Twofold-Stealthy Backdoor Attack on Vision Transformers
arXiv cs.CV · April 23, 2026
Key Points
- The paper highlights that Vision Transformers (ViTs) are vulnerable to patch-wise backdoor attacks, and that existing approaches often assume a single fixed trigger location during inference.
- It introduces the Trigger Radiating Effect (TRE): a patch trigger becomes far more effective when ViT self-attention propagates its influence to neighboring patches, activating the backdoor beyond the trigger's own location.
- It proposes PASTA, a twofold stealthy backdoor attack that operates in both pixel and attention domains and can activate when the trigger is placed at arbitrary patches during inference.
- To balance a strong TRE with stealthiness, the authors insert triggers between patches during training and use an adaptive bi-level optimization framework that alternately updates the model and the trigger, helping the attack escape local optima.
- Experiments report a 99.13% average attack success rate with the trigger placed at arbitrary patches, along with large gains in stealthiness and greater resistance to multiple state-of-the-art ViT defenses across four datasets.
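The alternating bi-level update described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the scalar `attack_loss`, the learning rate, and the gradient expressions are illustrative placeholders, not the paper's actual objective or training procedure.

```python
def attack_loss(theta, delta):
    """Stand-in attack objective: low when the toy 'model' parameter
    theta aligns with the trigger delta (not the paper's real loss)."""
    return (theta - delta) ** 2

theta, delta = 1.0, 0.0  # toy model parameter and trigger value
lr = 0.1

for _ in range(50):
    # Inner step: update the "model" on the current trigger.
    theta -= lr * 2 * (theta - delta)
    # Outer step: update the trigger against the freshly updated model.
    # Alternating the two updates, instead of optimizing them jointly,
    # is the escape-local-optima idea the key point describes.
    delta -= lr * -2 * (theta - delta)

final_loss = attack_loss(theta, delta)
```

With these toy dynamics the gap `theta - delta` shrinks geometrically each round, so `final_loss` ends up near zero; in the paper this alternation instead trades off attack strength against stealthiness.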