PASTA: A Patch-Agnostic Twofold-Stealthy Backdoor Attack on Vision Transformers

arXiv cs.CV · April 23, 2026


Key Points

  • The paper highlights that Vision Transformers (ViTs) are vulnerable to patch-wise backdoor attacks, and that existing approaches often assume a single fixed trigger location during inference.
  • It introduces the Trigger Radiating Effect (TRE): a patch trigger becomes highly effective because ViT self-attention lets it activate backdoors across neighboring patches, not just at its own location.
  • It proposes PASTA, a twofold stealthy backdoor attack that operates in both pixel and attention domains and can activate when the trigger is placed at arbitrary patches during inference.
  • To balance strong TRE with stealthiness, the authors use inter-patch trigger insertion during training and an adaptive, bi-level optimization framework that iteratively updates the model and trigger to avoid local optima.
  • Experiments report an average 99.13% attack success rate across arbitrary patches, along with large gains in stealthiness and improved robustness versus multiple state-of-the-art ViT defenses across four datasets.
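To make the "trigger at arbitrary patches" idea concrete, here is a minimal sketch of patch-aligned trigger insertion on a ViT-style patch grid. The function name, patch size, and random-placement rule are illustrative assumptions for this article, not the paper's exact insertion procedure.

```python
import numpy as np

def insert_trigger(image, trigger, patch_size=16, loc=None, rng=None):
    """Paste a patch-sized trigger at one cell of the ViT patch grid.

    With loc=None a grid cell is sampled at random, mimicking training
    with triggers at varying patch positions (hypothetical sketch; the
    paper's inter-patch insertion strategy is more involved).
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    grid_h, grid_w = h // patch_size, w // patch_size
    if loc is None:
        loc = (int(rng.integers(grid_h)), int(rng.integers(grid_w)))
    r, c = loc
    y, x = r * patch_size, c * patch_size
    poisoned = image.copy()  # leave the clean image untouched
    poisoned[y:y + patch_size, x:x + patch_size] = trigger
    return poisoned, loc
```

At inference time the same pasting step can target any grid cell, which is the setting in which the reported 99.13% average attack success rate is measured.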

Abstract

Vision Transformers (ViTs) have achieved remarkable success across vision tasks, yet recent studies show they remain vulnerable to backdoor attacks. Existing patch-wise attacks typically assume a single fixed trigger location during inference to maximize trigger attention. However, they overlook the self-attention mechanism in ViTs, which captures long-range dependencies across patches. In this work, we observe that a patch-wise trigger can achieve high attack effectiveness when activating backdoors across neighboring patches, a phenomenon we term the Trigger Radiating Effect (TRE). We further find that inter-patch trigger insertion during training can synergistically enhance TRE compared to single-patch insertion. Prior ViT-specific attacks that maximize trigger attention often sacrifice visual and attention stealthiness, making them detectable. Based on these insights, we propose PASTA, a twofold stealthy patch-wise backdoor attack in both pixel and attention domains. PASTA enables backdoor activation when the trigger is placed at arbitrary patches during inference. To achieve this, we introduce a multi-location trigger insertion strategy to enhance TRE. However, preserving stealthiness while maintaining strong TRE is challenging, as TRE is weakened under stealthy constraints. We therefore formulate a bi-level optimization problem and propose an adaptive backdoor learning framework, where the model and trigger iteratively adapt to each other to avoid local optima. Extensive experiments show that PASTA achieves 99.13% attack success rate across arbitrary patches on average, while significantly improving visual and attention stealthiness (144.43x and 18.68x) and robustness (2.79x) against state-of-the-art ViT defenses across four datasets, outperforming CNN- and ViT-based baselines.
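The abstract's adaptive backdoor learning framework alternates between updating the model and the trigger so the two adapt to each other. The skeleton below illustrates that alternating bi-level structure on a toy scalar objective; the function names, the learning rate, and the stand-in loss are all assumptions for illustration, not the paper's actual formulation.

```python
def adaptive_backdoor_loop(model_step, trigger_step, theta, delta, rounds=200):
    """Alternate inner (model) and outer (trigger) updates.

    Hypothetical skeleton of the model-and-trigger co-adaptation idea:
    each round the model parameter adapts to the current trigger, then
    the trigger adapts to the updated model.
    """
    for _ in range(rounds):
        theta = model_step(theta, delta)    # inner step: fit model given trigger
        delta = trigger_step(theta, delta)  # outer step: refine trigger given model
    return theta, delta

# Toy stand-in loss: L(theta, delta) = (theta - delta)**2 + (delta - 1)**2.
# The second term plays the role of a stealthiness-style constraint pulling
# the trigger toward a target value.
LR = 0.2
model_step = lambda t, d: t - LR * 2 * (t - d)
trigger_step = lambda t, d: d - LR * (-2 * (t - d) + 2 * (d - 1))

theta, delta = adaptive_backdoor_loop(model_step, trigger_step, 0.0, 0.0)
```

On this toy objective the alternation converges to theta = delta = 1, the joint minimizer; in the paper's setting the analogous iteration is what lets model and trigger escape the local optima that a one-shot optimization would get stuck in.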