AI Navigate

Backdoor Directions in Vision Transformers

arXiv cs.CV / 3/12/2026


Key Points

  • The paper identifies a specific 'trigger direction' in Vision Transformer activations that encodes the internal representation of a backdoor when a trigger is present.
  • It demonstrates the causal role of this direction by showing that interventions in both activation and parameter space consistently modulate backdoor behavior across multiple datasets and attack types.
  • The trigger direction is used as a diagnostic tool to trace how backdoor features are processed across layers, revealing distinct logic for static-patch versus stealthy distributed triggers.
  • The study examines the link between backdoors and adversarial attacks, testing whether PGD-based perturbations can (de-)activate the identified trigger mechanism.
  • It proposes a data-free, weight-based detection scheme for stealthy-trigger attacks, illustrating how mechanistic interpretability can diagnose and address security vulnerabilities in computer vision.
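The "trigger direction" in the first bullet can be illustrated with a minimal difference-of-means sketch: given activations from clean and triggered inputs, the direction is estimated as the normalized shift between their means. This is a hypothetical toy reconstruction on synthetic data, not the paper's actual procedure; the dimension, sample counts, and shift magnitude are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's method): estimate a "trigger direction" as the
# normalized difference of mean activations between triggered and clean inputs.
rng = np.random.default_rng(0)
d = 768                                    # typical ViT hidden dimension (assumed)
planted = rng.normal(size=d)
planted /= np.linalg.norm(planted)         # ground-truth direction we plant

clean_acts = rng.normal(size=(1024, d))    # stand-in activations on clean images
trig_acts = rng.normal(size=(1024, d)) + 4.0 * planted  # trigger shifts activations

def trigger_direction(clean, triggered):
    """Difference-of-means estimate of the trigger direction (unit norm)."""
    v = triggered.mean(axis=0) - clean.mean(axis=0)
    return v / np.linalg.norm(v)

v_hat = trigger_direction(clean_acts, trig_acts)
cosine = float(abs(v_hat @ planted))       # alignment with the planted direction
print(round(cosine, 3))
```

On this synthetic setup the recovered direction aligns closely with the planted one; in a real model one would read activations from a chosen transformer layer instead of sampling Gaussians.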

Abstract

This paper investigates how backdoor attacks are represented within Vision Transformers (ViTs). By assuming knowledge of the trigger, we identify a specific "trigger direction" in the model's activations that corresponds to the internal representation of the trigger. We confirm the causal role of this linear direction by showing that interventions in both activation and parameter space consistently modulate the model's backdoor behavior across multiple datasets and attack types. Using this direction as a diagnostic tool, we trace how backdoor features are processed across layers. Our analysis reveals distinct qualitative differences: static-patch triggers follow a different internal logic than stealthy, distributed triggers. We further examine the link between backdoors and adversarial attacks, specifically testing whether PGD-based perturbations (de-)activate the identified trigger mechanism. Finally, we propose a data-free, weight-based detection scheme for stealthy-trigger attacks. Our findings show that mechanistic interpretability offers a robust framework for diagnosing and addressing security vulnerabilities in computer vision.
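The activation-space intervention the abstract describes can be sketched as directional ablation: projecting the estimated trigger direction out of an activation vector and checking that the trigger component vanishes. This is a minimal illustration of the general technique, not the authors' implementation; the vector sizes and the injected trigger strength are assumptions.

```python
import numpy as np

# Toy sketch of an activation-space intervention (directional ablation):
# remove the component of an activation along the estimated trigger direction.
def ablate_direction(act, v):
    """Project the component of `act` along unit vector `v` out of `act`."""
    v = v / np.linalg.norm(v)
    return act - (act @ v) * v

rng = np.random.default_rng(1)
v = rng.normal(size=768)
v /= np.linalg.norm(v)                    # estimated trigger direction (assumed known)
act = rng.normal(size=768) + 5.0 * v      # activation carrying a trigger component
ablated = ablate_direction(act, v)
print(round(float(ablated @ v), 6))       # residual component along v
```

After ablation the activation is orthogonal to the trigger direction; in the paper's setting, applying such an edit mid-forward-pass (or the analogous edit in parameter space) is what lets one test whether the direction causally drives the backdoor behavior.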