AI Navigate

Causal Attribution via Activation Patching

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • CAAP performs causal attribution via patch-level activation patching, intervening on internal activations rather than relying on learned masks or synthetic perturbations to estimate patch influence.
  • For each patch, the method inserts the corresponding source-image activations into a neutral target context across intermediate layers and uses the resulting target-class score as the attribution signal.
  • The approach aims to capture the causal contribution of patch-specific internal representations, avoiding late-layer global mixing that can reduce spatial localization.
  • Empirical results show CAAP outperforms existing attribution methods across multiple ViT backbones and standard metrics, producing more faithful and localized attribution maps.
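The intervention pattern described above can be sketched with a toy stand-in model. Everything below is hypothetical illustration, not the paper's implementation: the patch-mixing "ViT", the `forward`/`caap_map` helpers, the layer range, and all weights are invented for the sketch. The point is the mechanic of splicing one patch's source-image activations into a neutral run over intermediate layers and reading off the target-class score.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PATCHES, DIM, N_LAYERS, N_CLASSES = 4, 8, 6, 3

# Toy ViT stand-in: each layer mixes patch tokens (attention surrogate),
# then applies a per-token linear map and a nonlinearity. Hypothetical.
mix = [np.eye(N_PATCHES) + 0.3 * rng.normal(size=(N_PATCHES, N_PATCHES))
       for _ in range(N_LAYERS)]
proj = [np.eye(DIM) + 0.2 * rng.normal(size=(DIM, DIM))
        for _ in range(N_LAYERS)]
head = rng.normal(size=(DIM, N_CLASSES))  # classifier on the mean token

def forward(tokens, patch_plan=None):
    """Run the toy model. patch_plan maps layer -> (patch_idx, activation);
    at those layers the given patch's activation is overwritten, which is
    the CAAP-style intervention on internal representations."""
    acts = []
    for l in range(N_LAYERS):
        tokens = np.tanh(mix[l] @ tokens @ proj[l])
        if patch_plan and l in patch_plan:
            p, a = patch_plan[l]
            tokens = tokens.copy()
            tokens[p] = a
        acts.append(tokens)
    logits = tokens.mean(axis=0) @ head
    return logits, acts

def caap_map(source, neutral, target_class, lo=1, hi=4):
    """Per-patch attribution: insert the source run's activations for that
    patch into the neutral run over layers [lo, hi) and record the
    resulting target-class score."""
    _, src_acts = forward(source)
    scores = np.empty(N_PATCHES)
    for p in range(N_PATCHES):
        plan = {l: (p, src_acts[l][p]) for l in range(lo, hi)}
        logits, _ = forward(neutral, plan)
        scores[p] = logits[target_class]
    return scores
```

In a real backbone the same splice would typically be done with forward hooks on the transformer blocks; the intermediate layer range plays the role the bullets describe, capturing evidence after initial representation formation but before late-layer global mixing.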

Abstract

Attribution methods for Vision Transformers (ViTs) aim to identify image regions that influence model predictions, but producing faithful and well-localized attributions remains challenging. Existing gradient-based and perturbation-based techniques often fail to isolate the causal contribution of internal representations associated with individual image patches. The key challenge is that class-relevant evidence is formed through interactions between patch tokens across layers, and input-level perturbations can be poor proxies for patch importance, since they may fail to reconstruct the internal evidence actually used by the model. We propose Causal Attribution via Activation Patching (CAAP), which estimates the contribution of individual image patches to the ViT's prediction by directly intervening on internal activations rather than using learned masks or synthetic perturbation patterns. For each patch, CAAP inserts the corresponding source-image activations into a neutral target context over an intermediate range of layers and uses the resulting target-class score as the attribution signal. The resulting attribution map reflects the causal effect of patch-associated internal representations on the model's prediction. The causal intervention serves as a principled measure of patch influence by capturing class-relevant evidence after initial representation formation, while avoiding late-layer global mixing that can reduce spatial specificity. Across multiple ViT backbones and standard metrics, CAAP significantly outperforms existing methods and produces more faithful and localized attributions.
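The abstract's "standard metrics" for attribution faithfulness are not named, but a common one in this literature is a deletion-style curve: remove patches in decreasing attribution order and watch the class score fall. The helper below is a generic, simplified sketch of that idea (names and the zeroing-based "deletion" are assumptions, not the paper's protocol); a more faithful map should drive the score down faster, giving a lower average.

```python
import numpy as np

def deletion_score(model_score, image, attribution):
    """Generic deletion-metric sketch: zero out patches from most to least
    attributed, recording the model's class score after each removal.
    Returns the mean score over the curve (lower = more faithful map)."""
    order = np.argsort(attribution)[::-1]  # most-attributed patch first
    img = image.copy()
    scores = [model_score(img)]
    for p in order:
        img[p] = 0.0  # "delete" a patch by zeroing it
        scores.append(model_score(img))
    return float(np.mean(scores))
```

For example, with a toy model whose score is just the sum of patch values, an attribution map that ranks the largest patches first yields a lower deletion score than one that ranks them last.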