Cognitive Alignment At No Cost: Inducing Human Attention Biases For Interpretable Vision Transformers

arXiv cs.CV / April 23, 2026


Key Points

  • The paper studies whether fine-tuning Vision Transformers’ self-attention weights on human saliency fixation maps can reduce the gap between ViT attention and human attentional behavior.
  • Compared with a shuffled-control baseline, the tuned ViT-B/16 shows significant improvement across five saliency metrics and exhibits three human-like attention biases: a shift from the baseline's anti-human large-object bias toward a small-object bias, a strengthened animacy preference, and reduced extreme attention entropy.
  • Bayesian parity analysis indicates the attention-cognition alignment is achieved without sacrificing image classification performance on ImageNet, ImageNet-C, and ObjectNet.
  • Applying an analogous procedure to a ResNet-50 CNN instead worsened both alignment and accuracy, implying that the ViT’s modular self-attention helps decouple spatial priority from representational logic.
  • The authors conclude that biologically grounded priors can emerge as a “free” property from human-aligned attention, improving interpretability of transformer-based vision models.
  • The findings appear in the new preprint arXiv:2604.20027v1, which focuses on interpretability via cognitive alignment in vision transformers.
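The summary does not name the five saliency metrics, but alignment between a model attention map and human fixations is commonly scored with metrics such as Normalized Scanpath Saliency (NSS) and KL divergence. A minimal numpy sketch of two such metrics, assuming a 2D model saliency map and human fixation data on the same grid (the function names and exact formulations here are illustrative, not the paper's):

```python
import numpy as np

def nss(saliency, fixation_points):
    """Normalized Scanpath Saliency: mean z-scored saliency value
    at the human-fixated locations (higher = better alignment)."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(np.mean([z[r, c] for r, c in fixation_points]))

def kl_div(saliency, fixation_density, eps=1e-8):
    """KL divergence from the human fixation density to the model map,
    after normalizing both to probability distributions (lower = better)."""
    p = fixation_density / (fixation_density.sum() + eps)
    q = saliency / (saliency.sum() + eps)
    return float(np.sum(p * np.log(p / (q + eps) + eps)))
```

A model map that concentrates mass exactly where humans fixate yields a high NSS and a near-zero KL divergence against the matching fixation density.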

Abstract

Vision Transformers (ViTs) have become the standard architecture for state-of-the-art image understanding, but their processing diverges substantially from human attentional characteristics. We investigate whether this cognitive gap can be narrowed by fine-tuning the self-attention weights of Google's ViT-B/16 on human saliency fixation maps. To isolate the effect of semantically relevant signals from that of generic human supervision, the tuned model is compared against a shuffled control. Fine-tuning significantly improved alignment across five saliency metrics and induced three hallmark human-like biases: it reversed the baseline's anti-human large-object bias toward small objects, amplified the animacy preference, and diminished extreme attention entropy. Bayesian parity analysis provides decisive to very strong evidence that this cognitive alignment comes at no cost to the model's original classification performance on in-distribution (ImageNet), corrupted (ImageNet-C), and out-of-distribution (ObjectNet) benchmarks. An equivalent procedure applied to a ResNet-50 Convolutional Neural Network (CNN) instead degraded both alignment and accuracy, suggesting that the ViT's modular self-attention mechanism is uniquely suited to dissociating spatial priority from representational logic. These findings demonstrate that biologically grounded priors can be instilled as a free emergent property of human-aligned attention, improving transformer interpretability.
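The "extreme attention entropy" the abstract refers to can be made concrete: the Shannon entropy of an attention distribution over image patches, where a near-one-hot map (very low entropy) or a near-uniform map (very high entropy) counts as extreme. A hypothetical numpy sketch, normalized so the score runs from 0 (fully peaked) to 1 (fully diffuse); the paper's exact formulation is not given in this summary:

```python
import numpy as np

def attention_entropy(attn):
    """Normalized Shannon entropy of an attention distribution over patches.
    Returns ~0 for a one-hot (peaked) map and ~1 for a uniform (diffuse) map."""
    p = attn / attn.sum()                      # normalize to a probability distribution
    h = -np.sum(p * np.log(p + 1e-12))         # entropy in nats
    return float(h / np.log(p.size))           # divide by max entropy log(N)
```

Reducing *extreme* entropy, as described above, would push both kinds of outlier maps toward the intermediate, human-like range.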