Cognitive Alignment At No Cost: Inducing Human Attention Biases For Interpretable Vision Transformers
arXiv cs.CV / 4/23/2026
Key Points
- The paper studies whether fine-tuning a Vision Transformer's self-attention weights on human fixation (saliency) maps can close the gap between ViT attention and human attentional behavior (a minimal sketch of such an alignment objective follows this list).
- Compared with a shuffled-control baseline, the tuned ViT-B/16 improves significantly on five saliency metrics (two representative metrics are sketched after this list) and exhibits three human-like attention biases: it shifts from an anti-human preference for large objects toward small objects, strengthens its preference for animate entities, and avoids extremes of attention entropy.
- Bayesian parity analysis indicates that the attention-cognition alignment comes at no cost to image classification accuracy on ImageNet, ImageNet-C, and ObjectNet (a toy parity check is sketched after this list).
- By contrast, applying an analogous procedure to a ResNet-50 CNN worsens both alignment and accuracy, suggesting that the ViT's modular self-attention helps decouple spatial priority from representational logic.
- The authors conclude that biologically grounded priors can emerge as a “free” property from human-aligned attention, improving interpretability of transformer-based vision models.
- These findings are reported in the new preprint arXiv:2604.20027v1, which focuses on interpretability via cognitive alignment in vision transformers.
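The exact training recipe is not given in this summary, but the core idea of nudging a ViT's attention toward human fixations can be sketched as follows. The sketch assumes a timm ViT-B/16, batches of (image, fixation map) pairs, and a KL-divergence objective over the last block's [CLS]-to-patch attention; all of these specifics are illustrative assumptions, not the authors' procedure.

```python
import torch
import torch.nn.functional as F
import timm

# Pretrained ViT-B/16 whose self-attention we want to align with human fixations.
model = timm.create_model("vit_base_patch16_224", pretrained=True)


def cls_attention_map(model, images):
    """Average the last block's [CLS]->patch attention into a 14x14 spatial map."""
    captured = {}

    def hook(module, inputs, output):
        # timm's fused attention does not expose weights, so recompute them here.
        x = inputs[0]
        B, N, C = x.shape
        qkv = module.qkv(x).reshape(B, N, 3, module.num_heads, C // module.num_heads)
        q, k, _ = qkv.permute(2, 0, 3, 1, 4)
        captured["attn"] = ((q @ k.transpose(-2, -1)) * module.scale).softmax(dim=-1)

    handle = model.blocks[-1].attn.register_forward_hook(hook)
    model(images)
    handle.remove()

    attn = captured["attn"]              # (B, heads, 197, 197)
    cls_to_patches = attn[:, :, 0, 1:]   # [CLS] query over the 196 patch keys
    return cls_to_patches.mean(dim=1).reshape(-1, 14, 14)


def alignment_loss(model, images, fixation_maps):
    """KL divergence between the ViT's [CLS] attention and a human fixation map."""
    pred = cls_attention_map(model, images).flatten(1)
    target = F.interpolate(fixation_maps, size=(14, 14), mode="bilinear",
                           align_corners=False).flatten(1)
    pred = pred / pred.sum(dim=1, keepdim=True)
    target = target / target.sum(dim=1, keepdim=True)
    return F.kl_div(pred.clamp_min(1e-8).log(), target, reduction="batchmean")
```

In practice such a loss would be combined with, or traded off against, the usual classification objective while fine-tuning the attention weights; the shuffled control mentioned above would presumably use the same loss with fixation maps reassigned to the wrong images.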
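The five saliency metrics are not enumerated in this summary. CC (Pearson correlation) and SIM (histogram intersection) below are two standard choices from the saliency-evaluation literature, shown only to illustrate how agreement between an attention map and a fixation map is typically scored; they are not necessarily among the paper's five.

```python
import numpy as np


def cc(pred: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation coefficient between two saliency maps of the same shape."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    t = (target - target.mean()) / (target.std() + 1e-8)
    return float((p * t).mean())


def sim(pred: np.ndarray, target: np.ndarray) -> float:
    """Histogram intersection: sum of element-wise minima of the normalized maps."""
    p = pred / (pred.sum() + 1e-8)
    t = target / (target.sum() + 1e-8)
    return float(np.minimum(p, t).sum())
```

Each metric would be computed per image for the tuned model and for the shuffled control, with the per-metric improvement then tested for significance.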
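Likewise, "Bayesian parity" can be read as checking whether the posterior over the accuracy difference between the tuned and baseline models falls inside a region of practical equivalence (ROPE). The beta-binomial model, flat priors, and 0.5-point ROPE below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)


def accuracy_parity(correct_a, n_a, correct_b, n_b, rope=0.005, draws=100_000):
    """P(|acc_A - acc_B| < rope) under independent Beta(1, 1) priors on each accuracy."""
    acc_a = rng.beta(1 + correct_a, 1 + n_a - correct_a, draws)
    acc_b = rng.beta(1 + correct_b, 1 + n_b - correct_b, draws)
    return float(np.mean(np.abs(acc_a - acc_b) < rope))


# Hypothetical counts: 40,530 vs. 40,490 correct out of the 50,000 ImageNet val images.
# A probability near 1 would indicate practical parity between the two models.
print(accuracy_parity(40_530, 50_000, 40_490, 50_000))
```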
Related Articles
- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption. (Dev.to)
- Trajectory Forecasts in Unknown Environments Conditioned on Grid-Based Plans (Dev.to)
- Why use an AI gateway at all? (Dev.to)
- OpenAI Just Named It Workspace Agents. We Open-Sourced Our Lark Version Six Months Ago (Dev.to)
- GPT Image 2 Subject-Lock Editing: A Practical Guide to input_fidelity (Dev.to)