
HAViT: Historical Attention Vision Transformer

arXiv cs.CV / March 20, 2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • HAViT proposes cross-layer attention propagation: historical attention matrices are preserved and integrated across encoder layers to refine inter-layer information flow in Vision Transformers.
  • The approach requires minimal architectural changes, adding only attention matrix storage and blending operations.
  • Experiments on CIFAR-100 and TinyImageNet show consistent accuracy gains across ViT variants (CIFAR-100: 75.74% to 77.07%; TinyImageNet: 57.82% to 59.07%), with CaiT also improving by 1.01%.
  • The study identifies an optimal blending hyperparameter (alpha = 0.45) and finds that random initialization of the historical attention outperforms zero initialization, accelerating convergence; the code is publicly available on GitHub.
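The blending step summarized above can be sketched as follows. The convex-combination form, and the convention that alpha weights the historical matrix, are assumptions; the post only states that alpha = 0.45 balances current and historical attention:

```python
import numpy as np

def blend_attention(current: np.ndarray, history: np.ndarray,
                    alpha: float = 0.45) -> np.ndarray:
    """Blend the current layer's attention matrix with the stored historical
    attention. Assumed convex-combination form; alpha weights the history."""
    return alpha * history + (1.0 - alpha) * current
```

One convenient property of this form: a convex combination of row-stochastic (softmax) attention matrices is itself row-stochastic, so the blended matrix remains a valid attention distribution.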

Abstract

Vision Transformers have excelled in computer vision, but their attention mechanisms operate independently across layers, limiting information flow and feature learning. We propose an effective cross-layer attention propagation method that preserves and integrates historical attention matrices across encoder layers, offering a principled refinement of inter-layer information flow in Vision Transformers. This approach enables progressive refinement of attention patterns throughout the transformer hierarchy, enhancing feature acquisition and optimization dynamics. The method requires minimal architectural changes, adding only attention matrix storage and blending operations. Comprehensive experiments on CIFAR-100 and TinyImageNet demonstrate consistent accuracy improvements, with ViT performance increasing from 75.74% to 77.07% on CIFAR-100 (+1.33%) and from 57.82% to 59.07% on TinyImageNet (+1.25%). Cross-architecture validation shows similar gains across transformer variants, with CaiT showing a 1.01% enhancement. Systematic analysis identifies the blending hyperparameter of historical attention (alpha = 0.45) as optimal across all configurations, providing the ideal balance between current and historical attention information. Random initialization consistently outperforms zero initialization, indicating that diverse initial attention patterns accelerate convergence and improve final performance. Our code is publicly available at https://github.com/banik-s/HAViT.
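The abstract's two ingredients, per-layer blending of a stored attention history and random (rather than zero) initialization of that history, can be combined into a toy single-head encoder loop. This is a hypothetical realization for illustration only; the exact update rule, the random row-stochastic initialization, and the residual form are assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encoder_with_historical_attention(x, Wq, Wk, Wv,
                                      num_layers=4, alpha=0.45, seed=0):
    """Toy single-head encoder: each layer computes softmax attention, blends
    it with a running historical matrix (assumed rule: alpha weights history),
    then propagates the blended matrix to the next layer. The history starts
    from a random row-normalized matrix, reflecting the reported finding that
    random initialization outperforms zero initialization."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    history = softmax(rng.standard_normal((n, n)))      # random, row-stochastic init
    for layer in range(num_layers):
        q, k, v = x @ Wq[layer], x @ Wk[layer], x @ Wv[layer]
        attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # current-layer attention
        blended = alpha * history + (1 - alpha) * attn  # assumed blending rule
        history = blended                               # store for the next layer
        x = blended @ v + x                             # residual token update
    return x
```

Note how small the change is relative to a standard encoder: one stored `history` matrix and one blending line per layer, matching the paper's claim of minimal architectural overhead.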