Discrete Cosine Transform Based Decorrelated Attention for Vision Transformers

arXiv cs.CV / 5/4/2026

💬 Opinion · Models & Research

Key Points

  • The paper proposes using Discrete Cosine Transform (DCT) structure to improve Vision Transformers, addressing the difficulty and cost of learning query/key/value projection weights from random initialization.
  • It introduces a DCT-coefficient-based initialization method for self-attention that preserves structure and yields consistent classification accuracy improvements on CIFAR-10 and ImageNet-1K.
  • The authors also present a DCT-based attention compression approach that truncates high-frequency DCT components of input patches to exploit frequency-domain decorrelation and reduce projection dimensionality.
  • Experiments on Swin Transformer show that the compression significantly cuts computational overhead while keeping performance comparable to baselines.
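
The DCT-based initialization idea from the key points above can be sketched as follows. This is a minimal illustration, not the paper's exact recipe: it assumes the projection weights are seeded with rows of an orthonormal DCT-II basis matrix, which is the most direct way to "initialize using DCT coefficients" while preserving structure.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n (rows are basis vectors)."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # spatial index
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)    # DC row scaling for orthonormality
    return basis

def dct_init(dim: int) -> np.ndarray:
    """Hypothetical DCT-based initializer for a square Q/K/V projection weight.
    The paper's actual scheme may scale or permute these coefficients."""
    return dct_matrix(dim)

W_q = dct_init(64)
# The resulting weight is orthonormal: W_q @ W_q.T is the identity.
```

Starting from an orthogonal, frequency-ordered basis gives the attention projections a structured, decorrelating transform at step zero rather than random noise, which is consistent with the structure-preserving claim in the key points.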

Abstract

Self-attention is central to the success of Transformer architectures; however, learning the query, key, and value projections from random initialization remains challenging and computationally expensive. In this paper, we propose two complementary methods that leverage the Discrete Cosine Transform (DCT) to enhance the efficiency and performance of Vision Transformers. First, we address the initialization problem by introducing a simple yet effective DCT-based initialization strategy for self-attention, where projection weights are initialized using DCT coefficients. This structure-preserving approach consistently improves classification accuracy on the CIFAR-10 and ImageNet-1K benchmarks. Second, we propose a DCT-based attention compression technique that exploits the decorrelation properties of the frequency domain. Observing that high-frequency DCT coefficients typically correspond to noise, we truncate high-frequency components of the input patches, thereby reducing the dimensionality of the query, key, and value projections without sacrificing accuracy. Experiments on Swin Transformer models demonstrate that the proposed compression method achieves a substantial reduction in computational overhead while maintaining comparable performance.
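
The compression step described in the abstract can be sketched as follows. This is a hedged illustration of the general idea, not the paper's implementation: it assumes patch embeddings are transformed along the channel dimension with an orthonormal DCT-II, after which only the first `keep` low-frequency coefficients are retained, so the Q/K/V projections operate on a smaller dimension.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)
    return basis

def compress_patches(x: np.ndarray, keep: int) -> np.ndarray:
    """Map patch embeddings (tokens x dim) to the DCT domain and drop
    high-frequency coefficients, keeping only the first `keep` of them.
    (Illustrative; the paper applies this inside Swin attention blocks.)"""
    d = x.shape[-1]
    C = dct_matrix(d)
    coeffs = x @ C.T            # channel-wise DCT: row i becomes C @ x[i]
    return coeffs[..., :keep]   # truncate high frequencies

x = np.random.randn(196, 64)        # 196 patch tokens, 64-dim embeddings
x_c = compress_patches(x, keep=32)  # Q/K/V now project from 32 dims, not 64
```

Because the DCT basis is orthonormal, keeping all coefficients is lossless (`coeffs @ C` recovers `x` exactly), and truncation discards the components the abstract associates with noise, which is what reduces the projection cost without sacrificing accuracy.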