
BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers

arXiv cs.CV / 3/11/2026


Key Points

  • The paper introduces BinaryAttention, a novel 1-bit quantization method for QK-attention in vision and diffusion transformers, which significantly reduces computational complexity by using only the sign of queries and keys and replacing floating dot products with bitwise operations.
  • BinaryAttention includes a learnable bias to counteract information loss from binarization and employs quantization-aware training and self-distillation to maintain accuracy despite the aggressive 1-bit quantization.
  • The method achieves over twice the speed of FlashAttention2 on A100 GPUs while matching or surpassing the accuracy of full-precision attention on various vision and diffusion transformer benchmarks.
  • This approach offers a practical and highly efficient alternative to traditional full-precision attention mechanisms, potentially enabling faster training and inference for low-bit vision and diffusion models.
  • The authors have made their code and models publicly available, facilitating broader adoption and further research in efficient transformer architectures.
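The core trick behind the speedup described above can be illustrated in a few lines. This is a minimal NumPy sketch of sign-only QK scoring, not the paper's CUDA kernel: real kernels would pack the sign bits into machine words and use XNOR plus popcount, which this emulates with boolean comparisons. The function name `binary_qk_scores` is illustrative, not from the paper.

```python
import numpy as np

def binary_qk_scores(Q, K):
    """Attention logits from 1-bit queries/keys (illustrative sketch).

    The dot product of two sign vectors over d dimensions equals
    (#agreeing signs) - (#disagreeing signs) = 2 * matches - d, so it can
    be computed with bitwise XNOR + popcount instead of float multiplies.
    Here we emulate that with boolean arrays for clarity.
    """
    d = Q.shape[-1]
    q_bits = (Q >= 0)  # 1-bit query: keep only the sign
    k_bits = (K >= 0)  # 1-bit key: keep only the sign
    # XNOR-and-count: how many dimensions agree in sign, per (query, key) pair.
    matches = (q_bits[:, None, :] == k_bits[None, :, :]).sum(-1)
    return 2 * matches - d  # sign dot product: +1 per match, -1 per mismatch
```

For nonzero inputs this reproduces `np.sign(Q) @ np.sign(K).T` exactly, while the bit-packed production version replaces every floating-point multiply-accumulate with integer bit operations.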

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09582 (cs)
[Submitted on 10 Mar 2026]

Title: BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers

Abstract: Transformers have achieved widespread and remarkable success, yet the computational complexity of their attention modules remains a major bottleneck for vision tasks. Existing methods mainly employ 8-bit or 4-bit quantization to balance efficiency and accuracy. In this paper, with theoretical justification, we show that binarizing attention preserves the essential similarity relationships, and propose BinaryAttention, an effective method for fast and accurate 1-bit QK-attention. Specifically, we retain only the signs of queries and keys when computing attention, and replace floating-point dot products with bitwise operations, significantly reducing the computational cost. We mitigate the inherent information loss of 1-bit quantization by incorporating a learnable bias, and enable end-to-end acceleration. To maintain the accuracy of attention, we adopt quantization-aware training and self-distillation, mitigating quantization errors while ensuring sign-aligned similarity. BinaryAttention is more than 2x faster than FlashAttention2 on A100 GPUs. Extensive experiments on vision transformer and diffusion transformer benchmarks demonstrate that BinaryAttention matches or even exceeds full-precision attention, validating its effectiveness. Our work provides a highly efficient and effective alternative to full-precision attention, pushing the frontier of low-bit vision and diffusion transformers. The code and models can be found at this https URL.
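The abstract mentions two accuracy-recovery ingredients: a learnable bias on the binarized scores and quantization-aware training. A standard way to train through a hard sign function is the straight-through estimator (STE). The sketch below is an assumption about how such training could look in NumPy; the function names, the `clip` threshold, and the per-score `bias` placement are illustrative and not taken from the paper.

```python
import numpy as np

def ste_sign(x):
    # Forward pass of the 1-bit quantizer: hard sign (x >= 0 -> +1, else -1).
    return np.where(x >= 0, 1.0, -1.0)

def ste_sign_grad(x, grad_out, clip=1.0):
    # Backward pass with the straight-through estimator: treat sign() as the
    # identity inside [-clip, clip] and block gradients outside, so the
    # non-differentiable quantizer can still be trained end to end.
    return grad_out * (np.abs(x) <= clip)

def biased_binary_scores(Q, K, bias):
    # Binarized attention logits plus a learnable bias, one way to compensate
    # for the information discarded by keeping only the signs of Q and K.
    return ste_sign(Q) @ ste_sign(K).T + bias
```

During quantization-aware training, `ste_sign` is used in the forward pass while `ste_sign_grad` supplies the surrogate gradient, and a self-distillation loss can additionally pull these binarized logits toward those of a full-precision teacher.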
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09582 [cs.CV]
  (or arXiv:2603.09582v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09582

Submission history

From: Chaodong Xiao
[v1] Tue, 10 Mar 2026 12:31:54 UTC (1,054 KB)