QFlash: Bridging Quantization and Memory Efficiency in Vision Transformer Attention

arXiv cs.LG / 4/29/2026


Key Points

  • FlashAttention gains efficiency from tiling, but its online softmax still relies on floating-point math for numerical stability, which blocks fully integer (fully quantized) attention; the recurrence involved is sketched after this list.
  • The paper identifies three key barriers to building an integer-only FlashAttention: overflow/scale explosion in tile-wise accumulation, slow shift-based exponential computation on GPUs, and quantization granularity constraints that require uniform scales for integer comparisons.
  • QFlash is introduced as an end-to-end integer FlashAttention design that computes softmax entirely in the integer domain and is implemented as a single Triton kernel.
  • Experiments across seven attention workloads from ViT, DeiT, and Swin show up to 6.73× speedups over I-ViT and up to 8.69× speedups on Swin, with 18.8% lower energy use than FP16 FlashAttention and no Top-1 accuracy loss on ViT/DeiT.
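
For reference, the snippet below is a minimal floating-point NumPy sketch of the tiled online-softmax recurrence that FlashAttention uses (the first key point above). The tile size, variable names, and NumPy implementation are illustrative assumptions, not code from the paper.

```python
import numpy as np

def online_softmax_attention(Q, K, V, tile=64):
    """Floating-point reference for the tiled (online) softmax recurrence.

    A running row-wise max `m` and denominator `l` are kept per query row,
    and previously accumulated results are rescaled whenever a new key/value
    tile raises the max. This exp/rescale step is what is hard to reproduce
    with integer-only arithmetic.
    """
    n, d = Q.shape
    out = np.zeros((n, V.shape[1]))
    m = np.full(n, -np.inf)   # running row-wise max of attention scores
    l = np.zeros(n)           # running softmax denominator per query row
    for start in range(0, K.shape[0], tile):
        Kt, Vt = K[start:start + tile], V[start:start + tile]
        S = (Q @ Kt.T) / np.sqrt(d)            # scores for this tile
        m_new = np.maximum(m, S.max(axis=1))   # updated running max
        alpha = np.exp(m - m_new)              # rescale factor for old state
        P = np.exp(S - m_new[:, None])         # tile-local exponentials
        l = alpha * l + P.sum(axis=1)
        out = alpha[:, None] * out + P @ Vt
        m = m_new
    return out / l[:, None]
```

The `exp` calls and the `alpha` rescaling are where floating point enters the kernel; keeping these steps stable with integer-only values is exactly the scale-explosion and exponential-approximation problem described in the second key point.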

Abstract

FlashAttention improves efficiency through tiling, but its online softmax still relies on floating-point arithmetic for numerical stability, making full quantization difficult. We identify three main obstacles to integer-only FlashAttention: (1) scale explosion during tile-wise accumulation, (2) inefficient shift-based exponential operations on GPUs, and (3) quantization granularity constraints requiring uniform scales for integer comparison. To address these challenges, we propose QFlash, an end-to-end integer FlashAttention design that performs softmax entirely in the integer domain and runs as a single Triton kernel. On seven attention workloads from ViT, DeiT, and Swin models, QFlash achieves up to 6.73× speedup over I-ViT and up to 8.69× speedup on Swin, while reducing energy consumption by 18.8% compared to FP16 FlashAttention, without sacrificing Top-1 accuracy on ViT/DeiT and remaining competitive on Swin under per-tensor quantization. Our code is publicly available at https://github.com/EfficientCompLab/qflash.
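
To make the granularity constraint in the abstract concrete, the sketch below shows symmetric per-tensor quantization, where one scale covers the whole tensor. It is a minimal illustration under assumptions not taken from the paper (int8 symmetric quantization, NumPy instead of the Triton kernel, hypothetical function names): with a single scale for Q and a single scale for K, every integer score tile shares the common scale, so integer comparisons such as the running max remain valid across tiles; finer per-tile scales would first require floating-point rescaling.

```python
import numpy as np

def quantize_per_tensor(x, num_bits=8):
    """Symmetric per-tensor quantization: one scale for the whole tensor."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 16))
K = rng.standard_normal((32, 16))
Q_q, s_q = quantize_per_tensor(Q)
K_q, s_k = quantize_per_tensor(K)

# Because Q and K each use one scale, every integer score tile shares the
# common scale s_q * s_k, so the running max can be tracked and compared
# directly on the int32 values across tiles.
S_int = Q_q @ K_q.T
row_max_int = S_int.max(axis=1)
S_real = S_int * (s_q * s_k)  # dequantize only where real values are needed
```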