Diagonal-Tiled Mixed-Precision Attention for Efficient Low-Bit MXFP Inference

arXiv cs.LG / 4/7/2026


Key Points

  • The paper introduces a new low-bit, mixed-precision attention kernel for transformer/LLM inference using the MXFP (microscaling floating-point) format to address attention’s quadratic cost and memory bandwidth limits.
  • It proposes “Diagonal-Tiled Mixed-Precision Attention (DMA),” which applies two low-bit computation modes at the tile level and is implemented as a fused Triton kernel to improve hardware parallelism and memory efficiency.
  • Experiments on NVIDIA B200 GPUs show negligible quality degradation in text generation while achieving notable speedups attributable to kernel fusion.
  • The authors have released their code on GitHub, so practitioners can adopt and benchmark the kernel in their own inference stacks.

Abstract

Transformer-based large language models (LLMs) have demonstrated remarkable performance across a wide range of real-world tasks, but their inference cost remains prohibitively high due to the quadratic complexity of attention and the memory bandwidth limitations of high-precision operations. In this work, we present a low-bit mixed-precision attention kernel using the microscaling floating-point (MXFP) data format, exploiting the low-bit compute capabilities of next-generation GPU architectures. Our Diagonal-Tiled Mixed-Precision Attention (DMA) incorporates two kinds of low-bit computation at the tile level and is implemented as a carefully fused Triton kernel, exploiting hardware-level parallelism and memory efficiency to enable fast inference without compromising model performance. Extensive empirical evaluations on NVIDIA B200 GPUs show that our kernel maintains generation quality with negligible degradation while achieving significant speedup through kernel fusion. We release our code at https://github.com/yifu-ding/MP-Sparse-Attn.
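The MXFP format referenced above stores each small block of tensor elements in a low-bit floating-point type with one shared power-of-two scale per block. The paper's actual kernel is a fused Triton implementation; as a rough intuition for the data format alone, here is a minimal NumPy round-trip sketch of MXFP4-style quantization (E2M1 elements, a shared E8M0 power-of-two scale, block size 32 — the function name and exact rounding/scale choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mxfp4_quantize(x, block=32):
    """Hypothetical sketch of MXFP4-style quantize/dequantize:
    each block of `block` elements shares one power-of-two scale,
    and each element is rounded to the nearest FP4 (E2M1) value."""
    # Non-negative magnitudes representable in FP4 E2M1.
    fp4_grid = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
    x = np.asarray(x, dtype=np.float32)
    n = len(x)
    xp = np.pad(x, (0, (-n) % block)).reshape(-1, block)
    # Shared scale per block: exponent = floor(log2(amax)) - emax, with emax = 2 for E2M1.
    amax = np.abs(xp).max(axis=1, keepdims=True)
    safe = np.where(amax > 0, amax, 1.0)
    scale = 2.0 ** (np.floor(np.log2(safe)) - 2)
    # Round each scaled element to the nearest representable magnitude.
    scaled = xp / scale
    nearest = fp4_grid[np.abs(np.abs(scaled)[..., None] - fp4_grid).argmin(axis=-1)]
    q = np.sign(scaled) * nearest
    # Dequantize: multiply back by the shared block scale.
    return (q * scale).reshape(-1)[:n]
```

Because the scale is a pure power of two shared across a block, storage per element drops to 4 bits plus a small amortized scale cost, which is what lets attention matmuls run in low-bit hardware paths while keeping per-block dynamic range.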