Diagonal-Tiled Mixed-Precision Attention for Efficient Low-Bit MXFP Inference
arXiv cs.LG / 4/7/2026
📰 News · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage · Models & Research
Key Points
- The paper introduces a new low-bit, mixed-precision attention kernel for transformer/LLM inference using the MXFP (microscaling floating-point) format to address attention’s quadratic cost and memory bandwidth limits.
- It proposes “Diagonal-Tiled Mixed-Precision Attention (DMA),” which applies two low-bit computation modes at the tile level and is implemented as a fused Triton kernel to improve hardware parallelism and memory efficiency (a rough sketch of the tile-level idea follows the Key Points).
- Experiments on NVIDIA B200 GPUs show negligible quality degradation in text generation while achieving notable speedups attributable to kernel fusion.
- The authors have released code on GitHub, enabling practitioners to adopt and benchmark the kernel in their own inference stacks.
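
To make the tile-level idea concrete, here is a minimal, hedged sketch in plain PyTorch, not the paper’s fused Triton kernel. It assumes, from the title alone, that tiles on the causal diagonal get a higher-precision low-bit mode while off-diagonal tiles get a lower-precision one; the paper’s actual mode assignment may differ. The helper names `mxfp_quantize` and `diagonal_tiled_attention` and the bit widths are hypothetical, and MXFP is only emulated with per-block power-of-two scales rather than hardware MXFP8/MXFP4 types.

```python
# Illustrative sketch only -- NOT the paper's fused Triton kernel.
# Assumption: diagonal tiles use a higher-precision mode, off-diagonal tiles
# a lower-precision one. MXFP is emulated: each block of 32 elements shares a
# power-of-two scale and values are coarsely rounded within that block.
import math
import torch


def mxfp_quantize(x: torch.Tensor, bits: int, block: int = 32) -> torch.Tensor:
    """Emulate microscaling quantization (hypothetical helper): each `block`
    of elements along the last dim shares a power-of-two scale, and values
    are rounded to `bits` bits of resolution relative to that scale."""
    shape = x.shape
    x = x.reshape(-1, block)                       # requires numel % block == 0
    amax = x.abs().amax(dim=-1, keepdim=True).clamp_min(1e-12)
    scale = torch.exp2(torch.floor(torch.log2(amax)))  # shared block scale
    step = scale * 2.0 ** (-bits)                  # quantization step in block
    return (torch.round(x / step) * step).reshape(shape)


def diagonal_tiled_attention(q, k, v, tile: int = 64,
                             hi_bits: int = 7, lo_bits: int = 2):
    """Causal attention for a single head, q/k/v of shape (T, d), computed
    tile by tile with a per-tile precision mode (illustrative bit widths)."""
    T, d = q.shape
    out = torch.zeros_like(v)
    for i in range(math.ceil(T / tile)):           # query tiles
        r0, r1 = i * tile, min((i + 1) * tile, T)
        scores = torch.full((r1 - r0, T), float("-inf"))
        for j in range(i + 1):                     # causal: key tiles <= query tile
            c0, c1 = j * tile, min((j + 1) * tile, T)
            bits = hi_bits if i == j else lo_bits  # two modes, chosen per tile
            qt = mxfp_quantize(q[r0:r1], bits)
            kt = mxfp_quantize(k[c0:c1], bits)
            scores[:, c0:c1] = qt @ kt.transpose(-1, -2) / math.sqrt(d)
        rows = torch.arange(r0, r1)[:, None]       # causal mask inside the diagonal tile
        cols = torch.arange(T)[None, :]
        scores = scores.masked_fill(cols > rows, float("-inf"))
        out[r0:r1] = torch.softmax(scores, dim=-1) @ v
    return out


# Quick check against an unquantized fp32 reference (illustrative sizes).
if __name__ == "__main__":
    T, d = 256, 64
    q, k, v = (torch.randn(T, d) for _ in range(3))
    approx = diagonal_tiled_attention(q, k, v)
    mask = torch.tril(torch.ones(T, T)).bool()
    ref_scores = (q @ k.T / math.sqrt(d)).masked_fill(~mask, float("-inf"))
    ref = torch.softmax(ref_scores, dim=-1) @ v
    print("max abs error vs fp32 reference:", (approx - ref).abs().max().item())
```

In the actual kernel described by the paper, the tile loop, quantized matmuls, and softmax would run inside a single fused Triton program so that score tiles never round-trip through global memory; the Python loops above exist only to make the per-tile precision choice visible.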