BATQuant: Outlier-resilient MXFP4 Quantization via Learnable Block-wise Optimization
arXiv cs.CL / 3/18/2026
💬 Opinion · Models & Research
Key Points
- BATQuant introduces a Block-wise Affine Transformation that confines rotations to MXFP4's 32-element block granularity, preventing outliers from propagating across blocks and preserving local quantization behavior (see the first sketch after this list).
- It relaxes the orthogonality constraint on these transforms and uses a Global and Private Kronecker (GPK) decomposition to reduce parameter storage and runtime overhead (second sketch below).
- Block-wise Learnable Clipping is incorporated to suppress residual outliers and shape activation distributions more effectively (third sketch below).
- Extensive experiments on multimodal and text-only LLMs show state-of-the-art results under aggressive W4A4KV16 quantization, recovering up to 96.43% of full-precision performance on multimodal benchmarks.
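To make the first point concrete, here is a minimal PyTorch sketch of applying an independent learnable transform per 32-element channel block, the micro-block size MXFP4 inherits from the OCP Microscaling spec. The function name and tensor layout are illustrative assumptions, not the paper's code.

```python
import torch

BLOCK = 32  # MX-format micro-block size used by MXFP4

def blockwise_transform(x: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Apply an independent learnable BLOCK x BLOCK transform to each
    contiguous block of channels, so the rotation never mixes values
    across MXFP4 block boundaries.

    x: (..., C) activations, with C divisible by BLOCK
    T: (C // BLOCK, BLOCK, BLOCK), one transform per block
    """
    *lead, C = x.shape
    xb = x.reshape(*lead, C // BLOCK, BLOCK)       # split channels into MX blocks
    yb = torch.einsum('...nb,nbc->...nc', xb, T)   # per-block matmul, no cross-block mixing
    return yb.reshape(*lead, C)

# Usage: 128 channels -> 4 blocks; identity init stands in for learned transforms
x = torch.randn(4, 128)
T = torch.eye(BLOCK).repeat(128 // BLOCK, 1, 1)
y = blockwise_transform(x, T)
```

Because each block is transformed in isolation, an outlier can only be redistributed among the 31 other values sharing its MXFP4 scale, which is what keeps quantization behavior local.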
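The GPK decomposition can be sketched as parameterizing each 32×32 block transform as a Kronecker product of one small global factor shared by all blocks and one small private factor per block. The factor sizes here (8×8 global, 4×4 private) are illustrative assumptions; the paper may choose different splits.

```python
import torch

def build_gpk_transforms(n_blocks: int, g_dim: int = 8, p_dim: int = 4) -> torch.Tensor:
    """Sketch of a Global and Private Kronecker (GPK) parameterization:
    T_b = kron(G, P_b), with a shared g_dim x g_dim global factor G and
    a private p_dim x p_dim factor P_b per block (g_dim * p_dim must
    equal the MX block size, 32). Storage drops from
    n_blocks * 32 * 32 entries to g_dim**2 + n_blocks * p_dim**2.
    """
    G = torch.eye(g_dim)                         # shared global factor (learnable in practice)
    P = torch.eye(p_dim).repeat(n_blocks, 1, 1)  # private per-block factors
    # Batched Kronecker product: T[n, i*p_dim+k, j*p_dim+l] = G[i, j] * P[n, k, l]
    T = torch.einsum('ij,nkl->nikjl', G, P)
    return T.reshape(n_blocks, g_dim * p_dim, g_dim * p_dim)
```

For 128 blocks, this is 64 + 128·16 = 2,112 parameters instead of 128·1,024 = 131,072, which is where the storage and runtime savings come from.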
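Block-wise learnable clipping might look like the following sketch: a sigmoid-bounded learnable fraction of each block's absolute maximum. This exact parameterization is an assumption for illustration, not taken from the paper.

```python
import torch

class BlockwiseClip(torch.nn.Module):
    """Learnable per-block clipping: residual outliers are clamped to a
    learned fraction of each block's absolute maximum before MXFP4
    quantization. The sigmoid keeps the fraction in (0, 1)."""

    def __init__(self, n_blocks: int, block: int = 32):
        super().__init__()
        self.block = block
        # sigmoid(4) ~= 0.98, i.e. start with almost no clipping
        self.alpha = torch.nn.Parameter(torch.full((n_blocks, 1), 4.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        *lead, C = x.shape
        xb = x.reshape(*lead, C // self.block, self.block)
        limit = torch.sigmoid(self.alpha) * xb.abs().amax(dim=-1, keepdim=True)
        return xb.clamp(min=-limit, max=limit).reshape(*lead, C)
```

Tightening the clip per block trades a small error on the outlier for finer MXFP4 resolution on the remaining 31 values that share its scale.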