BWLA: Breaking the Barrier of W1AX Post-Training Quantization for LLMs

arXiv cs.AI / 5/4/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper introduces BWLA, a post-training quantization framework designed to accelerate LLMs by using 1-bit weights while keeping activations at low precision (e.g., 6 bits) without sacrificing accuracy (a toy sketch of this W1A6 setup follows the list).
  • It addresses the key limitation of prior methods—activation “heavy tails”—by using an Orthogonal-Kronecker Transformation (OKT) learned via EM minimization to reshape weights and suppress problematic activation extremes.
  • BWLA further boosts quantizability and performance through Proximal SVD Projection (PSP), which applies lightweight low-rank refinement with minimal computational overhead.
  • Experiments on Qwen3-32B show a Wikitext2 perplexity of 11.92 with 6-bit activations (vs. 38 from prior SOTA), over 70% gains on five zero-shot tasks, and a 3.26× inference speedup, indicating practical value for LLM compression.
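
Below is a minimal, illustrative NumPy sketch of the general W1A6 idea, not BWLA's actual algorithm: weights are binarized with the classic closed-form per-row absolute-mean scale, and activations are uniformly quantized to 6 bits with a simple per-tensor scale. The function names and toy data are invented for this example.

```python
import numpy as np

def binarize_weights(W):
    """1-bit weight quantization: sign(W) with a per-output-channel scale
    alpha = mean(|W|) that minimizes the L2 reconstruction error."""
    alpha = np.mean(np.abs(W), axis=1, keepdims=True)
    return np.sign(W), alpha

def quantize_activations(X, bits=6):
    """Symmetric uniform quantization of activations to `bits` bits,
    using a single per-tensor scale (the simplest possible variant)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(X)) / qmax
    Xq = np.clip(np.round(X / scale), -qmax - 1, qmax)
    return Xq, scale

# Usage: compare an exact matmul with its quantized reconstruction.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))        # toy weight matrix (out_features x in_features)
X = rng.normal(size=(8, 64))         # toy activations (batch x in_features)

B, alpha = binarize_weights(W)
Xq, s = quantize_activations(X, bits=6)
Y_approx = (Xq * s) @ (alpha * B).T  # dequantized 1-bit-weight, 6-bit-activation matmul
Y_exact = X @ W.T
print("relative error:", np.linalg.norm(Y_approx - Y_exact) / np.linalg.norm(Y_exact))
```

Heavy-tailed activations are exactly what breaks the single per-tensor scale used here: one outlier channel inflates `scale` and crushes the resolution of every other channel, which is the failure mode the paper's OKT is designed to avoid.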

Abstract

Large language models (LLMs) have driven major progress in NLP, yet their substantial memory and compute demands still hinder practical deployment. Binarization can compress weights to 1 bit, fundamentally lowering compute and bandwidth costs. However, existing methods cannot address activation heavy tails and thus must keep activations in high precision, preventing true end-to-end acceleration. To overcome this limitation, we propose BWLA (Binarized Weights and Low-bit Activations), the first post-training quantization framework that preserves high accuracy while combining 1-bit weight quantization with low-bit activations (e.g., 6 bits). The Orthogonal-Kronecker Transformation (OKT) learns an orthogonal mapping via EM minimization, converting unimodal weights into symmetric bimodal forms while suppressing activation tails and incoherence. The Proximal SVD Projection (PSP) then performs lightweight low-rank refinement, further enhancing quantizability with minimal overhead. On Qwen3-32B, BWLA reaches a Wikitext2 perplexity of 11.92 under 6-bit activations (vs. 38 for the prior SOTA), improves performance on five zero-shot tasks by more than 70%, and delivers a 3.26× inference speedup, demonstrating strong potential for real-world LLM compression and acceleration.
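
The two ingredients can be illustrated with a toy NumPy sketch. Rotating weights and activations by a shared orthogonal matrix leaves the layer output unchanged ((XQ)(WQ)^T = XW^T) while spreading outlier channels across many coordinates, which is the spirit of OKT, although the paper's transformation is presumably Kronecker-structured (per its name) and learned via EM minimization rather than drawn at random. An SVD of the remaining binarization error then yields a cheap low-rank correction in the spirit of PSP. The helpers `random_orthogonal` and `low_rank_residual` are hypothetical names for this example, not the paper's implementation.

```python
import numpy as np

def random_orthogonal(n, seed=0):
    """A random rotation used as a stand-in for a learned orthogonal transform."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q

def low_rank_residual(W, W_hat, rank=4):
    """Absorb the dominant part of the quantization error W - W_hat
    into a rank-`rank` correction obtained from a truncated SVD."""
    U, S, Vt = np.linalg.svd(W - W_hat, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 64))
X = rng.normal(size=(8, 64))
X[:, 3] *= 40.0                      # inject an outlier channel (a "heavy tail")

# Because Q is orthogonal, (X @ Q) @ (W @ Q).T == X @ W.T, so the rotation is
# free accuracy-wise; it only reshapes the distributions seen by the quantizer.
Q = random_orthogonal(64)
Wr, Xr = W @ Q, X @ Q
print("max |activation| before vs. after rotation:", np.abs(X).max(), np.abs(Xr).max())

alpha = np.mean(np.abs(Wr), axis=1, keepdims=True)
W_hat = alpha * np.sign(Wr)          # 1-bit reconstruction of the rotated weights
W_fixed = W_hat + low_rank_residual(Wr, W_hat, rank=4)
print("binarization error:           ", np.linalg.norm(Wr - W_hat))
print("error after rank-4 correction:", np.linalg.norm(Wr - W_fixed))
```

The SVD step strictly reduces the Frobenius error of the binarized weights; keeping the rank small is what keeps such a refinement lightweight, in line with the abstract's "minimal overhead" claim.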