LBLLM: Lightweight Binarization of Large Language Models via Three-Stage Distillation

arXiv cs.LG / 4/22/2026


Key Points

  • LBLLM is a lightweight binarization/quantization framework designed to make large language models practical in resource-constrained environments by reducing model size and compute needs.
  • It uses a three-stage strategy: PTQ-based initialization, layer-wise distillation for binarized weights and related parameters (while keeping activations full precision), and then learning activation quantization factors to target 4-bit activations.
  • The approach explicitly decouples weight quantization from activation quantization to reduce interference, improving training stability and inference accuracy.
  • The authors report strong results after training on only 0.016B (16 million) tokens with a single GPU, outperforming prior binarization methods in the W2A4 setting across language modeling, commonsense QA, and language understanding.
  • The method aims to achieve extreme low-bit quantization without adding extra high-precision channels or certain PTQ-specific components (e.g., rotational matrices) used in some recent work.
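To make the weight side of the key points concrete, here is a minimal pure-Python sketch of group-wise residual binarization, one plausible reading of the "W(1+1)" format: a 1-bit sign base plus a 1-bit residual bitmap, averaging roughly 2 bits per weight. The function names and the mean-absolute-value scales are illustrative assumptions, not LBLLM's actual algorithm.

```python
from statistics import mean

def binarize_group(g):
    """One binary base for a weight group: b = sign(g), with scale
    a = mean(|g|), which is the L2-optimal scale for a fixed sign base."""
    b = [1.0 if x >= 0 else -1.0 for x in g]
    a = mean(abs(x) for x in g)
    return b, a

def binarize_residual(g):
    """Hypothetical 'W(1+1)' sketch: approximate a group as
    g ~ a1*b1 + a2*b2, where b2 binarizes the residual left by the
    first base. Each b costs 1 bit per weight, ~2 bits total."""
    b1, a1 = binarize_group(g)
    r = [x - a1 * s for x, s in zip(g, b1)]  # residual after first base
    b2, a2 = binarize_group(r)
    return (b1, a1), (b2, a2)

def dequant(bases, n):
    """Reconstruct the group from its binary bases and scales."""
    out = [0.0] * n
    for b, a in bases:
        out = [o + a * s for o, s in zip(out, b)]
    return out

# Example: the second (residual) base always tightens the approximation.
g = [0.4, -1.2, 0.7, -0.1]
bases = binarize_residual(g)
w_hat = dequant(bases, len(g))
```

The point of the second base is that binarizing the residual can never increase the squared reconstruction error, which is why a 1-bit bitmap on top of 1-bit signs buys meaningful accuracy at still-extreme compression.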

Abstract

Deploying large language models (LLMs) in resource-constrained environments is hindered by their heavy computational and memory requirements. We present LBLLM, a lightweight binarization framework that achieves effective W(1+1)A4 quantization through a novel three-stage quantization strategy. The framework proceeds as follows: (1) initialize a high-quality quantized model via PTQ; (2) learn binarized weights, group-wise bitmaps, and quantization parameters through layer-wise distillation while keeping activations in full precision; and (3) train learnable activation quantization factors to dynamically quantize activations to 4 bits. This decoupled design mitigates interference between weight and activation quantization, yielding greater training stability and better inference accuracy. Trained on only 0.016B tokens with a single GPU, LBLLM surpasses existing state-of-the-art binarization methods in the W2A4 quantization setting across language modeling, commonsense QA, and language understanding tasks. These results demonstrate that extreme low-bit quantization of LLMs can be both practical and highly effective without the extra high-precision channels or rotational matrices commonly used in recent PTQ-based works, offering a promising path toward efficient LLM deployment in resource-limited settings.
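Stage (3) above boils down to learning a scale factor for symmetric 4-bit activation quantization. The sketch below fake-quantizes an activation with a given scale, and stands in for the learned factor with a brute-force MSE search over candidate scales; the function names and the grid-search calibration are assumptions for illustration (LBLLM learns these factors during training, not by grid search).

```python
def fake_quant(x, scale, bits=4):
    """Symmetric uniform fake-quantization: quantize to a signed integer
    grid, then dequantize. 4 signed bits give integer levels in [-8, 7]."""
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit signed
    q = round(x / scale)
    q = max(-qmax - 1, min(qmax, q))    # clip to the representable range
    return q * scale

def calibrate_scale(xs, candidates):
    """Toy stand-in for a learned activation scale: pick the candidate
    that minimizes mean squared quantization error on sample activations."""
    def mse(s):
        return sum((x - fake_quant(x, s)) ** 2 for x in xs) / len(xs)
    return min(candidates, key=mse)

# Example: an out-of-range activation saturates at 7 * scale.
print(fake_quant(2.0, 0.25))   # clipped to the top 4-bit level, 1.75
print(fake_quant(-0.5, 0.25))  # exactly representable, -0.5
```

Keeping this step separate from weight binarization (stage 2 runs with full-precision activations) is what the abstract means by decoupling: the weight and activation quantizers never have to adapt to each other's error simultaneously.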