RUQuant: Towards Refining Uniform Quantization for Large Language Models

arXiv cs.CL / 4/7/2026


Key Points

  • The paper attributes the main cause of accuracy degradation in post-training quantization (PTQ) of LLMs to the non-uniform distribution of activation values within each quantization interval, and revisits the problem theoretically through the Lloyd-Max optimality conditions.
  • The proposed method, RUQuant, applies a two-stage transformation: activations are divided into blocks, and each block is mapped to uniformly sampled target vectors via composite orthogonal matrices built from Householder reflections and Givens rotations.
  • In the second stage, a global Householder reflection is fine-tuned using Transformer output discrepancies to further reduce quantization error.
  • Experiments show that, without fine-tuning, the method reaches 99.8% of full-precision performance at W6A6 and 97% at W4A4 on a 13B LLM in about one minute; a fine-tuned variant achieves even higher accuracy.
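
The Lloyd-Max point in the first bullet can be illustrated numerically: for a skewed activation distribution, the optimal reconstruction level of a quantization bin is the conditional mean of the samples falling in it, which shifts away from the bin midpoint that a uniform quantizer uses. The distribution and bin edges below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): compare a uniform
# quantizer's bin midpoint with the Lloyd-Max optimal reconstruction
# level (the conditional mean) for a skewed distribution.
rng = np.random.default_rng(0)
acts = rng.exponential(scale=1.0, size=100_000)  # skewed "activations"

lo, hi = 1.0, 2.0                  # one quantization bin [lo, hi)
in_bin = acts[(acts >= lo) & (acts < hi)]
midpoint = (lo + hi) / 2           # uniform quantizer's reconstruction
lloyd_max = in_bin.mean()          # Lloyd-Max optimal reconstruction

# The density decays across the bin, so mass concentrates near the
# lower edge and the optimal point sits below the midpoint
# (analytically ≈ 1.418 for Exp(1) on [1, 2)).
print(midpoint, lloyd_max)
```

Mapping activations toward a uniform distribution, as RUQuant does, is what lets the midpoint and the Lloyd-Max point coincide again.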

Abstract

The increasing size and complexity of large language models (LLMs) have raised significant challenges in deployment efficiency, particularly under resource constraints. Post-training quantization (PTQ) has emerged as a practical solution by compressing models without requiring retraining. While existing methods focus on uniform quantization schemes for both weights and activations, they often suffer from substantial accuracy degradation due to the non-uniform nature of activation distributions. In this work, we revisit the activation quantization problem from a theoretical perspective grounded in the Lloyd-Max optimality conditions. We identify the core issue as the non-uniform distribution of activations within the quantization interval, which causes the optimal quantization point under the Lloyd-Max criterion to shift away from the midpoint of the interval. To address this issue, we propose a two-stage orthogonal transformation method, RUQuant. In the first stage, activations are divided into blocks. Each block is mapped to uniformly sampled target vectors using composite orthogonal matrices, which are constructed from Householder reflections and Givens rotations. In the second stage, a global Householder reflection is fine-tuned to further minimize quantization error using Transformer output discrepancies. Empirical results show that our method achieves near-optimal quantization performance without requiring model fine-tuning: RUQuant achieves 99.8% of full-precision accuracy with W6A6 and 97% with W4A4 quantization for a 13B LLM, within approximately one minute. A fine-tuned variant yields even higher accuracy, demonstrating the effectiveness and scalability of our approach.
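
The composite orthogonal matrices mentioned in the abstract can be sketched as follows. The helper names `householder` and `givens` are hypothetical, and the composition shown is only a minimal illustration of why such products are orthogonal and norm-preserving, not RUQuant's actual block mapping.

```python
import numpy as np

# Sketch (assumed construction, not the paper's code): compose a
# Householder reflection with a Givens rotation. Each factor is
# orthogonal, so the product Q is too, and applying Q to an activation
# block changes its coordinate distribution without changing its norm.

def householder(v):
    """H = I - 2 vv^T / ||v||^2: reflection across the hyperplane ⟂ v."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def givens(n, i, j, theta):
    """Rotation by theta in the (i, j) coordinate plane of R^n."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i] = g[j, j] = c
    g[i, j], g[j, i] = -s, s
    return g

n = 8
rng = np.random.default_rng(0)
Q = householder(rng.standard_normal(n)) @ givens(n, 0, 3, 0.7)

x = rng.standard_normal(n)   # one activation block
y = Q @ x                    # transformed block: same norm, reshaped coords
```

Because Q is orthogonal, the transform is exactly invertible and adds no reconstruction error of its own; all of its effect goes into reshaping the per-coordinate distribution seen by the quantizer.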