AGoQ: Activation and Gradient Quantization for Memory-Efficient Distributed Training of LLMs

arXiv cs.CL / 5/4/2026


Key Points

  • The paper proposes AGoQ, a quantization approach aimed at lowering GPU memory use during LLM distributed training, especially when targeting ~4-bit activations and 8-bit gradients.
  • AGoQ uses a layer-aware activation quantization scheme that assigns different bit-widths to activation tensors based on layer type and pipeline stage to maintain training quality (a minimal sketch follows this list).
  • For gradients, it introduces 8-bit gradient storage combined with a precision-preserving 8-bit All-Reduce to cut both memory footprint and communication time (see the sketch after the abstract).
  • Experiments on two GPU clusters (up to 64 GPUs) across multiple LLM sizes show AGoQ can cut memory usage by up to 52% and improve training speed by up to 1.34× versus systems including Megatron-LM (with/without ZeRO), COAT, and DeepSpeed.
  • The reported results indicate AGoQ maintains convergence during pretraining and yields comparable accuracy on downstream tasks for LLaMA-style architectures from 8B to 32B parameters.
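
To make the layer-aware allocation concrete, here is a minimal sketch in PyTorch of how per-layer bit-widths might be assigned and applied. The paper does not publish its allocation rules, so the bit-width table, the stage adjustment, and the symmetric per-tensor quantizer below are illustrative assumptions, not AGoQ's actual algorithm.

```python
# A minimal sketch, assuming a simple policy the summary does not spell out:
# the bit-width table, the stage adjustment, and the symmetric per-tensor
# quantizer are illustrative placeholders, not the paper's algorithm.
import torch

# Assumed allocation: precision-sensitive layers keep more bits.
BITS_BY_LAYER_TYPE = {"linear": 4, "attention": 6, "layernorm": 8}

def activation_bits(layer_type: str, stage: int, num_stages: int) -> int:
    """Pick a bit-width from layer type and pipeline stage (hypothetical rule)."""
    base = BITS_BY_LAYER_TYPE.get(layer_type, 4)
    # Assumed stage rule: the last stage's activations get one extra bit.
    return min(8, base + (1 if stage == num_stages - 1 else 0))

def quantize_activation(x: torch.Tensor, bits: int):
    """Symmetric per-tensor quantization to `bits` signed integer levels."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale  # an int8 container holds any width up to 8 bits

def dequantize_activation(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

# Example: quantize one linear layer's activations on pipeline stage 0 of 4.
x = torch.randn(4, 16)
q, s = quantize_activation(x, activation_bits("linear", 0, 4))
x_hat = dequantize_activation(q, s)
print((x - x_hat).abs().max())  # quantization error
```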

Abstract

Quantization is a key method for reducing the GPU memory requirement of training large language models (LLMs). Yet current approaches are ineffective at 4-bit activations and 8-bit gradients, which readily cause slow convergence or accuracy loss. To address this, we introduce AGoQ, which incorporates two new techniques: 1) a layer-aware activation quantization algorithm that allocates appropriate bit-widths to the activations of different layers based on their types and pipeline stages, achieving near-4-bit activation storage, and 2) a gradient quantization algorithm that reduces memory usage and shortens communication time by employing 8-bit gradient storage and precision-preserving 8-bit All-Reduce communication. We conduct extensive experiments with LLMs of different sizes on two GPU clusters (up to 64 GPUs). The results show that AGoQ reduces memory usage by up to 52% and improves training speed by up to 1.34× compared with the state-of-the-art training systems Megatron-LM (with or without ZeRO), COAT, and DeepSpeed on 8B to 32B LLaMA models, while achieving comparable convergence loss on pretraining and comparable accuracy on downstream tasks with LLaMA architectures.
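
As a companion to the gradient-side claim, below is a minimal single-process simulation of an 8-bit All-Reduce that preserves precision by summing int8 payloads in int32 against a shared scale. The shared-scale trick and every name here are assumptions for illustration; the paper's actual precision-preserving scheme may differ.

```python
# A minimal sketch, assuming one common precision-preserving trick:
# quantize each worker's gradient to int8 against a shared scale,
# accumulate in int32 so nothing overflows, and dequantize once.
# All names and choices are illustrative, not AGoQ's implementation.
import torch

def quantize_grad(g: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return torch.clamp(torch.round(g / scale), -128, 127).to(torch.int8)

def int8_all_reduce(grads: list[torch.Tensor]) -> torch.Tensor:
    # Shared scale from the global max magnitude across workers; in a real
    # cluster this would be one scalar All-Reduce(MAX) before the payload.
    scale = max(g.abs().max() for g in grads).clamp(min=1e-8) / 127
    # Sum the int8 payloads in int32 to preserve precision, then average.
    acc = torch.zeros_like(grads[0], dtype=torch.int32)
    for g in grads:
        acc += quantize_grad(g, scale).to(torch.int32)
    return acc.to(torch.float32) * scale / len(grads)

# Simulate 8 workers averaging their gradients through the int8 path.
workers = [torch.randn(1024) for _ in range(8)]
avg = int8_all_reduce(workers)
exact = torch.stack(workers).mean(dim=0)
print((avg - exact).abs().max())  # small residual quantization error
```

Summing in int32 rather than int8 is what keeps the reduction lossless apart from the initial rounding, since int8 partial sums would overflow as the worker count grows.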