STQuant: Spatio-Temporal Adaptive Framework for Optimizer Quantization in Large Multimodal Model Training

arXiv cs.LG / 4/9/2026


Key Points

  • The paper introduces STQuant, a spatio-temporal adaptive quantization framework for large multimodal model training that varies optimizer-state precision across layers, optimizer variables, and training steps instead of using fixed bit-width policies.
  • It argues that naïve dynamic quantization is difficult because optimizer states are numerically sensitive and because jointly adapting multiple factors creates a combinatorial search problem.
  • STQuant addresses these issues with a provably near-optimal factor-selection strategy to identify the most influential precision-adaptation factors and a dynamic transition decision algorithm that reduces search complexity from exponential to linear.
  • Experiments on GPT-2 and ViT report an 84.4% reduction in optimizer-state memory and an average bit-width as low as 5.1 bits while maintaining model quality.
  • The method is designed to be practical for distributed training, adding only O(N/K) computational overhead and requiring O(1) extra memory.
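The summary above does not spell out how STQuant picks a precision for each layer and optimizer variable. As a rough illustration of the general idea, here is a minimal sketch of error-driven per-tensor bit-width selection: quantize an optimizer-state tensor at candidate bit-widths and keep the smallest one whose relative quantization error stays under a tolerance. The absmax quantizer, the candidate set `(4, 6, 8)`, and the error threshold are all hypothetical stand-ins, not the paper's actual selection rule.

```python
import numpy as np

def quantize_dequantize(x, bits):
    """Uniform symmetric (absmax) quantization to `bits` bits, then
    dequantization back to float. A generic scheme, not STQuant's."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / levels
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale

def choose_bitwidth(state, candidates=(4, 6, 8), tol=1e-2):
    """Pick the smallest candidate bit-width whose relative L2
    quantization error is below `tol`. The criterion and candidate
    set are illustrative assumptions."""
    norm = np.linalg.norm(state) + 1e-12
    for bits in candidates:
        err = np.linalg.norm(state - quantize_dequantize(state, bits)) / norm
        if err < tol:
            return bits
    return candidates[-1]
```

A near-uniform tensor can be represented almost exactly at 4 bits, while a heavy-tailed Gaussian-like second-moment estimate typically needs more; applying such a rule per layer and per state variable is what yields a non-uniform average bit-width like the 5.1 bits reported.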

Abstract

Quantization is an effective way to reduce the memory cost of large-scale model training. However, most existing methods adopt fixed-precision policies, which ignore the fact that optimizer-state distributions vary significantly across layers and training steps. Such uniform designs often introduce noticeable accuracy degradation. To move beyond fixed quantization, we propose STQuant, a distributed training framework that reduces the memory footprint of optimizer states via dynamic precision allocation across layers, state variables, and training steps, while maintaining model quality. Naively applying dynamic quantization during training is challenging for two reasons. First, optimizer states are numerically sensitive, and quantization noise can destabilize quality. Second, jointly considering multiple states and layers induces a large combinatorial search space. STQuant addresses these challenges with two key techniques: (1) a provably near-optimal factor-selection strategy that accurately identifies the most influential factors for precision adaptation, and (2) a dynamic transition decision algorithm that reduces the search cost from exponential to linear complexity. Experiments on GPT-2 and ViT show that STQuant reduces optimizer-state memory by 84.4% compared with existing solutions, achieving an average bit-width as low as 5.1 bits. Moreover, STQuant incurs only O(N/K) computational overhead and requires O(1) extra space.
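To see why the exponential-to-linear reduction matters: jointly choosing a bit-width for each of N layers means enumerating an exponential number of configurations, whereas a per-layer decision rule costs only linear work. The sketch below shows a generic greedy stand-in for such a rule: rank layers by a sensitivity score and give high precision to only the most sensitive ones. The scoring, the two-level precision set, and the greedy rule are illustrative assumptions; the paper's actual transition decision algorithm is not described in this summary.

```python
def allocate_precision(sensitivity, n_high, low_bits=4, high_bits=8):
    """Greedy per-layer precision allocation (hypothetical stand-in for
    STQuant's transition decision algorithm): assign `high_bits` to the
    `n_high` layers with the largest sensitivity scores and `low_bits`
    to the rest. Runs in O(N log N) for N layers, versus O(2^N) for
    exhaustive joint enumeration of two-level assignments."""
    order = sorted(range(len(sensitivity)), key=lambda i: -sensitivity[i])
    bits = [low_bits] * len(sensitivity)
    for i in order[:n_high]:
        bits[i] = high_bits
    return bits

# Example: with three layers and a budget of one high-precision slot,
# the most sensitive layer gets 8 bits and the others get 4.
plan = allocate_precision([0.1, 0.9, 0.5], n_high=1)
```

Averaging the resulting per-layer bit-widths is how a fractional figure such as 5.1 bits can arise from integer precision choices.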