AI Navigate

Understanding Quantization of Optimizer States in LLM Pre-training: Dynamics of State Staleness and Effectiveness of State Resets

arXiv cs.LG / 3/18/2026


Key Points

  • The paper studies optimizer states stored as low-precision exponential moving averages (EMAs) and shows that quantization can cause nominal updates to round back to the same stored value, effectively stalling the state.
  • The study develops a predictive model of one-step stalling probabilities and describes how stalling accumulates over time after initialization.
  • It provides a mechanistic explanation for why resets of optimizer state help under low precision: when the quantized EMA becomes stale, resets can restore responsiveness.
  • A theory-guided method for selecting reset periods is derived, emphasizing when resets should be applied rather than merely whether they help.
  • Experiments in controlled simulations and LLM pre-training demonstrate that suitable reset schedules recover the performance lost to low-precision state storage while substantially reducing optimizer-state memory.
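The stalling mechanism in the first bullet can be reproduced in a few lines. The sketch below is illustrative only (the grid spacing and EMA coefficient are arbitrary choices, not the paper's settings): the state is re-quantized to a coarse grid after every update, so small EMA increments round back to the stored value and the state never moves, while the full-precision EMA tracks the gradient mean.

```python
import numpy as np

def quantize(x, step=1/16):
    # Round-to-nearest on a coarse grid: a stand-in for low-precision storage.
    return np.round(x / step) * step

def ema(grads, beta=0.99, step=None):
    # Exponential moving average; if `step` is set, the state is
    # re-quantized after every update, as with a low-precision buffer.
    m = 0.0
    for g in grads:
        m = beta * m + (1 - beta) * g
        if step is not None:
            m = quantize(m, step)
    return m

rng = np.random.default_rng(0)
grads = rng.normal(0.5, 0.1, size=2000)  # gradients with a clear signal

print(ema(grads))             # full precision: tracks the mean, ~0.5
print(ema(grads, step=1/16))  # quantized: every update rounds back to 0.0
```

Because each increment (1 − β)·g ≈ 0.005 is far below half a grid cell (1/32 ≈ 0.031), every update rounds back to the stored value: the state is effectively frozen, and the realized decay is far slower than the nominal β suggests.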

Abstract

Quantizing optimizer states is becoming an important ingredient of memory-efficient large-scale pre-training, but the resulting optimizer dynamics remain only partially understood. We study low-precision exponential moving average (EMA) optimizer states and show how quantization can cause many nominal updates to round back to the same stored value, making the state effectively stale and slowing adaptation beyond what the nominal decay would suggest. We then develop a simple predictive model of stalling that estimates one-step stalling probabilities and characterizes how stalling builds up over time after initialization. This perspective provides a mechanistic explanation for why optimizer-state resets help in low precision: once a quantized EMA becomes effectively stale, resetting it can temporarily restore responsiveness. Motivated by this picture, we derive a simple theory-guided method for choosing useful reset periods, showing that in low precision the key question is not only whether resets help, but when they should be applied. Experiments in controlled simulations and LLM pre-training show that suitable reset schedules recover the performance lost to low-precision state storage while substantially reducing optimizer-state memory.
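As a rough illustration of what a one-step stalling probability model can look like (this back-of-envelope version is my own construction, not the paper's formulation): if the stored state m already lies on a quantization grid with spacing Δ and gradients are modeled as g ~ N(μ, σ²), round-to-nearest leaves the state unchanged whenever |(1 − β)(g − m)| < Δ/2, which yields a closed-form Gaussian probability that can be checked by Monte Carlo.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def stall_prob(m, mu, sigma, beta=0.99, step=1/16):
    # P(state unchanged in one step) = P(|(1 - beta) * (g - m)| < step / 2)
    # for g ~ N(mu, sigma^2), assuming m is already on the grid.
    half_window = (step / 2) / (1 - beta)
    lo, hi = m - half_window, m + half_window
    return norm_cdf((hi - mu) / sigma) - norm_cdf((lo - mu) / sigma)

# Monte Carlo check of the closed-form prediction.
rng = np.random.default_rng(1)
beta, step, m, mu, sigma = 0.99, 1/16, 0.0, 0.5, 1.0
g = rng.normal(mu, sigma, size=200_000)
m_new = np.round((beta * m + (1 - beta) * g) / step) * step
print(np.mean(m_new == m), stall_prob(m, mu, sigma, beta, step))
```

Note the scale mismatch this exposes: with β = 0.99 and Δ = 1/16, the rounding window spans ±3.125σ of the gradient distribution here, so the state almost never moves in a single step. That is exactly the regime in which periodic state resets become the practical lever.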