GradPower: Powering Gradients for Faster Language Model Pre-Training

arXiv stat.ML / 4/3/2026


Key Points

  • GradPower is proposed as a lightweight gradient-transformation method that applies a sign-power elementwise transform to gradients before passing them to an existing optimizer, requiring only a single-line code change.
  • Experiments show consistent improvements in terminal loss when using Adam with the technique (AdamPower) across multiple LLM architectures (including LLaMA and Qwen2MoE), model sizes (66M–2B), datasets (C4, OpenWebText), and learning-rate schedules.
  • The largest benefits are reported for modern mixture-of-experts training, especially with warmup-stable-decay learning-rate schedules.
  • The approach is claimed to be compatible with other optimizers (e.g., Muon), and the paper provides theoretical analysis of the method's underlying mechanism and of how gradient noise influences its gains.

Abstract

We propose GradPower, a lightweight gradient-transformation technique for accelerating language model pre-training. Given a gradient vector g=(g_i)_i, GradPower first applies the elementwise sign-power transformation: \varphi_p(g)=({\rm sign}(g_i)|g_i|^p)_{i} for a fixed p>0, and then feeds the transformed gradient into a base optimizer. Notably, GradPower requires only a single-line code change and no modifications to the base optimizer's internal logic, including the hyperparameters. When applied to Adam (termed AdamPower), GradPower consistently achieves lower terminal loss across diverse architectures (LLaMA, Qwen2MoE), parameter scales (66M to 2B), datasets (C4, OpenWebText), and learning-rate schedules (cosine, warmup-stable-decay). The most pronounced gains are observed when training modern mixture-of-experts models with warmup-stable-decay schedules. GradPower also integrates seamlessly with other state-of-the-art optimizers, such as Muon, yielding further improvements. Finally, we provide theoretical analyses that reveal the underlying mechanism of GradPower and highlight the influence of gradient noise.
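
The transform described in the abstract is simple enough to sketch directly. Below is a minimal NumPy illustration of the sign-power map \varphi_p and of using it in front of a plain gradient-descent step; the function names and the choice p=1.2 are our own for illustration, and the paper applies the transform before a full base optimizer such as Adam or Muon, not bare SGD.

```python
import numpy as np

def gradpower(g, p=1.2):
    """Elementwise sign-power transform: phi_p(g)_i = sign(g_i) * |g_i|^p.

    p is a fixed hyperparameter (p=1.2 here is an illustrative value,
    not one prescribed by the paper)."""
    return np.sign(g) * np.abs(g) ** p

def sgd_step(params, grad, lr=0.1, p=1.2):
    """Toy gradient-descent step on the transformed gradient.

    In the paper's setting, gradpower(grad, p) would instead be fed to
    the base optimizer (e.g., Adam), with no other changes."""
    return params - lr * gradpower(grad, p)

g = np.array([0.04, -0.25, 1.0])
print(gradpower(g, p=1.0))  # p=1 recovers the original gradient exactly
print(gradpower(g, p=1.2))  # p>1 shrinks small entries relative to large ones
```

Note that \varphi_1 is the identity, so the base optimizer is recovered at p=1; signs are always preserved, and only gradient magnitudes are reshaped, which is why the change amounts to a single line in an existing training loop.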