Learning Rate Transfer in Normalized Transformers

arXiv cs.LG / 5/1/2026


Key Points

  • The paper reports that the Normalized Transformer (nGPT) achieves training speedups without weight decay or learning-rate warmup, yet fails to transfer a tuned learning rate across model dimension (width) and token horizon.
  • To address this, the authors revisit the μP (Maximal Update Parametrization) approach to hyperparameter transfer and modify it using alignment exponents, combining theory with numerical experiments; a sketch of the underlying μP learning-rate rule follows this list.
  • The result is a new nGPT parameterization, called νGPT, designed so that learning-rate settings transfer across model scales.
  • Extensive empirical evaluation shows that νGPT achieves learning-rate transfer across width, depth, and token horizon, indicating more reliable scaling behavior for training.
  • Overall, the work provides a concrete parameterization strategy to make transformer optimization hyperparameters more transferable across different model scales and sequence lengths.
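
For readers unfamiliar with μP, its core learning-rate rule is that per-layer Adam learning rates for matrix-like (hidden) parameters shrink with width, so a rate tuned on a small proxy model carries over to larger ones. Below is a minimal Python sketch of that rule as commonly stated for Adam; the function name, the proxy widths, and the ndim-based parameter classification are illustrative assumptions, and this is not the paper's νGPT parameterization.

```python
import torch
import torch.nn as nn

def mup_param_groups(model: nn.Module, base_lr: float,
                     base_width: int, width: int):
    # muP rule for Adam: hidden (matrix-like) parameters get their
    # learning rate scaled by base_width / width, so a base_lr tuned
    # at base_width transfers to larger widths; vector-like parameters
    # (biases, norm gains) keep base_lr unchanged. Classifying by ndim
    # is a simplification -- full muP treats embedding and output
    # layers separately.
    hidden, vector = [], []
    for p in model.parameters():
        (hidden if p.ndim >= 2 else vector).append(p)
    return [
        {"params": hidden, "lr": base_lr * base_width / width},
        {"params": vector, "lr": base_lr},
    ]

# Tune base_lr once at a small proxy width, then reuse it at scale.
width = 1024
model = nn.Sequential(nn.Linear(width, width), nn.GELU(),
                      nn.Linear(width, width))
opt = torch.optim.Adam(mup_param_groups(model, base_lr=3e-3,
                                        base_width=256, width=width))
```

Full μP also prescribes width-dependent initialization scales; the sketch covers only the learning-rate part most relevant to the transfer question the paper studies.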

Abstract

The Normalized Transformer, or nGPT (arXiv:2410.01131), achieves impressive training speedups and does not require weight decay or learning rate warmup. However, despite having hyperparameters that explicitly scale with model size, we observe that nGPT does not exhibit learning rate transfer across model dimension and token horizon. To rectify this, we combine numerical experiments with a principled use of alignment exponents (arXiv:2407.05872) to revisit and modify the μP approach to hyperparameter transfer (arXiv:2011.14522). The result is a novel nGPT parameterization we call νGPT. Through extensive empirical validation, we find that νGPT exhibits learning rate transfer across width, depth, and token horizon.
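
For context, the central device of the nGPT paper is keeping weights and representations on the unit hypersphere, which is what removes the need for weight decay and warmup. The following is a minimal sketch of that renormalization step under simplifying assumptions (every matrix parameter is normalized along its last axis; the actual per-layer axes, and the modifications νGPT makes, differ):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def renormalize_weights(model: torch.nn.Module) -> None:
    # After each optimizer step, project every matrix parameter back
    # onto the unit hypersphere by normalizing along its last axis.
    # Keeping weights at unit norm is what lets nGPT drop weight decay
    # and warmup; treating the last axis as the embedding axis for all
    # layers is an assumption made for this sketch.
    for p in model.parameters():
        if p.ndim >= 2:
            p.copy_(F.normalize(p, dim=-1))

# Sketch of one training step under this scheme (names assumed):
#   loss.backward(); optimizer.step(); renormalize_weights(model)
```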