Abstract
The Normalized Transformer, or nGPT (arXiv:2410.01131), achieves impressive training speedups and requires neither weight decay nor learning rate warmup. However, despite having hyperparameters that explicitly scale with model size, we observe that nGPT does not exhibit learning rate transfer across model dimension or token horizon. To rectify this, we combine numerical experiments with a principled use of alignment exponents (arXiv:2407.05872) to revisit and modify the \muP approach to hyperparameter transfer (arXiv:2011.14522). The result is a novel nGPT parameterization we call uGPT. Through extensive empirical validation, we find that uGPT exhibits learning rate transfer across width, depth, and token horizon.