AI Navigate

Deriving Hyperparameter Scaling Laws via Modern Optimization Theory

arXiv cs.LG / March 18, 2026


Key Points

  • The paper derives hyperparameter scaling laws for modern first-order optimizers by analyzing convergence bounds within the Linear Minimization Oracle (LMO) framework, covering optimizers like normalized SGD, signSGD, and Muon.
  • Treating these bounds as proxies, the authors obtain closed-form power-law schedules for learning rate, momentum, and batch size as functions of iteration or token budget.
  • With model size fixed, the analysis recovers known insights from the literature under a unified perspective and highlights the interaction between momentum and batch-size scaling.
  • The results indicate multiple viable scaling strategies for achieving optimal performance and outline directions for future research.
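A power-law schedule of the kind described above can be sketched generically. The exponents below are illustrative placeholders, not the paper's derived values, which depend on the specific convergence bound and tuning regime:

```python
import numpy as np

def power_law_schedule(T, eta0=1.0, alpha=0.5):
    """Illustrative power-law schedule eta_t = eta0 * (t + 1)^(-alpha).

    alpha is a hypothetical exponent chosen for demonstration; the paper
    derives its exponents by minimizing optimizer convergence bounds,
    which are not reproduced here.
    """
    t = np.arange(T)  # iteration index 0, ..., T-1
    return eta0 * (t + 1) ** (-alpha)

# Example: a 10-step schedule decaying as 1/sqrt(t + 1)
lrs = power_law_schedule(10)
```

Analogous power laws would govern momentum and batch size as functions of the iteration or token budget, each with its own exponent.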

Abstract

Hyperparameter transfer has become an important component of modern large-scale training recipes. Existing methods, such as muP, primarily focus on transfer between model sizes, with transfer across batch sizes and training horizons often relying on empirical scaling rules informed by insights from timescale preservation, quadratic proxies, and continuous-time approximations. We study hyperparameter scaling laws for modern first-order optimizers through the lens of recent convergence bounds for methods based on the Linear Minimization Oracle (LMO), a framework that includes normalized SGD, signSGD (approximating Adam), and Muon. Treating bounds in recent literature as a proxy and minimizing them across different tuning regimes yields closed-form power-law schedules for learning rate, momentum, and batch size as functions of the iteration or token budget. Our analysis, holding model size fixed, recovers most insights and observations from the literature under a unified and principled perspective, with clear directions open for future research. Our results draw particular attention to the interaction between momentum and batch-size scaling, suggesting that optimal performance may be achieved with several scaling strategies.
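To make the LMO framework concrete: a Linear Minimization Oracle returns the point in a constraint set that minimizes the inner product with the gradient. For an ℓ∞ ball this recovers a signSGD-style step, and for an ℓ2 ball a normalized-SGD-style step; the sketch below shows these two standard instances (Muon's spectral-norm LMO over matrices is omitted for brevity):

```python
import numpy as np

def lmo_linf(g, radius=1.0):
    # LMO over the l_inf ball of given radius:
    # argmin_{||d||_inf <= r} <g, d> = -r * sign(g)  -> signSGD-style direction
    return -radius * np.sign(g)

def lmo_l2(g, radius=1.0):
    # LMO over the l_2 ball of given radius:
    # argmin_{||d||_2 <= r} <g, d> = -r * g / ||g||_2  -> normalized-SGD direction
    return -radius * g / (np.linalg.norm(g) + 1e-12)  # eps guards against g = 0

g = np.array([0.5, -2.0, 0.0])  # toy gradient
```

In LMO-based methods, the update direction comes from the oracle while the learning rate scales the step; the paper's schedules arise from minimizing convergence bounds for this family of updates.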