AI Navigate

What do near-optimal learning rate schedules look like?

arXiv cs.LG / 3/12/2026


Key Points

  • The paper designs a search procedure to find near-optimal learning-rate schedule shapes within a parameterized family and factors out the base learning rate to enable fair comparisons.
  • It evaluates the approach on three workloads—linear regression, CIFAR-10 image classification, and Wikitext-103 language modeling—finding near-optimal schedules in practice.
  • The results show warmup and decay are robust features of good schedules, while commonly used schedule families are not optimal for these workloads.
  • Weight decay can strongly affect the optimal schedule shape, revealing important interactions between hyperparameters.
  • The authors claim these results constitute some of the most comprehensive findings on near-optimal schedule shapes for deep neural network training to date.

Abstract

A basic unanswered question in neural network training is: what is the best learning rate schedule shape for a given workload? The choice of learning rate schedule is a key factor in the success or failure of the training process, but beyond having some kind of warmup and decay, there is no consensus on what makes a good schedule shape. To answer this question, we designed a search procedure to find the best shapes within a parameterized schedule family. Our approach factors out the schedule shape from the base learning rate, which otherwise would dominate cross-schedule comparisons. We applied our search procedure to a variety of schedule families on three workloads: linear regression, image classification on CIFAR-10, and small-scale language modeling on Wikitext-103. We showed that our search procedure indeed generally found near-optimal schedules. We found that warmup and decay are robust features of good schedules, and that commonly used schedule families are not optimal on these workloads. Finally, we explored how the outputs of our shape search depend on other optimization hyperparameters, and found that weight decay can have a strong effect on the optimal schedule shape. To the best of our knowledge, these are the most comprehensive results on near-optimal schedule shapes for deep neural network training to date.
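To make the "factoring out" idea concrete, here is a minimal sketch of how a schedule shape can be separated from the base learning rate. This is an illustration, not the paper's method: the function names (`warmup_cosine_shape`, `learning_rate`) and the specific warmup-then-cosine family are assumptions chosen because warmup and decay are the robust features the paper identifies.

```python
import math

def warmup_cosine_shape(t, total_steps, warmup_frac=0.1):
    """Schedule *shape*: linear warmup then cosine decay, normalized to peak at 1.0.

    Normalizing the shape this way lets the base learning rate be tuned
    separately, so different shapes can be compared fairly.
    """
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if t < warmup_steps:
        return t / warmup_steps  # linear warmup from 0.0 toward 1.0
    progress = (t - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay from 1.0 to 0.0

def learning_rate(t, total_steps, base_lr=0.1):
    # Actual learning rate = separately tuned base rate x normalized shape.
    return base_lr * warmup_cosine_shape(t, total_steps)
```

Because the shape always peaks at 1.0, two candidate shapes can each be paired with their own best base learning rate before comparison, which is the fairness property the abstract describes.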