What do near-optimal learning rate schedules look like?
arXiv cs.LG / 3/12/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper designs a search procedure to find near-optimal learning-rate schedule shapes within a parameterized family, factoring out the base learning rate so that schedule shapes can be compared fairly (a sketch of this decomposition follows the list).
- It evaluates the approach on three workloads—linear regression, CIFAR-10 image classification, and Wikitext-103 language modeling—finding near-optimal schedules in practice.
- The results show warmup and decay are robust features of good schedules, while commonly used schedule families are not optimal for these workloads.
- Weight decay can strongly affect the optimal schedule shape, revealing important interactions between hyperparameters.
- The authors claim these results constitute some of the most comprehensive findings on near-optimal schedule shapes for deep neural network training to date.
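To make the shape-vs-base-rate decomposition concrete, here is a minimal sketch of the idea, not the paper's actual parameterization: the schedule is a shape function normalized to a peak of 1 (here an assumed linear warmup followed by polynomial decay, with illustrative parameters `warmup_frac` and `decay_power`), and the effective learning rate is that shape scaled by a base learning rate that is swept separately for each shape.

```python
def schedule_shape(step, total_steps, warmup_frac=0.1, decay_power=1.0):
    """Schedule shape normalized so its peak value is 1.

    Assumed form for illustration only: linear warmup for the first
    warmup_frac of training, then polynomial decay to 0. The paper's
    parameterized family is richer than this two-parameter example.
    """
    warmup_steps = max(int(warmup_frac * total_steps), 1)
    if step < warmup_steps:
        return step / warmup_steps  # linear warmup from 0 up to 1
    # polynomial decay from 1 down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return (1.0 - progress) ** decay_power

# Effective learning rate = base_lr * shape. Comparing two shapes
# fairly means tuning base_lr for each shape independently and
# comparing each at its own best base rate.
base_lr = 3e-4  # hypothetical value; found by a separate sweep
total_steps = 10_000
lrs = [base_lr * schedule_shape(s, total_steps) for s in range(total_steps)]
```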
Related Articles
I Was Wrong About AI Coding Assistants. Here's What Changed My Mind (and What I Built About It).
Dev.to
Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
A supervisor or "manager" AI agent is the wrong way to control AI
Reddit r/artificial
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA