Learning Rate Engineering: From Coarse Single Parameter to Layered Evolution

arXiv cs.AI / 5/1/2026

Key Points

  • The paper traces learning-rate scheduling’s evolution across five generations, from global fixed SGD rates to joint layer-and-time scheduling that adapts updates by depth and training phase.
  • It explains the motivation for finer-grained scheduling through the “impossible trinity” in transfer learning: lower layers need small changes to retain general features while higher layers require larger updates to learn new tasks (a minimal layer-wise decay sketch follows this list).
  • The authors introduce Discriminative Adaptive Layer Scaling (DALS), combining phase-adaptive cosine scheduling, depth-aware Grokfast-style gradient filtering, and LARS-like trust ratios into one optimizer framework.
  • Benchmarks across 18 learning-rate/optimizer strategies (including DALS variants) on synthetic data, CIFAR-10 (from scratch), RTE, TREC-6, and IMDb (fine-tuning) show that DALS delivers the best synthetic accuracy (98.0%), while DALS-Fast reaches 90% accuracy in 3 epochs.
  • Cross-dataset results reveal regime-dependent winners and highlight that some directional-decay methods (e.g., STLR+Discriminative/ULMFiT) can catastrophically fail on from-scratch tasks without pretrained representations.
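
To make the layer-wise idea concrete, the sketch below builds discriminative parameter groups in the ULMFiT spirit: each layer's learning rate shrinks geometrically with depth from the top, so lower layers move least. This is an illustrative reconstruction, not code from the paper; the `model.layers` attribute, `discriminative_param_groups` name, and the `base_lr`/`decay` values are assumptions for the example.

```python
import torch.nn as nn
from torch.optim import AdamW

def discriminative_param_groups(model: nn.Module, base_lr: float = 2e-5,
                                decay: float = 0.9):
    """Geometrically smaller learning rates for lower layers.

    Assumes `model.layers` is an ordered list of blocks (bottom first);
    real models need their own layer mapping.
    """
    layers = list(model.layers)
    groups = []
    for depth, layer in enumerate(layers):
        # Lower layers (small `depth`) get the smallest rate, so they move
        # the least and preserve their general pretrained features.
        lr = base_lr * decay ** (len(layers) - 1 - depth)
        groups.append({"params": layer.parameters(), "lr": lr})
    return groups

# Usage (hypothetical): optimizer = AdamW(discriminative_param_groups(model))
```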

Abstract

Learning rate scheduling has evolved from the single global fixed rate of early SGD to sophisticated layer-wise adaptive strategies. We systematize this evolution into five generations: (Gen1) global fixed learning rates, (Gen2) global scheduling, (Gen3) parameter-level adaptation, (Gen4) layer-level differentiation, and (Gen5) joint layer-time scheduling. We trace the fundamental motivation behind each transition, showing how the shift from one-size-fits-all to tailoring by layer and time addresses the impossible trinity of transfer learning: lower layers require small updates to preserve general knowledge while higher layers need large updates to adapt to new tasks. Building on this taxonomy, we propose Discriminative Adaptive Layer Scaling (DALS), a unified framework that integrates phase-adaptive cosine scheduling, depth-aware Grokfast gradient filtering, and LARS-style trust ratios into a single coherent optimizer. We benchmark 18 strategies spanning all five generations, including three DALS variants, on five datasets: synthetic, CIFAR-10 (from scratch), RTE, TREC-6, and IMDb (fine-tuning). On the synthetic dataset, DALS achieves the best accuracy at 98.0%, while DALS-Fast reaches 90% in just 3 epochs. The cross-dataset analysis reveals striking regime-dependent patterns: no single strategy wins across all regimes. Critically, STLR+Discriminative, the ULMFiT champion, catastrophically fails on from-scratch tasks (43.6% on TREC-6 from scratch vs. 96.8% with RAdam), confirming that directional decay biases are harmful without pretrained features. DALS avoids both extremes, achieving the best synthetic result while maintaining competitive fine-tuning performance.
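
The abstract describes DALS as the composition of three ingredients: a phase-adaptive cosine schedule, depth-aware Grokfast-style gradient filtering, and LARS-style trust ratios. The sketch below shows one plausible way these pieces could fit together in a single update step; the function names, the linear depth scaling, and all hyperparameter values are assumptions for illustration, not the authors' implementation.

```python
import math
import torch

def cosine_lr(step, total_steps, base_lr, warmup=500):
    """Cosine decay with linear warmup (one plausible phase-adaptive schedule)."""
    if step < warmup:
        return base_lr * step / max(1, warmup)
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

@torch.no_grad()
def dals_like_step(layers, ema_grads, step, total_steps,
                   base_lr=1e-3, alpha=0.98, lam=2.0):
    """One illustrative update combining the three ingredients.

    `layers` is an ordered list of nn.Modules (bottom first); `ema_grads`
    maps parameters to their EMA gradient buffers. `alpha` and `lam` mimic
    Grokfast-style low-pass filtering; none of these values come from the paper.
    """
    lr_t = cosine_lr(step, total_steps, base_lr)
    n = len(layers)
    for depth, layer in enumerate(layers):
        # Depth-aware filtering: amplify the slow gradient component more
        # strongly in lower layers (depth_scale goes from 1.0 to 0.0).
        depth_scale = 1.0 - depth / max(1, n - 1)
        for p in layer.parameters():
            if p.grad is None:
                continue
            buf = ema_grads.setdefault(p, torch.zeros_like(p.grad))
            buf.mul_(alpha).add_(p.grad, alpha=1 - alpha)
            g = p.grad + lam * depth_scale * buf
            # LARS-style trust ratio: scale the step by ||w|| / ||g||.
            w_norm, g_norm = p.norm(), g.norm()
            trust = (w_norm / (g_norm + 1e-12)).item() if w_norm > 0 else 1.0
            p.add_(g, alpha=-lr_t * trust)
```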