Evolution of Optimization Methods: Algorithms, Scenarios, and Evaluations

arXiv cs.LG / April 15, 2026


Key Points

  • The paper frames deep learning optimization as a trade-off between convergence speed, generalization quality, and computational efficiency, noting that first-order methods like SGD and Adam are often challenged at scale.
  • It highlights that large-scale training, differential privacy constraints, and distributed learning can expose shortcomings in standard optimizers, motivating renewed interest in second-order and zeroth-order approaches.
  • The authors argue the ecosystem lacks a unified framework that explains common principles and clarifies when each optimizer family is most appropriate.
  • They provide a retrospective analysis and comprehensive empirical evaluation of mainstream optimizers across varied architectures and training scenarios, distilling emerging trends and design trade-offs.
  • The work concludes with practical guidance for building more efficient, robust, and trustworthy optimization methods, alongside an open-source code release.
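To make the first-order baseline concrete: the SGD-with-momentum and Adam update rules the paper takes as its starting point can be sketched as below. This is a minimal illustration, not the authors' released code; the function names and the toy quadratic in the usage note are ours.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, momentum=0.9, velocity=None):
    """One SGD-with-momentum update: v <- mu*v + g, then w <- w - lr*v."""
    if velocity is None:
        velocity = np.zeros_like(w)
    velocity = momentum * velocity + grad
    return w - lr * velocity, velocity

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias correction for the zero init.
    Note the extra optimizer state (m, v) per parameter -- the memory
    overhead that motivates the paper's interest in lighter alternatives."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Running either step repeatedly on a toy objective such as f(w) = w² (gradient 2w) drives w toward the minimum; Adam additionally carries the two moment buffers as persistent state between calls.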

Abstract

Balancing convergence speed, generalization capability, and computational efficiency remains a core challenge in deep learning optimization. First-order gradient descent methods, epitomized by stochastic gradient descent (SGD) and Adam, serve as the cornerstone of modern training pipelines. However, large-scale model training, stringent differential privacy requirements, and distributed learning paradigms expose critical limitations in these conventional approaches regarding privacy protection and memory efficiency. To mitigate these bottlenecks, researchers explore second-order optimization techniques to surpass first-order performance ceilings, while zeroth-order methods reemerge to alleviate memory constraints inherent to large-scale training. Despite this proliferation of methodologies, the field lacks a cohesive framework that unifies underlying principles and delineates application scenarios for these disparate approaches. In this work, we retrospectively analyze the evolutionary trajectory of deep learning optimization algorithms and present a comprehensive empirical evaluation of mainstream optimizers across diverse model architectures and training scenarios. We distill key emerging trends and fundamental design trade-offs, pinpointing promising directions for future research. By synthesizing theoretical insights with extensive empirical evidence, we provide actionable guidance for designing next-generation highly efficient, robust, and trustworthy optimization methods. The code is available at https://github.com/APRIL-AIGC/Awesome-Optimizer.
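The zeroth-order methods the abstract mentions typically replace backpropagated gradients with estimates built from forward evaluations alone, which avoids storing activations for a backward pass. A minimal two-point (SPSA-style) estimator along a random probe direction might look like the following; this is an illustrative sketch of the general technique, not code from the paper's repository.

```python
import numpy as np

def zo_gradient(f, w, eps=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate.

    Probes f along one random Gaussian direction u and uses a central
    finite difference; the estimate (f(w+eps*u) - f(w-eps*u)) / (2*eps) * u
    equals grad f(w) in expectation. Only forward evaluations of f are
    needed -- no backprop graph or activation memory.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(w.shape)              # random probe direction
    diff = (f(w + eps * u) - f(w - eps * u)) / (2 * eps)
    return diff * u                               # unbiased estimate of grad f(w)
```

Plugging this estimator into a plain SGD loop gives a gradient-free optimizer whose per-step cost is two forward passes; the trade-off, as the survey's framing suggests, is higher estimator variance (growing with dimension) and thus slower convergence than true first-order methods.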