CLion: Efficient Cautious Lion Optimizer with Enhanced Generalization

arXiv cs.LG / 4/17/2026

📰 News · Models & Research

Key Points

  • The paper addresses a gap in the literature on the Lion optimizer by providing a generalization analysis based on algorithmic stability, deriving a generalization error bound of O(1/(N τ^T)).
  • It shows a notable connection that SignSGD achieves the same generalization error bound as Lion.
  • The authors propose an efficient “Cautious Lion” (CLion) optimizer that applies the sign function more cautiously to improve generalization performance.
  • CLion is proven to have a better generalization rate of O(1/N) (vs. Lion’s O(1/(N τ^T))) and also enjoys a fast convergence rate of O(√d / T^{1/4}) for nonconvex stochastic optimization under an ℓ1-norm condition on the gradient.
  • Extensive numerical experiments support the claimed effectiveness of CLion.

Abstract

The Lion optimizer is a popular learning-based optimization algorithm in machine learning, showing impressive performance in training many deep learning models. Although the convergence properties of the Lion optimizer have been studied, a generalization analysis is still missing. To fill this gap, we study the generalization properties of Lion via algorithmic stability, using mathematical induction. Specifically, we prove that Lion has a generalization error of O(\frac{1}{N\tau^T}), where N is the training sample size, \tau>0 denotes the smallest absolute value of a non-zero element in the gradient estimator, and T is the total number of iterations. In addition, we obtain an interesting byproduct: the SignSGD algorithm has the same generalization error as Lion. To enhance the generalization of Lion, we design a novel, efficient Cautious Lion (CLion) optimizer that uses the sign function cautiously. Moreover, we prove that CLion has a lower generalization error of O(\frac{1}{N}) than Lion's O(\frac{1}{N\tau^T}), since the parameter \tau is generally very small. Meanwhile, we study the convergence properties of CLion and prove that it has a fast convergence rate of O(\frac{\sqrt{d}}{T^{1/4}}) under the \ell_1-norm of the gradient for nonconvex stochastic optimization, where d denotes the model dimension. Extensive numerical experiments demonstrate the effectiveness of our CLion optimizer.