Optimal Projection-Free Adaptive SGD for Matrix Optimization

arXiv cs.LG / 4/6/2026


Key Points

  • The paper proposes an improved analytical framework for Leon, an online convex optimization method that avoids costly quadratic projections at every iteration.
  • It shows that Leon’s preconditioner satisfies certain stability properties, which remove the need for an additional hyperparameter and enable stronger convergence guarantees.
  • The authors introduce the first practical variant of One-sided Shampoo with Nesterov acceleration that still avoids per-iteration projections.
  • They also provide improved dimension-independent rates for the non-smooth, non-convex setting and unify the analysis, yielding accelerated projection-free adaptive SGD with (block-)diagonal preconditioners.

Abstract

Recently, Jiang et al. [2026] developed Leon, a practical variant of the One-sided Shampoo algorithm [Xie et al., 2025a; An et al., 2025] for online convex optimization, which does not require computing a costly quadratic projection at each iteration. Unfortunately, according to the existing analysis, Leon requires tuning an additional hyperparameter in its preconditioner and cannot achieve dimension-independent convergence guarantees for convex optimization problems beyond the bounded-gradients assumption. In this paper, we resolve this issue by proving certain stability properties of Leon's preconditioner. Using our improved analysis, we show that tuning the extra hyperparameter can be avoided and, more importantly, develop the first practical variant of One-sided Shampoo with Nesterov acceleration, which does not require computing projections at each iteration. As a side contribution, we obtain improved dimension-independent rates in the non-smooth, non-convex setting and develop a unified analysis of the proposed algorithm, which yields accelerated projection-free adaptive SGD with (block-)diagonal preconditioners.
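To make the "accelerated projection-free adaptive SGD with diagonal preconditioners" idea concrete, here is a minimal sketch of one such update: an AdaGrad-style diagonal preconditioner combined with Nesterov-type look-ahead momentum, applied without any projection step. This is a generic illustration of the algorithm family, not the paper's Leon method; the function name, step sizes, and the toy least-squares problem are all assumptions for the example.

```python
import numpy as np

def accelerated_diag_adaptive_step(w, v, G, grad_fn, lr=0.01, mu=0.9, eps=1e-8):
    """One hypothetical step of Nesterov-accelerated adaptive SGD with a
    diagonal (AdaGrad-style) preconditioner and no projection step.

    w: parameters, v: momentum buffer, G: accumulated squared gradients.
    """
    g = grad_fn(w + mu * v)                    # look-ahead gradient (Nesterov-style)
    G = G + g * g                              # diagonal preconditioner accumulation
    v = mu * v - lr * g / (np.sqrt(G) + eps)   # preconditioned momentum update
    w = w + v                                  # plain step: no projection applied
    return w, v, G

# Toy convex problem: f(w) = 0.5 * ||A w - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda w: A.T @ (A @ w - b)
loss = lambda w: 0.5 * np.sum((A @ w - b) ** 2)

w, v, G = np.zeros(5), np.zeros(5), np.zeros(5)
l0 = loss(w)
for _ in range(200):
    w, v, G = accelerated_diag_adaptive_step(w, v, G, grad)
```

A block-diagonal variant would accumulate (and invert) small per-block matrices instead of the elementwise vector `G`; the projection-free property is simply that `w` is never mapped back onto a constraint set or matrix cone after the update.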