Optimal Projection-Free Adaptive SGD for Matrix Optimization
arXiv cs.LG / 4/6/2026
Key Points
- The paper proposes an improved analytical framework for Leon, an online convex optimization method that avoids costly quadratic projections at every iteration.
- It shows that Leon’s preconditioner has stabilizing properties, which removes the need for an additional hyperparameter and enables stronger convergence guarantees.
- The authors introduce the first practical variant of One-sided Shampoo augmented with Nesterov acceleration that still avoids per-iteration projections.
- They also provide improved dimension-independent rates for the non-smooth, non-convex setting and unify the analysis, yielding accelerated projection-free adaptive SGD with (block-)diagonal preconditioners.
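To make the "projection-free adaptive SGD with a diagonal preconditioner" idea concrete, here is a minimal, generic AdaGrad-style sketch. This is not the paper's Leon algorithm or its One-sided Shampoo variant; the function name and hyperparameters are illustrative. The point it shows is that a diagonal preconditioner is applied elementwise, so each step needs no matrix decomposition and no projection onto a feasible set:

```python
import numpy as np

def diag_precond_step(W, grad, accum, lr=0.1, eps=1e-8):
    """One adaptive SGD step with a diagonal (elementwise) preconditioner.

    Illustrative AdaGrad-style update, not the paper's method. Because the
    preconditioner is diagonal, the step costs O(#entries of W): there is
    no matrix inverse, no eigendecomposition, and no projection step.
    """
    accum = accum + grad ** 2                # running sum of squared gradients
    precond = 1.0 / (np.sqrt(accum) + eps)   # diagonal preconditioner
    W = W - lr * precond * grad              # preconditioned gradient step
    return W, accum

# Toy run: minimize f(W) = 0.5 * ||W||_F^2, whose gradient is W itself.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
accum = np.zeros_like(W)
for _ in range(500):
    W, accum = diag_precond_step(W, W.copy(), accum)
print(np.linalg.norm(W))  # the Frobenius norm has shrunk close to 0
```

A full-matrix preconditioner would replace the elementwise `precond * grad` with a matrix product against an inverse-root of an accumulated second-moment matrix, which is exactly the per-iteration cost the projection-free analysis is designed to avoid paying on top of a projection step.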