Gradient-Variation Regret Bounds for Unconstrained Online Learning

arXiv stat.ML / 4/14/2026


Key Points

  • The paper proposes parameter-free algorithms for unconstrained online learning whose regret scales with the gradient-variation term \(V_T(u)\), defined as the sum of squared norms of differences between successive loss gradients evaluated at the comparator \(u\) (made concrete in the sketch after this list).
  • For \(L\)-smooth convex losses, it provides fully adaptive regret bounds of order \(\widetilde{O}(\|u\|\sqrt{V_T(u)} + L\|u\|^2 + G^4)\), without requiring prior knowledge of the comparator norm \(\|u\|\), the Lipschitz constant \(G\), or the smoothness constant \(L\).
  • Each online update admits an efficient closed-form computation, making the approach practical from a per-iteration complexity standpoint.
  • The results are extended to dynamic regret and yield immediate improvements for the stochastically-extended adversarial (SEA) model, surpassing the previous best result by Wang et al. (2025).

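To make the gradient-variation term concrete, here is a minimal sketch (a hypothetical helper, not code from the paper) that computes \(V_T(u)\) from the sequence of gradients \(\nabla f_t(u)\) evaluated at a fixed comparator \(u\):

```python
import numpy as np

def gradient_variation(grads_at_u: np.ndarray) -> float:
    """Compute V_T(u) = sum_{t=2}^T ||grad f_t(u) - grad f_{t-1}(u)||^2.

    grads_at_u: array of shape (T, d); row t holds the gradient of the
    round-t loss f_t evaluated at a fixed comparator u.
    """
    diffs = np.diff(grads_at_u, axis=0)   # successive gradient differences
    return float(np.sum(diffs ** 2))      # sum of squared Euclidean norms
```

Since each summand is at most \(4G^2\), \(V_T(u)\) never exceeds \(4G^2(T-1)\); the bound pays off when the gradients drift slowly at \(u\), in which case the leading term \(\widetilde{O}(\|u\|\sqrt{V_T(u)})\) can be far smaller than the worst-case \(\widetilde{O}(\|u\|\sqrt{T})\) rate.
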
Abstract

We develop parameter-free algorithms for unconstrained online learning with regret guarantees that scale with the gradient variation \(V_T(u) = \sum_{t=2}^T \|\nabla f_t(u) - \nabla f_{t-1}(u)\|^2\). For \(L\)-smooth convex losses, we provide fully adaptive algorithms achieving regret of order \(\widetilde{O}(\|u\|\sqrt{V_T(u)} + L\|u\|^2 + G^4)\) without requiring prior knowledge of the comparator norm \(\|u\|\), the Lipschitz constant \(G\), or the smoothness constant \(L\). The update in each round can be computed efficiently via a closed-form expression. Our results extend to dynamic regret and have immediate implications for the stochastically-extended adversarial (SEA) model, significantly improving upon the previous best-known result [Wang et al., 2025].
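
The abstract says each round's update admits a closed-form expression but does not reproduce it. As background on what a closed-form, parameter-free update looks like, the sketch below implements the classical one-dimensional Krichevsky–Trofimov (KT) coin-betting learner in the style of Orabona and Pál (2016); this is a standard baseline for unconstrained online learning, not the paper's algorithm, and it assumes gradients rescaled so that \(|g_t| \le 1\):

```python
def kt_coin_betting(gradients, initial_wealth=1.0):
    """One-dimensional Krichevsky-Trofimov coin-betting learner.

    A classical parameter-free baseline, NOT the paper's algorithm.
    Assumes every gradient g_t satisfies |g_t| <= 1 (i.e., G = 1).
    """
    wealth = initial_wealth
    coin_sum = 0.0                 # running sum of "coin outcomes" c_s = -g_s
    iterates = []
    for t, g in enumerate(gradients, start=1):
        beta = coin_sum / t        # KT betting fraction: closed form, no tuning
        x = beta * wealth          # bet a fraction of current wealth; play x_t
        iterates.append(x)
        wealth -= g * x            # wealth update after observing g_t
        coin_sum -= g
    return iterates
```

Against linear losses \(f_t(x) = g_t x\), this learner guarantees regret roughly \(O(|u|\sqrt{T \log(1 + |u| T)})\) against any comparator \(u\) without knowing \(|u|\) in advance; the paper's contribution is to replace the \(\sqrt{T}\) dependence with the data-dependent \(\sqrt{V_T(u)}\) while retaining this parameter-free property.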