Abstract
We develop parameter-free algorithms for unconstrained online learning with regret guarantees that scale with the gradient variation V_T(u) = \sum_{t=2}^T \|\nabla f_t(u) - \nabla f_{t-1}(u)\|^2. For L-smooth convex losses, we provide fully adaptive algorithms achieving regret of order \widetilde{O}(\|u\|\sqrt{V_T(u)} + L\|u\|^2 + G^4) without requiring prior knowledge of the comparator norm \|u\|, the Lipschitz constant G, or the smoothness parameter L. The update in each round can be computed efficiently in closed form. Our results extend to dynamic regret and have immediate implications for the stochastically extended adversarial (SEA) model, significantly improving upon the previous best-known result [Wang et al., 2025].