Parameter-Free Dynamic Regret for Unconstrained Linear Bandits

arXiv cs.LG / 3/30/2026


Key Points

  • The paper addresses dynamic regret minimization in unconstrained adversarial linear bandits where the learner competes against an arbitrary time-varying comparator sequence with only point-evaluation feedback.
  • It introduces a simple method that combines the guarantees of multiple bandit algorithms to adapt automatically to the comparator switching count \(S_T\), without requiring prior knowledge of \(S_T\) (see the illustrative sketch after this list).
  • The work claims the first linear-bandit algorithm achieving the optimal regret rate of \(\mathcal{O}(\sqrt{d(1+S_T)T})\) up to poly-logarithmic factors.
  • The result is positioned as resolving a long-standing open problem in this area, establishing theoretically optimal performance for nonstationary environments.
  • By matching the optimal dependence on dimensionality \(d\), time horizon \(T\), and switching \(S_T\), the approach strengthens the theoretical toolkit for dynamic decision-making in adversarial settings.
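To make the combining idea concrete, here is a minimal, hypothetical Python sketch of one natural way to aggregate base bandit learners: run several copies of a linear bandit algorithm, each restarting on a different geometrically spaced schedule (i.e., each implicitly tuned for a different guess of the switching count \(S_T\)), and route rounds among them with a naive EXP3-style meta-learner. All names and constants below (BaseLinearBandit, Combiner, restart_every, the one-point gradient estimator, the exploration parameter) are invented for illustration; this is not the paper's algorithm, and a naive combiner like this is not claimed to achieve the \(\mathcal{O}(\sqrt{d(1+S_T)T})\) guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)


class BaseLinearBandit:
    """Hypothetical base learner: a gradient-style linear bandit that restarts
    every `restart_every` rounds, i.e. it is implicitly tuned for one guess of
    how often the comparator switches."""

    def __init__(self, dim, restart_every, lr=0.01, delta=0.1):
        self.dim, self.restart_every = dim, restart_every
        self.lr, self.delta = lr, delta
        self.t = 0
        self.theta = np.zeros(dim)  # current action estimate

    def act(self):
        # Perturb the estimate so a single scalar loss carries gradient information.
        return self.theta + self.delta * rng.standard_normal(self.dim)

    def update(self, action, loss):
        self.t += 1
        if self.t % self.restart_every == 0:
            self.theta = np.zeros(self.dim)  # restart: forget the past
            return
        # Crude one-point gradient estimate from the bandit feedback.
        grad_est = loss * (action - self.theta) / (self.delta ** 2)
        self.theta -= self.lr * grad_est
        # Clip to the unit ball purely to keep this toy demo numerically tame
        # (the setting in the paper is unconstrained).
        norm = np.linalg.norm(self.theta)
        if norm > 1.0:
            self.theta /= norm


class Combiner:
    """Naive EXP3-style meta-learner over base learners whose restart schedules
    are geometrically spaced, so that one of them roughly matches the unknown
    number of comparator switches S_T."""

    def __init__(self, dim, horizon, gamma=0.1):
        schedules = [2 ** k for k in range(1, int(np.log2(horizon)) + 1)]
        self.bases = [BaseLinearBandit(dim, s) for s in schedules]
        self.weights = np.ones(len(self.bases))
        self.gamma = gamma
        self.eta = np.sqrt(np.log(len(self.bases)) / horizon)

    def act(self):
        w = self.weights / self.weights.sum()
        # Mix in uniform exploration so importance weights stay bounded.
        self.p = (1 - self.gamma) * w + self.gamma / len(self.bases)
        self.chosen = rng.choice(len(self.bases), p=self.p)
        self.action = self.bases[self.chosen].act()
        return self.action

    def update(self, loss):
        # Only the selected base learner sees the bandit feedback this round.
        self.bases[self.chosen].update(self.action, loss)
        # Importance-weighted loss estimate drives the exponential-weights update.
        est = np.zeros(len(self.bases))
        est[self.chosen] = loss / self.p[self.chosen]
        self.weights *= np.exp(-self.eta * est)
        self.weights /= self.weights.sum()  # renormalise for numerical stability


# Toy run against a piecewise-stationary linear loss that switches every 250 rounds.
T, d = 1000, 3
algo = Combiner(d, T)
for t in range(T):
    g_t = np.ones(d) if (t // 250) % 2 == 0 else -np.ones(d)
    x_t = algo.act()
    algo.update(float(g_t @ x_t))
```

The point of the sketch is only the structure: a bank of base learners covering geometrically spaced restart schedules, plus a meta-layer that routes bandit feedback to the selected base learner, so that no prior knowledge of \(S_T\) is needed at run time.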

Abstract

We study dynamic regret minimization in unconstrained adversarial linear bandit problems. In this setting, a learner must minimize the cumulative loss relative to an arbitrary sequence of comparators \(\boldsymbol{u}_1,\ldots,\boldsymbol{u}_T \in \mathbb{R}^d\), but receives only point-evaluation feedback on each round. We provide a simple approach to combining the guarantees of several bandit algorithms, allowing us to optimally adapt to the number of switches \(S_T = \sum_t \mathbb{I}\{\boldsymbol{u}_t \neq \boldsymbol{u}_{t-1}\}\) of an arbitrary comparator sequence. In particular, we provide the first algorithm for linear bandits achieving the optimal regret guarantee of order \(\mathcal{O}\big(\sqrt{d(1+S_T)T}\big)\) up to poly-logarithmic terms without prior knowledge of \(S_T\), thus resolving a long-standing open problem.
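For reference, the dynamic regret quantity described in the abstract can be written out explicitly. The action symbol \(\boldsymbol{x}_t\) and loss vector \(\boldsymbol{g}_t\) below are generic placeholders for the learner's play and the adversarial linear loss at round \(t\) (they do not appear in the excerpt above):

\[
\mathrm{Reg}_T(\boldsymbol{u}_{1:T}) = \sum_{t=1}^{T} \langle \boldsymbol{g}_t, \boldsymbol{x}_t \rangle - \sum_{t=1}^{T} \langle \boldsymbol{g}_t, \boldsymbol{u}_t \rangle, \qquad S_T = \sum_t \mathbb{I}\{\boldsymbol{u}_t \neq \boldsymbol{u}_{t-1}\},
\]

and the claimed guarantee is \(\mathrm{Reg}_T(\boldsymbol{u}_{1:T}) = \widetilde{\mathcal{O}}\big(\sqrt{d(1+S_T)T}\big)\) for every comparator sequence, without prior knowledge of \(S_T\).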