Self-Normalized Martingales and Uniform Regret Bounds for Linear Regression

arXiv stat.ML / 5/5/2026


Key Points

  • The paper studies the self-normalized martingale inequalities used to build confidence ellipsoids for online least squares, and observes that the standard existing bounds are not scale-invariant: they rely on bounded covariates and an explicit regularization matrix, so rescaling the covariates changes the bound even though the self-normalized quantity itself is unchanged (see the illustrative inequality after this list).
  • It proves a sharp dimension-dependent dichotomy: without extra assumptions, nontrivial scale-invariant self-normalized upper bounds are possible only in dimension d=1, where O(log T) bounds hold with no restrictions on the covariates.
  • For higher dimensions (d>1), the authors show that no nontrivial scale-invariant bound can hold in full generality, establishing a fundamental impossibility.
  • By linking this dichotomy to “doubly-uniform” regret in online linear regression (independent of both covariate scale and comparator norm), the work resolves an open question from ALT 2019: O(log T) doubly-uniform regret is achievable in d=1, while sublinear doubly-uniform regret is impossible for d>1.
  • Under an additional smoothness assumption (bounded Radon–Nikodym derivatives of the conditional covariate laws with respect to a fixed base measure), the paper recovers sublinear regret for d>1 without bounded covariates and derives, for adaptive non-i.i.d. vector martingales, a scale-invariant self-normalized concentration inequality free of the usual regularization penalties.
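
To make the scale-invariance issue concrete, the following is the standard self-normalized inequality of Abbasi-Yadkori, Pál, and Szepesvári (2011), representative of the regularized bounds the paper critiques; the notation is illustrative background and is not taken from the paper itself. With noise terms η_s that are conditionally σ-sub-Gaussian, S_t = Σ_{s≤t} η_s X_s, and regularized Gram matrix V̄_t = λI + Σ_{s≤t} X_s X_sᵀ, one has, with probability at least 1 − δ, simultaneously for all t ≥ 1:

    ‖S_t‖²_{V̄_t^{-1}} ≤ 2σ² log( det(V̄_t)^{1/2} / (δ λ^{d/2}) ).

Rescaling the covariates X_s → cX_s leaves the unregularized quantity ‖S_t‖_{V_t^{-1}} (with V_t = Σ_{s≤t} X_s X_sᵀ) unchanged, but the right-hand side depends on the scale c through λ and det(V̄_t); this is the sense in which the standard upper bounds fail to be scale-invariant even though the self-normalized quantity itself is.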

Abstract

Self-normalized martingale inequalities lie at the heart of confidence ellipsoids for online least squares and, more broadly, many bandit and reinforcement-learning results. Yet existing vector and scalar results typically rely on bounded covariates and an explicit regularization matrix, producing bounds that are not scale-invariant: although the self-normalized quantity is scale-invariant by definition, its standard upper bounds are not. We characterize when scale-invariant upper bounds on self-normalized martingales are possible. Without further assumptions, we prove that nontrivial scale-invariant bounds exist only in dimension d=1; moreover, in d=1 we obtain O(log T) scale-invariant self-normalized bounds without any assumptions on the covariates. In contrast, for d>1 we show that no nontrivial scale-invariant bound can hold in full generality. We then connect this dichotomy to “doubly-uniform” regret in online linear regression (i.e., regret bounds that are simultaneously independent of the covariate scale and the comparator norm) and use it to resolve the open question of Gaillard, Gerchinovitz, Huard, and Stoltz, “Uniform regret bounds over ℝ^d for the sequential linear regression problem with the square loss” (ALT 2019): in d=1 we give an explicit algorithm with O(log T) doubly-uniform regret, whereas for d>1 sublinear doubly-uniform regret is impossible. Finally, under a natural smoothness condition (bounded Radon–Nikodym derivatives of the conditional covariate laws with respect to a fixed base measure), we recover sublinear regret for d>1 without bounded covariates and derive a self-normalized concentration inequality free of the usual regularization penalties, yielding what is arguably the first natural scale-invariant bound for adaptive, non-i.i.d. vector martingales.
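
For context on the “doubly-uniform” notion: in sequential linear regression with the square loss, the regret of a forecaster that predicts ŷ_t against a fixed comparator θ ∈ ℝ^d is standardly defined as

    R_T(θ) = Σ_{t=1..T} (ŷ_t − y_t)² − Σ_{t=1..T} (⟨θ, x_t⟩ − y_t)²,

and a doubly-uniform bound must hold for every θ ∈ ℝ^d while depending neither on the comparator norm ‖θ‖ nor on the magnitude of the covariates x_t. By contrast, classical forecasters such as Vovk–Azoury–Warmuth with ridge parameter λ guarantee, up to constants, regret of order λ‖θ‖² + d B² log(1 + T X²/(λ d)) when ‖x_t‖ ≤ X and |y_t| ≤ B, which grows with both ‖θ‖ and X. These formulas are standard background stated here for illustration; they are not quoted from the paper.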