On the Peril of (Even a Little) Nonstationarity in Satisficing Regret Minimization

arXiv stat.ML / 4/28/2026

💬 Opinion · Models & Research

Key Points

  • The paper analyzes how satisficing regret minimization behaves in nonstationary $K$-armed bandit problems, focusing on piecewise-stationary environments with $L$ stationary segments (formalized in the sketch after this list).
  • It proves that in the realizable, piecewise-stationary setting (with $L\ge2$), the optimal satisficing regret scales as $\Theta(L\log T)$, meaning regret must grow with the time horizon even for slight nonstationarity.
  • This scaling sharply contrasts with the fully stationary case ($L=1$), where a $T$-independent $\Theta(1)$ satisficing regret is achievable under realizability.
  • The authors introduce a new Fano-based analytical framework for nonstationary bandits, using a “post-interaction reference” construction that extends classical Fano methods and interactive Fano techniques for stationary bandits.
  • They additionally identify a special regime where constant satisficing regret becomes possible again.
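
For concreteness, one standard way to formalize the quantity being bounded is sketched below; the satisfaction level $S$, the notation $\mu_t(a)$, and the exact form of the realizability condition are illustrative assumptions rather than the paper's verbatim definitions.

$$
\mathrm{Reg}_S(T) \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T}\bigl(S-\mu_t(A_t)\bigr)_{+}\right],
\qquad (x)_{+}:=\max\{x,0\},
$$

where $A_t$ is the arm pulled at round $t$, $\mu_t(a)$ is the mean reward of arm $a\in\{1,\dots,K\}$ at round $t$, the horizon $\{1,\dots,T\}$ splits into $L$ segments on each of which the means are constant, and realizability requires every segment to contain at least one arm with mean at least $S$. Read this way, the result says that for $L\ge 2$ even the best policy must let $\mathrm{Reg}_S(T)$ grow as $\Theta(L\log T)$, whereas for $L=1$ it can remain bounded.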

Abstract

Motivated by the principle of satisficing in decision-making, we study satisficing regret guarantees for nonstationary $K$-armed bandits. We show that in the general realizable, piecewise-stationary setting with $L$ stationary segments, the optimal regret is $\Theta(L\log T)$ as long as $L\geq 2$. This stands in sharp contrast to the case of $L=1$ (i.e., the stationary setting), where a $T$-independent $\Theta(1)$ satisficing regret is achievable under realizability. In other words, the optimal regret has to scale with $T$ even if just a little nonstationarity is present. A key ingredient in our analysis is a novel Fano-based framework tailored to nonstationary bandits via a *post-interaction reference* construction. This framework strictly extends the classical Fano method for passive estimation as well as recent interactive Fano techniques for stationary bandits. As a complement, we also discuss a special regime in which constant satisficing regret is again possible.
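
The contrast is easy to see in simulation. Below is a minimal sketch, not the paper's algorithm or lower-bound construction: a piecewise-stationary Bernoulli bandit with $L=2$ segments and a naive "commit to the first arm that looks satisfying" policy, whose satisficing regret stays flat within the first segment but accumulates steadily after the change point. All constants (K, L, T, the threshold S, and the mean tables) are made-up assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

K, L, T = 3, 2, 20_000   # arms, stationary segments, horizon (illustrative)
S = 0.6                  # satisfaction threshold (assumed)

# One row of arm means per segment; each segment is realizable:
# at least one arm has mean >= S.
means = np.array([
    [0.7, 0.4, 0.3],     # segment 1: arm 0 is satisfying
    [0.3, 0.4, 0.7],     # segment 2: arm 2 is satisfying
])
boundaries = np.linspace(0, T, L + 1).astype(int)  # segment boundaries

def segment_of(t: int) -> int:
    """Index of the stationary segment containing round t."""
    return int(np.searchsorted(boundaries, t, side="right")) - 1

def naive_satisficing_policy(horizon: int) -> float:
    """Explore round-robin until some arm's empirical mean clears S,
    then commit to it forever (oblivious to change points)."""
    counts = np.zeros(K)
    sums = np.zeros(K)
    committed = None
    regret = 0.0
    for t in range(horizon):
        arm = committed if committed is not None else t % K
        mu = means[segment_of(t), arm]
        reward = float(rng.random() < mu)        # Bernoulli(mu) reward
        counts[arm] += 1
        sums[arm] += reward
        if committed is None and counts[arm] >= 100 and sums[arm] / counts[arm] >= S:
            committed = arm                      # arm looks satisfying: stick with it
        regret += max(S - mu, 0.0)               # satisficing-regret increment
    return regret

print("satisficing regret of the naive policy:", naive_satisficing_policy(T))
```

In the stationary case ($L=1$) such a commit-and-stick strategy can keep satisficing regret bounded under realizability; with even one change point it typically pays a constant per round afterwards. The paper's point is sharper still: no policy, however adaptive, can do better than $\Theta(L\log T)$ once $L\ge2$.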
