Regret Analysis of Sleeping Competing Bandits

arXiv cs.LG / 3/23/2026


Key Points

  • The paper introduces Sleeping Competing Bandits, extending competing bandits to settings where the availability of players and arms can change over time.
  • It redefines regret for this setting and derives an asymptotic upper bound of O(NK log T_i / Δ^2) together with a lower bound of Ω(N(K−N+1) log T_i / Δ^2) under the same assumptions; the two coincide up to the gap between K and K−N+1.
  • An algorithm is proposed that achieves this asymptotic upper bound, and is therefore asymptotically optimal when the number of arms K is large relative to the number of players N.
  • The work sits at the intersection of online learning and game theory and has implications for dynamic, real-world decision problems with fluctuating participation and resources.
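The key points above describe bandit learning under time-varying arm availability. The paper's algorithm is not reproduced here; as an illustration only, the following is a generic "sleeping" variant of the standard UCB rule, restricting each round's choice to the arms that happen to be awake (all names, means, and availability probabilities are hypothetical):

```python
import math
import random

def sleeping_ucb_round(counts, sums, available, t):
    """One round of a generic UCB pick restricted to currently available arms.

    This is a standard sleeping-bandit heuristic, not the paper's algorithm.
    counts[a]/sums[a] hold pull counts and reward sums; `available` is the
    list of arm indices awake this round; t is the round index (>= 1).
    """
    # Play each available arm once before applying the UCB rule.
    for a in available:
        if counts[a] == 0:
            return a
    # UCB index: empirical mean plus an exploration bonus.
    return max(available,
               key=lambda a: sums[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

# Tiny simulation: 3 Bernoulli arms, each arm awake ~70% of rounds.
random.seed(0)
means = [0.2, 0.5, 0.8]
counts, sums = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 2001):
    available = [a for a in range(3) if random.random() < 0.7]
    available = available or [random.randrange(3)]  # ensure someone is awake
    a = sleeping_ucb_round(counts, sums, available, t)
    counts[a] += 1
    sums[a] += 1.0 if random.random() < means[a] else 0.0
```

Over enough rounds, the pull counts concentrate on the best available arm, which is the behavior the regret bounds above quantify.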

Abstract

Competing Bandits is a recently emerged framework that integrates multi-armed bandits from online learning with stable matching from game theory. While conventional models assume that all players and arms are constantly available, in real-world problems their availability can vary arbitrarily over time. In this paper, we formulate this setting as Sleeping Competing Bandits. To analyze it, we naturally extend the regret definition used in existing competing bandits and derive regret bounds for the proposed model. We propose an algorithm that achieves an asymptotic regret bound of O(NK log T_i / Δ^2) under reasonable assumptions, where N is the number of players, K is the number of arms, T_i is the number of rounds of each player p_i, and Δ is the minimum reward gap. We also provide a regret lower bound of Ω(N(K−N+1) log T_i / Δ^2) under the same assumptions. This implies that our algorithm is asymptotically optimal in the regime where the number of arms K is large relative to the number of players N.
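To see why optimality holds only when K dominates N, note that the upper and lower bounds differ exactly by a factor K / (K−N+1), which tends to 1 as K grows with N fixed. A quick numeric check (constants omitted; function names are our own, not the paper's):

```python
import math

def upper_bound(N, K, T, delta):
    # Asymptotic upper bound from the paper: O(N K log T / Δ^2), constants dropped.
    return N * K * math.log(T) / delta ** 2

def lower_bound(N, K, T, delta):
    # Lower bound: Ω(N (K − N + 1) log T / Δ^2), constants dropped.
    return N * (K - N + 1) * math.log(T) / delta ** 2

# The ratio upper/lower equals K / (K − N + 1) and shrinks toward 1 as K grows.
for K in (10, 100, 1000):
    ratio = upper_bound(5, K, 10_000, 0.1) / lower_bound(5, K, 10_000, 0.1)
    print(f"K={K}: upper/lower = {ratio:.3f}")
```

With N = 5, the ratio drops from about 1.67 at K = 10 to about 1.004 at K = 1000, matching the paper's claim of asymptotic optimality when K is large relative to N.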