Online combinatorial optimization with stochastic decision sets and adversarial losses

arXiv cs.LG / 4/29/2026


Key Points

  • The paper addresses sequential learning when the set of available actions is not fixed: actions are composite (e.g., subsets of sensors, road segments, or goods), and their individual components can become unavailable at random due to real-world failures or constraints.
  • It introduces learning algorithms derived from the Follow-The-Perturbed-Leader (FTPL) framework, tailored to several feedback models: full information, (semi-)bandit, and an intermediate restricted-information setting (a minimal FTPL sketch follows this list).
  • A key contribution is a new loss estimation method called “Counting Asleep Times,” designed to handle the learner’s partial observability when action availability changes over time.
  • The authors prove regret bounds for all studied settings; in particular, their results significantly improve the best known guarantees achieved by an efficient algorithm for the sleeping bandit problem with stochastic availability.
  • Empirical evaluations further show that the proposed methods outperform existing approaches for these stochastic-availability problems.
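
To make the setting concrete, here is a minimal Follow-The-Perturbed-Leader loop restricted to the actions that happen to be awake in each round. It is a hedged sketch, not the paper's algorithm: it uses single-component actions, assumes the losses of awake actions are revealed after each round, and the function name, exponential perturbation, and scale parameter `eta` are illustrative choices.

```python
import numpy as np

def ftpl_sleeping(losses, available, eta=1.0, seed=0):
    """Minimal FTPL loop over a time-varying (stochastically available) action set.

    Illustrative sketch only: single-component actions and full-information-style
    feedback (losses of awake actions revealed each round); `eta` is a hypothetical
    perturbation scale. The paper's algorithms handle composite actions and
    several feedback models.

    losses:    (T, K) array of adversarial losses.
    available: (T, K) boolean array; available[t, i] means action i is awake in round t.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cum_loss = np.zeros(K)          # cumulative observed losses per action
    total = 0.0
    for t in range(T):
        awake = np.flatnonzero(available[t])
        if awake.size == 0:         # nothing can be played this round
            continue
        # Perturb cumulative losses and follow the leader among awake actions only.
        perturbed = cum_loss[awake] - rng.exponential(eta, size=awake.size)
        play = awake[np.argmin(perturbed)]
        total += losses[t, play]
        # Feedback: accumulate the revealed losses of the awake actions.
        cum_loss[awake] += losses[t, awake]
    return total
```

The fresh perturbation drawn each round supplies the randomization that keeps plain follow-the-leader from being exploited by an adversary; restricting the argmin to awake actions is what distinguishes the sleeping variant from the standard setup.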

Abstract

Most work on sequential learning assumes a fixed set of actions that are available all the time. However, in practice, actions can consist of picking subsets of readings from sensors that may break from time to time, road segments that can be blocked or goods that are out of stock. In this paper we study learning algorithms that are able to deal with stochastic availability of such unreliable composite actions. We propose and analyze algorithms based on the Follow-The-Perturbed-Leader prediction method for several learning settings differing in the feedback provided to the learner. Our algorithms rely on a novel loss estimation technique that we call Counting Asleep Times. We deliver regret bounds for our algorithms for the previously studied full information and (semi-)bandit settings, as well as a natural middle point between the two that we call the restricted information setting. A special consequence of our results is a significant improvement of the best known performance guarantees achieved by an efficient algorithm for the sleeping bandit problem with stochastic availability. Finally, we evaluate our algorithms empirically and show their improvement over the known approaches.
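
To see why a dedicated loss-estimation technique is needed here: a component whose loss is observed only while it is awake would otherwise look cheaper than it really is, simply because asleep rounds contribute nothing to its running total. The sketch below applies a standard importance-weighting correction based on empirical awake frequencies; it is not the Counting Asleep Times estimator from the paper, and the feedback assumption (awake components' losses are observed) and all names are illustrative.

```python
import numpy as np

def importance_weighted_cumulative_losses(losses, available):
    """Cumulative-loss estimates for components observable only while awake.

    Generic stand-in, not the paper's Counting Asleep Times estimator: each
    observed loss is divided by the component's empirical awake frequency, so
    that in expectation the estimate also accounts for asleep rounds
    (assuming availability is drawn i.i.d. across rounds).

    losses:    (T, K) adversarial losses.
    available: (T, K) boolean availability indicators.
    Returns:   (K,) estimated cumulative losses, usable inside an FTPL-style learner.
    """
    T, K = losses.shape
    awake_so_far = np.zeros(K)        # rounds each component has been awake
    cum_estimate = np.zeros(K)
    for t in range(T):
        awake = available[t]
        awake_so_far += awake
        # Empirical availability frequency, clipped away from zero.
        p_hat = np.maximum(awake_so_far, 1.0) / (t + 1)
        # Up-weight observed losses by 1 / p_hat to compensate for asleep rounds.
        cum_estimate += np.where(awake, losses[t] / p_hat, 0.0)
    return cum_estimate
```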