AI Navigate

Off-Policy Learning with Limited Supply

arXiv cs.LG / 3/20/2026


Key Points

  • The paper analyzes off-policy learning in contextual bandits under limited supply, showing that greedy methods can deplete items early and become suboptimal.
  • It provides theoretical results showing that, in constrained settings, policies outperforming unconstrained greedy approaches must exist, so greedy selection cannot guarantee optimality.
  • It introduces Off-Policy learning with Limited Supply (OPLS), which ranks items by each user's relative advantage over other users, improving allocation efficiency.
  • Empirical experiments on synthetic and real-world datasets demonstrate that OPLS outperforms standard OPL methods in limited-supply scenarios.

Abstract

We study off-policy learning (OPL) in contextual bandits, which plays a key role in a wide range of real-world applications such as recommendation systems and online advertising. Typical OPL in contextual bandits assumes an unconstrained environment where a policy can select the same item infinitely often. However, in many practical applications, including coupon allocation and e-commerce, limited supply constrains items through budget limits on distributed coupons or inventory restrictions on products. In these settings, greedily selecting the item with the highest expected reward for the current user may lead to early depletion of that item, making it unavailable for future users who could potentially generate higher expected rewards. As a result, OPL methods that are optimal in unconstrained settings may become suboptimal in limited-supply settings. To address this issue, we provide a theoretical analysis showing that conventional greedy OPL approaches may fail to maximize policy performance, and demonstrate that policies with superior performance must exist in limited-supply settings. Based on this insight, we introduce a novel method called Off-Policy learning with Limited Supply (OPLS). Rather than simply selecting the item with the highest expected reward, OPLS focuses on items with relatively higher expected rewards compared to other users, enabling more efficient allocation of items with limited supply. Our empirical results on both synthetic and real-world datasets show that OPLS outperforms existing OPL methods in contextual bandit problems with limited supply.
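The depletion failure the abstract describes can be made concrete with a toy example. The sketch below compares greedy allocation against a relative-advantage score in the spirit of OPLS; the specific scoring rule (subtracting each item's mean predicted reward across users) and all the numbers are hypothetical stand-ins, not the paper's actual algorithm.

```python
# expected_reward[u][i]: estimated reward for user u on item i (made-up numbers).
expected_reward = [
    [0.6, 0.5],  # user 0: mild preference for the scarce item
    [0.9, 0.2],  # user 1: strong preference for the scarce item
    [0.8, 0.1],  # user 2: strong preference for the scarce item
]
supply = [1, 2]  # item 0 is scarce: one unit for three users


def allocate(scores, supply):
    """Sequentially give each user the in-stock item with the highest score."""
    stock = list(supply)
    total = 0.0
    for u, row in enumerate(scores):
        best = max((i for i in range(len(stock)) if stock[i] > 0),
                   key=lambda i: row[i])
        stock[best] -= 1
        total += expected_reward[u][best]
    return total


# Greedy: rank by raw expected reward. User 0 takes scarce item 0 (0.6),
# leaving users 1 and 2 with item 1: total 0.6 + 0.2 + 0.1 = 0.9.
greedy_total = allocate(expected_reward, supply)

# Relative advantage (hypothetical OPLS-style score): subtract each item's
# average reward across users, so a user only claims a scarce item when they
# value it more than others do.
n_items = len(supply)
item_mean = [sum(row[i] for row in expected_reward) / len(expected_reward)
             for i in range(n_items)]
advantage = [[row[i] - item_mean[i] for i in range(n_items)]
             for row in expected_reward]

# Now user 0 has a positive advantage only on item 1, so item 0 is saved for
# user 1: total 0.5 + 0.9 + 0.1 = 1.5.
opls_like_total = allocate(advantage, supply)
print(round(greedy_total, 2), round(opls_like_total, 2))
```

Even in this three-user example the advantage-based ranking recovers a strictly higher total reward, illustrating why a better-performing policy must exist once supply is constrained.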