
Online Semi-infinite Linear Programming: Efficient Algorithms via Function Approximation

arXiv cs.LG / 3/18/2026


Key Points

  • The work formulates Online Semi-infinite Linear Programming (OSILP) and uses function approximation to compress a potentially infinite constraint set into a fixed number q of constraints.
  • It introduces a dual-based algorithm with regret O(q sqrt(T)) under the stochastic input model and O((q + q log T) sqrt(T)) under the random permutation model; both bounds are independent of the total number of constraints.
  • The authors also propose a two-stage algorithm that, under stronger assumptions, achieves O(q log T + q/epsilon) regret, and they extend the approach to general function settings.
  • Empirical results show the proposed methods outperform existing online LP methods when the constraint set is large or infinite.
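To make the compression idea concrete, here is a minimal NumPy sketch, not the paper's construction: a semi-infinite family a(s)·x ≤ b(s) for all s in [0, 1] is replaced by q surrogate constraints obtained by enforcing the family at q sample points. The functions `a`, `b` and the point `x` are invented purely for illustration.

```python
import numpy as np

# Semi-infinite constraint family: a(s) . x <= b(s) for every s in [0, 1].
# (Illustrative functions only; the paper's setting is more general.)
def a(s):
    return np.array([np.sin(s) + 2.0, np.cos(s) + 2.0])

def b(s):
    return 3.0 + s

# Compress the infinite family to q surrogate constraints by enforcing the
# constraint only at q sample points -- a crude stand-in for the paper's
# function-approximation reduction.
q = 8
knots = np.linspace(0.0, 1.0, q)
A_q = np.stack([a(s) for s in knots])   # (q, 2) surrogate constraint matrix
b_q = np.array([b(s) for s in knots])   # (q,)  surrogate right-hand sides

# A point that satisfies all q surrogate constraints.
x = np.array([0.4, 0.4])
assert np.all(A_q @ x <= b_q)

# Because a(.) and b(.) are smooth, feasibility at the q sample points keeps
# the violation over the whole (finely discretized) index set small.
grid = np.linspace(0.0, 1.0, 2001)
violation = max(float(a(s) @ x - b(s)) for s in grid)
print(f"max violation over fine grid: {violation:.4f}")
```

Here the surrogate already implies feasibility over the full index set; in general the approximation error has to be controlled, which is what the paper's regret analysis accounts for.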

Abstract

We consider the dynamic resource allocation problem where the decision space is finite-dimensional, yet the solution must satisfy a large or even infinite number of constraints revealed via streaming data or oracle feedback. We model this challenge as an Online Semi-infinite Linear Programming (OSILP) problem and develop a novel LP formulation to solve it approximately. Specifically, we employ function approximation to reduce the number of constraints to a constant q. This addresses a key limitation of traditional online LP algorithms, whose regret bounds typically depend on the number of constraints, leading to poor performance in this setting. We propose a dual-based algorithm to solve our new formulation, which offers broad applicability through the selection of appropriate potential functions. We analyze this algorithm under two classical input models, stochastic input and random permutation, establishing regret bounds of O(q\sqrt{T}) and O\left((q+q\log T)\sqrt{T}\right) respectively. Both regret bounds are independent of the number of constraints, which demonstrates the potential of our approach to handle a large or infinite number of constraints. Furthermore, we investigate the potential to improve upon the O(q\sqrt{T}) regret and propose a two-stage algorithm, achieving O(q\log T + q/\epsilon) regret under more stringent assumptions. We also extend our algorithms to the general function setting. A series of experiments validates that our algorithms outperform existing methods when confronted with a large number of constraints.
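The dual-based template behind algorithms of this family can be sketched as follows. This is a generic dual subgradient scheme for an online LP with q (compressed) resource constraints: maintain a dual price per constraint, accept a request only if its reward beats the priced-out resource cost, and update the prices by projected subgradient ascent. The data distribution, budgets, and step size below are made up for illustration; the paper's actual algorithm and its potential functions differ.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 2000                   # horizon
q = 4                      # number of (compressed) constraints
rho = 0.25 * np.ones(q)    # per-round resource budget b / T (assumed)

mu = np.zeros(q)           # dual prices, one per compressed constraint
eta = 1.0 / np.sqrt(T)     # the usual O(1/sqrt(T)) step size

reward = 0.0
consumed = np.zeros(q)
for t in range(T):
    r_t = rng.uniform(0.0, 1.0)           # reward of request t
    a_t = rng.uniform(0.0, 0.5, size=q)   # resource consumption of request t

    # Primal step: accept iff the reward beats the current dual price of
    # the resources the request would consume.
    x_t = 1.0 if r_t > mu @ a_t else 0.0

    # Dual step: projected subgradient ascent on the dual objective,
    # keeping prices nonnegative.
    mu = np.maximum(0.0, mu + eta * (a_t * x_t - rho))

    reward += r_t * x_t
    consumed += a_t * x_t

print(f"total reward: {reward:.1f}")
print(f"avg consumption per round: {consumed / T}")
```

The appeal of this template is that its per-round work and its regret scale with the number of dual variables q, not with the (possibly infinite) number of original constraints, which is exactly the property the abstract highlights.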