Globalized Adversarial Regret Optimization: Robust Decisions with Uncalibrated Predictions

arXiv cs.LG · March 30, 2026


Key Points

  • The paper argues that classical robust and regret optimization approaches break down when prediction errors are uncalibrated: robust models often yield vacuous guarantees, while regret formulations can produce decisions even more optimistic than the nominal solution.
  • It proposes Globalized Adversarial Regret Optimization (GARO), which defines and controls adversarial regret as a worst-case-to-oracle performance gap uniformly over uncertainty set sizes.
  • GARO is designed to provide absolute or relative performance guarantees versus an oracle with full knowledge of prediction error, without requiring probabilistic calibration of the uncertainty sets.
  • The authors show GARO with a relative rate function generalizes Lepski’s adaptation method, and they derive exact tractable reformulations for affine worst-case costs with polyhedral norm uncertainty sets.
  • For more general settings, the paper presents a discretization scheme and a constraint-generation algorithm with convergence guarantees, supported by experiments showing an improved trade-off between worst-case and mean out-of-sample performance.
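To make the adversarial-regret idea in the second bullet concrete, here is a minimal numerical sketch under illustrative assumptions of our own (a linear cost c·x, an ℓ∞ ball of radius r around the point prediction, a hand-picked candidate decision set, and a discretized range of radii); the names `worst_case`, `oracle`, and `adversarial_regret` are ours, not the paper's:

```python
# Toy setup (not the paper's formulation): linear cost c @ x with
# uncertainty set U(r) = { c : ||c - c_hat||_inf <= r }.  For x >= 0 the
# worst-case cost has the closed form c_hat @ x + r * sum(|x_i|), since
# the dual norm of l_inf is l_1.

C_HAT = [1.0, 1.2, 2.0]            # point prediction of the cost vector
CANDIDATES = [                     # a few candidate decisions (x >= 0)
    [1.0, 0.0, 0.0],               # cheap nominally, exposed to error
    [0.0, 0.0, 0.7],               # conservative, low exposure
    [0.5, 0.0, 0.3],               # a hedge between the two
]
RADII = [0.0, 1.0, 2.0, 4.0]       # discretized uncertainty-set sizes

def worst_case(x, r):
    """Worst-case cost of x over the l_inf ball of radius r."""
    nominal = sum(c * xi for c, xi in zip(C_HAT, x))
    return nominal + r * sum(abs(xi) for xi in x)

def oracle(r):
    """Best achievable worst-case cost if the error level r were known."""
    return min(worst_case(x, r) for x in CANDIDATES)

def adversarial_regret(x):
    """Largest worst-case-to-oracle gap, uniformly over the radii."""
    return max(worst_case(x, r) - oracle(r) for r in RADII)

# GARO-style choice: minimize adversarial regret uniformly over all radii,
# rather than committing to a single calibrated radius up front.
x_garo = min(CANDIDATES, key=adversarial_regret)
```

On this toy instance the hedging candidate attains the smallest uniform regret, which is the point of the framework: no single-radius robust solution does well against the oracle at every possible error level.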

Abstract

Optimization problems routinely depend on uncertain parameters that must be predicted before a decision is made. Classical robust and regret formulations are designed to handle erroneous predictions and can provide statistical error bounds in simple settings. However, when predictions lack rigorous error bounds (as is typical of modern machine learning methods), classical robust models often yield vacuous guarantees, while regret formulations can paradoxically produce decisions that are more optimistic than even a nominal solution. We introduce Globalized Adversarial Regret Optimization (GARO), a decision framework that controls adversarial regret, defined as the gap between the worst-case cost and the oracle robust cost, uniformly across all possible uncertainty set sizes. By design, GARO delivers absolute or relative performance guarantees against an oracle with full knowledge of the prediction error, without requiring any probabilistic calibration of the uncertainty set. We show that GARO equipped with a relative rate function generalizes the classical adaptation method of Lepski to downstream decision problems. We derive exact tractable reformulations for problems with affine worst-case cost functions and polyhedral norm uncertainty sets, and provide a discretization and a constraint-generation algorithm with convergence guarantees for general settings. Finally, experiments demonstrate that GARO yields solutions with a more favorable trade-off between worst-case and mean out-of-sample performance, as well as stronger global performance guarantees.
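The constraint-generation idea mentioned in the abstract can be sketched as a cutting-plane loop over uncertainty-set sizes. The following is our own illustrative reading on a toy linear-cost instance, not the paper's actual algorithm: a fine grid stands in for the continuum of radii, a small working set of radii grows one violated radius at a time, and all names and numbers are hypothetical.

```python
# Hypothetical constraint-generation sketch over uncertainty-set sizes.
# Setup as before: linear cost c @ x, l_inf ball of radius r around the
# prediction, so worst_case(x, r) = c_hat @ x + r * sum(|x_i|) for x >= 0.

C_HAT = [1.0, 1.2, 2.0]
CANDIDATES = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.7], [0.5, 0.0, 0.3]]
GRID = [i / 100 for i in range(401)]   # proxy for the continuum r in [0, 4]

def worst_case(x, r):
    nominal = sum(c * xi for c, xi in zip(C_HAT, x))
    return nominal + r * sum(abs(xi) for xi in x)

def oracle(r):
    return min(worst_case(x, r) for x in CANDIDATES)

def regret(x, r):
    return worst_case(x, r) - oracle(r)

def solve_by_constraint_generation(tol=1e-6):
    """Grow a working set of radii until no radius on the grid is violated."""
    active = [0.0]                      # start from a single radius
    while True:
        # Master problem: best decision against the current working set.
        x = min(CANDIDATES,
                key=lambda cand: max(regret(cand, r) for r in active))
        master_val = max(regret(x, r) for r in active)
        # Separation: find the most violated radius on the fine grid.
        r_new = max(GRID, key=lambda r: regret(x, r))
        if regret(x, r_new) <= master_val + tol:
            return x, master_val        # no violation: done
        active.append(r_new)            # add the violated radius and repeat
```

On this instance the loop terminates after adding a single extra radius, returning the same hedging decision as a direct enumeration over all radii, which is the usual appeal of constraint generation: only a few "active" set sizes are ever needed.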