Tightening optimality gap with confidence through conformal prediction

arXiv stat.ML / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes a conformal prediction framework to tighten overly loose primal and dual bounds from constrained optimization solvers, improving practical usefulness for decision-making.
  • It incorporates selective inference to handle heteroskedasticity observed in bound quality, aiming to produce more reliable prediction intervals across varying conditions.
  • The method leverages the solvers’ existing certified validity of dual/primal bounds to maintain coverage guarantees while yielding narrower, more informative intervals.
  • Experiments on large-scale industrial optimization problems indicate the approach can achieve the same coverage more efficiently than baseline techniques.

Abstract

Decision makers routinely use constrained optimization technology to plan and operate complex systems like global supply chains or power grids. In this context, practitioners must assess how close a computed solution is to optimality in order to make operational decisions, such as whether the current solution is sufficient or whether additional computation is warranted. A common practice is to evaluate solution quality using dual bounds returned by optimization solvers. While these dual bounds come with certified guarantees, they are often too loose to be practically informative. To address this, the paper introduces a novel conformal prediction framework for tightening loose primal and dual bounds. The proposed method addresses the heteroskedasticity commonly observed in these bounds via selective inference, and further exploits their inherent certified validity to produce tighter, more informative prediction intervals. Finally, numerical experiments on large-scale industrial problems suggest that the proposed approach can provide the same coverage level more efficiently than baseline methods.
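To make the core idea concrete, here is a minimal sketch of how split conformal prediction could tighten a certified dual bound for a minimization problem. This is an illustrative assumption, not the paper's actual method: the function name, the simple suboptimality score, and the synthetic setup are all hypothetical, and the paper's selective-inference treatment of heteroskedasticity is omitted. The key mechanism shown is that the conformal lower bound can be intersected with the solver's certified dual bound, so validity is never lost by tightening.

```python
import numpy as np

def conformal_tightened_lower_bound(primal_cal, opt_cal, primal_new,
                                    dual_new, alpha=0.1):
    """Tighten a certified dual (lower) bound for a minimization problem.

    primal_cal : calibration-set primal objective values (feasible solutions)
    opt_cal    : calibration-set true optimal values
    primal_new : primal value of the new instance
    dual_new   : solver-certified dual bound of the new instance
    alpha      : miscoverage level (interval covers the optimum w.p. >= 1-alpha)
    """
    # Nonconformity score: primal suboptimality observed on calibration data.
    scores = np.asarray(primal_cal) - np.asarray(opt_cal)
    n = len(scores)
    # Finite-sample-corrected quantile used in split conformal prediction.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    qhat = np.sort(scores)[min(k, n) - 1]
    # Conformal lower bound on the unknown optimum, intersected with the
    # certified dual bound -- the intersection keeps the coverage guarantee
    # because the dual bound is valid by construction.
    return max(dual_new, primal_new - qhat)
```

A usage sketch: with 200 calibration instances whose primal gaps are roughly uniform on [0, 1], `qhat` lands near the 0.9 quantile, so a new instance with primal value 5.0 and a loose dual bound 3.0 gets a lower bound around 4.1 instead of 3.0, at the same 90% coverage level.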