Practical Efficient Global Optimization is No-regret

arXiv stat.ML / 3/27/2026


Key Points

  • The paper studies “practical” Efficient Global Optimization (EGO), a noise-free Bayesian optimization method that adds a positive nugget/jitter to the Gaussian process covariance matrix for numerical stability.
  • It presents the first cumulative regret upper bound for practical EGO and proves sublinear (no-regret) cumulative regret for commonly used kernels such as the squared exponential and Matérn kernels with ν>1/2.
  • The authors analyze how the nugget magnitude affects the regret bound, providing theoretical guidance relevant to how practitioners choose jitter.
  • Numerical experiments are included to validate the theoretical regret behavior of practical EGO and the nugget-related analysis.

Abstract

Efficient global optimization (EGO) is one of the most widely used noise-free Bayesian optimization algorithms. It comprises the Gaussian process (GP) surrogate model and expected improvement (EI) acquisition function. In practice, when EGO is applied, a scalar matrix of a small positive value (also called a nugget or jitter) is usually added to the covariance matrix of the deterministic GP to improve numerical stability. We refer to this EGO with a positive nugget as the practical EGO. Despite its wide adoption and empirical success, to date, cumulative regret bounds for practical EGO have yet to be established. In this paper, we present for the first time the cumulative regret upper bound of practical EGO. In particular, we show that practical EGO has sublinear cumulative regret bounds and thus is a no-regret algorithm for commonly used kernels including the squared exponential (SE) and Matérn kernels (ν > 1/2). Moreover, we analyze the effect of the nugget on the regret bound and discuss the theoretical implication on its choice. Numerical experiments are conducted to support and validate our findings.
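To make the setup concrete, here is a minimal sketch of one iteration of "practical" EGO as the abstract describes it: a noise-free GP posterior with a small nugget added to the covariance matrix for numerical stability, followed by maximization of expected improvement over a candidate grid. This is an illustrative toy implementation, not the paper's code; the SE lengthscale, nugget value, grid, and the objective function `f` are all arbitrary choices for demonstration.

```python
import numpy as np
from scipy.stats import norm

def se_kernel(A, B, lengthscale=0.2):
    # Squared exponential (SE) kernel on 1-D inputs, unit signal variance.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, nugget=1e-6):
    # Noise-free GP posterior at test points Xs. The nugget * I term is
    # the jitter that distinguishes "practical" EGO from the idealized one:
    # it keeps the Cholesky factorization numerically stable.
    K = se_kernel(X, X) + nugget * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = se_kernel(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization: E[max(best - f(x), 0)].
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ego_step(X, y, Xs, nugget=1e-6):
    # One EGO iteration: fit the nugget-regularized GP, then pick the
    # candidate maximizing expected improvement over the current best.
    mu, sigma = gp_posterior(X, y, Xs, nugget)
    ei = expected_improvement(mu, sigma, y.min())
    return Xs[np.argmax(ei)]

f = lambda x: np.sin(3 * x) + x ** 2   # toy deterministic objective (assumption)
X = np.array([0.1, 0.5, 0.9])          # points evaluated so far
y = f(X)
Xs = np.linspace(0.0, 1.0, 201)        # candidate grid
x_next = ego_step(X, y, Xs, nugget=1e-6)
```

The nugget magnitude analyzed in the paper corresponds to the `nugget` parameter here: too small and the Cholesky step can fail on ill-conditioned covariance matrices; the paper's contribution is bounding how this choice affects cumulative regret.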