Adaptive multi-fidelity optimization with fast learning rates

arXiv stat.ML / 4/20/2026

Key Points

  • The paper addresses multi-fidelity optimization, where the learner works with biased approximations of the target function available at varying costs, and must trade off evaluation cost against approximation bias under a limited budget.
  • It derives theoretical lower bounds on the simple regret under several assumptions about the fidelity structure, expressed via a cost-to-bias relationship (written out in the sketch after this list).
  • The authors introduce the Kometo algorithm, which matches the optimal regret rates up to additional logarithmic factors, without requiring prior knowledge of the function's smoothness or of the fidelity assumptions.
  • The study reports empirical results showing Kometo outperforms prior multi-fidelity optimization methods, particularly when problem-specific parameters are unknown.
  • Overall, the contribution combines rigorous regret-rate theory with a practical algorithm design that is robust to missing smoothness/fidelity information.
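
To pin down the quantities in the bullets above, here is one way to write them; the notation is an assumption made for illustration (it follows common multi-fidelity conventions and is not necessarily the paper's own symbols).

```latex
% Simple regret of the recommended point \hat{x} relative to a maximizer x^\star,
% after the evaluation budget \Lambda has been spent:
r(\Lambda) = f(x^\star) - f(\hat{x})

% A query at fidelity cost \lambda returns a biased approximation f_\lambda,
% with the bias controlled by the cost-to-bias function \zeta:
|f_\lambda(x) - f(x)| \le \zeta(\lambda) \quad \text{for all } x

% The total cost of the queried fidelities must respect the budget:
\sum_{t} \lambda_t \le \Lambda
```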

Abstract

In multi-fidelity optimization, biased approximations of the target function are available at varying costs. This paper studies the problem of optimizing a locally smooth function with a limited budget, where the learner must trade off the cost of these approximations against their bias. We first prove lower bounds on the simple regret under different assumptions on the fidelities, stated in terms of a cost-to-bias function. We then present the Kometo algorithm, which achieves the same rates up to additional logarithmic factors, without any knowledge of the function smoothness or the fidelity assumptions, and which improves previously proven guarantees. We finally show empirically that our algorithm outperforms previous multi-fidelity optimization methods without knowledge of problem-dependent parameters.
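
To make the cost-bias tradeoff concrete, below is a minimal Python sketch of a budgeted coarse-to-fine search over fidelities. Everything in it is an assumption chosen for illustration: the target f, the bias model ζ(λ) = 0.5/√λ, the cost grid, and the quarter-keep schedule are invented, and the procedure is not the Kometo algorithm, whose internals the abstract does not specify.

```python
import numpy as np

# Illustrative target on [0, 1]; its maximizer is x* = 0.3.
def f(x):
    return -(x - 0.3) ** 2

# Biased approximation at cost lam: the bias magnitude is bounded by an
# assumed cost-to-bias function zeta(lam) = 0.5 / sqrt(lam).
def f_fid(x, lam):
    return f(x) + (0.5 / np.sqrt(lam)) * np.sin(10.0 * x)

def coarse_to_fine(budget, costs=(1.0, 4.0, 16.0), n_grid=64):
    """Spend cheap evaluations broadly, then re-rank the survivors at
    progressively costlier (less biased) fidelities until the next
    pass would exceed the budget."""
    candidates = np.linspace(0.0, 1.0, n_grid)
    spent, best = 0.0, candidates[0]
    for lam in costs:
        if spent + lam * len(candidates) > budget:
            break  # the next fidelity pass no longer fits in the budget
        scores = np.array([f_fid(x, lam) for x in candidates])
        spent += lam * len(candidates)
        order = np.argsort(scores)
        best = candidates[order[-1]]          # current recommendation
        keep = max(1, len(candidates) // 4)   # survivors for the next pass
        candidates = candidates[order[-keep:]]
    return best, spent

x_hat, spent = coarse_to_fine(budget=200.0)
print(f"x_hat = {x_hat:.3f}  simple regret = {f(0.3) - f(x_hat):.5f}  cost = {spent:.0f}")
```

The only point of the sketch is the budget accounting: cheap, heavily biased passes narrow the candidate set so that the few expensive, low-bias evaluations are spent where they matter.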