Bayesian Optimization for Function-Valued Responses under Min-Max Criteria

arXiv stat.ML · April 28, 2026


Key Points

  • The paper notes that Bayesian optimization often targets scalar outputs, but many scientific/engineering problems produce smooth function-valued responses over an index like time or wavelength, making standard approaches insufficient.
  • It proposes Min-Max Functional Bayesian Optimization (MM-FBO), which directly minimizes the worst-case error over the entire functional domain rather than average (integrated) performance.
  • MM-FBO models functional responses using functional principal component analysis and fits Gaussian-process surrogates to the principal component scores, then employs an uncertainty-aware acquisition function that trades off worst-case exploitation and domain exploration.
  • The authors provide theoretical results including a discretization bound for the min-max objective and a consistency guarantee that the acquisition converges to the true min-max target as the surrogate improves.
  • Experiments on synthetic benchmarks and physics-inspired case studies of electromagnetic scattering and vapor phase infiltration show that MM-FBO outperforms baseline methods and underscore the value of explicitly modeling functional uncertainty.
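The surrogate construction described above — represent curves with functional principal component analysis, then fit one Gaussian process per component score — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy data, the SVD-based FPCA, the choice of three components, and the independent-score variance approximation are all assumptions made here for clarity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy data: each scalar design x yields a curve y(t) on a grid of index
# values t (e.g. time or wavelength). Shapes and names are illustrative.
t_grid = np.linspace(0.0, 1.0, 50)
X_train = rng.uniform(0.0, 1.0, (20, 1))                   # design points
Y_train = np.sin(2 * np.pi * (t_grid[None, :] + X_train))  # curves, (20, 50)

# FPCA via SVD of the centered curves; keep the leading components.
mean_curve = Y_train.mean(axis=0)
_, _, Vt = np.linalg.svd(Y_train - mean_curve, full_matrices=False)
k = 3                                               # retained components
components = Vt[:k]                                 # PC functions, (k, 50)
scores = (Y_train - mean_curve) @ components.T      # PC scores, (20, k)

# One GP surrogate per principal component score.
gps = [
    GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    .fit(X_train, scores[:, j])
    for j in range(k)
]

def predict_curve(x):
    """Reconstruct the predicted curve and a pointwise std from score GPs."""
    x = np.atleast_2d(x)
    mu_s = np.array([gp.predict(x)[0] for gp in gps])
    sd_s = np.array([gp.predict(x, return_std=True)[1][0] for gp in gps])
    mean = mean_curve + mu_s @ components
    # Independent-score approximation for the pointwise predictive variance.
    std = np.sqrt((sd_s[:, None] ** 2 * components ** 2).sum(axis=0))
    return mean, std

mean, std = predict_curve(0.5)
print(mean.shape, std.shape)  # → (50,) (50,)
```

Working in score space keeps the GPs low-dimensional while still yielding a full predictive distribution over the curve, which is what a worst-case acquisition needs.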

Abstract

Bayesian optimization is widely used for optimizing expensive black-box functions, but most existing approaches focus on scalar responses. In many scientific and engineering settings the response is functional, varying smoothly over an index such as time or wavelength, which makes classical formulations inadequate. Existing methods often minimize integrated error, which captures average performance but neglects worst-case deviations. To address this limitation we propose Min-Max Functional Bayesian Optimization (MM-FBO), a framework that directly minimizes the maximum error across the functional domain. Functional responses are represented using functional principal component analysis, and Gaussian process surrogates are constructed for the principal component scores. Building on this representation, MM-FBO introduces an integrated uncertainty acquisition function that balances exploitation of the worst-case expected error with exploration across the functional domain. We provide two theoretical guarantees: a discretization bound for the worst-case objective, and a consistency result showing that as the surrogate becomes accurate and uncertainty vanishes, the acquisition converges to the true min-max objective. We validate the method through experiments on synthetic benchmarks and physics-inspired case studies involving electromagnetic scattering by metaphotonic devices and vapor phase infiltration. Results show that MM-FBO consistently outperforms existing baselines and highlight the importance of explicitly modeling functional uncertainty in Bayesian optimization.
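The abstract's acquisition idea — minimize a worst-case expected error over a discretized functional domain while rewarding functional uncertainty — can be illustrated with a small sketch. The `surrogate` interface, the grid, and the additive exploration weight `kappa` are assumptions made here; the paper's exact acquisition form may differ.

```python
import numpy as np

def min_max_acquisition(surrogate, candidates, kappa=1.0):
    """Pick the candidate minimizing worst-case expected error minus an
    uncertainty bonus. `surrogate(x)` is assumed to return pointwise
    (error mean, error std) arrays over the discretized functional domain."""
    best_x, best_val = None, np.inf
    for x in candidates:
        err_mean, err_std = surrogate(x)
        # Exploit the worst-case mean error; the uncertainty term favors
        # designs whose worst-case behavior is still poorly resolved.
        val = np.max(err_mean) - kappa * np.mean(err_std)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy surrogate whose true worst-case error is minimized at x = 0.3.
t = np.linspace(0.0, 1.0, 25)
def toy_surrogate(x):
    err = np.abs(x - 0.3) * (1.0 + t)     # error grows away from 0.3
    return err, 0.05 * np.ones_like(t)

x_next, val = min_max_acquisition(toy_surrogate, np.linspace(0, 1, 101),
                                  kappa=0.0)
print(round(x_next, 2))  # → 0.3
```

With `kappa = 0` the rule is pure worst-case exploitation; increasing `kappa` trades that off against exploration, mirroring the exploitation/exploration balance the abstract describes.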