Mixture-Model Preference Learning for Many-Objective Bayesian Optimization

arXiv stat.ML / 3/31/2026


Key Points

  • The paper addresses preference-based many-objective Bayesian optimization challenges caused by a growing trade-off space and human value structures that vary by context.
  • It introduces a Bayesian framework that learns a small set of latent “preference archetypes” using a Dirichlet-process mixture, capturing uncertainty over both which archetypes apply and how strongly they are weighted.
  • For efficient optimization queries, it proposes hybrid query strategies that separately target (i) identifying the most relevant mode/archetype and (ii) resolving trade-offs within that mode.
  • The authors provide a regret guarantee under mild assumptions for their mixture-aware Bayesian optimization procedure.
  • Experiments on synthetic and real-world benchmarks show improved performance over standard baselines, and diagnostic tools uncover structure that regret metrics alone miss.
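The archetype-mixture idea in the points above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a truncated stick-breaking representation of the Dirichlet-process prior and linear utilities over objectives, and all function names (`stick_breaking_weights`, `sample_archetype_utilities`) are hypothetical.

```python
import numpy as np

def stick_breaking_weights(alpha, n_components, rng):
    """Truncated stick-breaking draw of mixture weights under a DP prior."""
    betas = rng.beta(1.0, alpha, size=n_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining  # pi_k = beta_k * prod_{j<k} (1 - beta_j)

def sample_archetype_utilities(objectives, alpha=1.0, n_components=5, seed=0):
    """Sample latent preference archetypes (one weight vector per component)
    and score candidate objective vectors under the mixture.

    objectives: (n, m) array of n candidates with m objective values.
    Returns archetype weights pi (K,), archetype utility vectors W (K, m),
    and the mixture-averaged utility of each candidate (n,).
    """
    rng = np.random.default_rng(seed)
    pi = stick_breaking_weights(alpha, n_components, rng)
    m = objectives.shape[1]
    # Each archetype: a linear utility over the m objectives (Dirichlet draw).
    W = rng.dirichlet(np.ones(m), size=n_components)   # (K, m)
    utilities = objectives @ W.T                        # (n, K)
    expected = utilities @ pi                           # average over archetypes
    return pi, W, expected
```

A full treatment would place Gaussian-process or Bradley-Terry likelihoods over pairwise preference feedback and infer the posterior over both component assignments and weights; the sketch only shows the prior's shape.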

Abstract

Preference-based many-objective optimization faces two obstacles: an expanding space of trade-offs and heterogeneous, context-dependent human value structures. To address both, we propose a Bayesian framework that learns a small set of latent preference archetypes rather than assuming a single fixed utility function, modelling them as components of a Dirichlet-process mixture with uncertainty over both the archetypes and their weights. To query efficiently, we design hybrid queries that target information about (i) mode identity and (ii) within-mode trade-offs. Under mild assumptions, we provide a simple regret guarantee for the resulting mixture-aware Bayesian optimization procedure. Empirically, our method outperforms standard baselines on synthetic and real-world many-objective benchmarks, and mixture-aware diagnostics reveal structure that regret alone fails to capture.
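The two query targets in the abstract, mode identity versus within-mode trade-offs, suggest a simple switching rule for choosing pairwise comparisons. The sketch below is an illustrative assumption, not the paper's acquisition function: the entropy threshold, the disagreement score, and the function name `select_hybrid_query` are all hypothetical.

```python
import numpy as np

def select_hybrid_query(candidates, W, pi, mode_threshold=0.5):
    """Pick a pairwise comparison query (i, j) over candidate designs.

    If the archetype posterior pi is still uncertain (entropy above a
    threshold), ask a mode-identification query: the pair on which the
    archetypes disagree most about the winner. Otherwise ask a within-mode
    query: the pair closest to a tie under the most probable archetype.
    """
    utilities = candidates @ W.T               # (n, K): utility per archetype
    n = len(candidates)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    entropy = -np.sum(pi * np.log(pi + 1e-12))
    if entropy > mode_threshold:
        def score(pair):
            i, j = pair
            prefer_i = utilities[i] > utilities[j]   # (K,) winner per archetype
            # Posterior mass of archetypes preferring i; 0.5 = max disagreement.
            mass = pi[prefer_i].sum() / pi.sum()
            return -abs(mass - 0.5)
    else:
        k = int(np.argmax(pi))                       # MAP archetype
        def score(pair):
            i, j = pair
            return -abs(utilities[i, k] - utilities[j, k])
    return max(pairs, key=score)
```

In practice one would replace the hard threshold with an information-theoretic criterion that weighs the expected information gain of each query type; the switch here only conveys the two-target structure.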