A Finite Time Analysis of Thompson Sampling for Bayesian Optimization with Preferential Feedback

arXiv cs.LG / 4/29/2026


Key Points

  • The paper introduces a Thompson Sampling-based method for Bayesian optimization when feedback arrives as pairwise preference comparisons instead of scalar scores.
  • It models pairwise comparisons using a monotone link over latent utility differences and builds on a dueling kernel derived from a base kernel.
  • The authors prove a finite-time performance guarantee, showing that the proposed preferential-feedback method can achieve performance comparable to standard Thompson Sampling for scalar-feedback Bayesian optimization.
  • The analysis uses properties like anchor invariance for challenger selection and proposes a double-TS pairing variant, with empirical validation on both synthetic and real-world problems.
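
To make the modeling ingredients concrete, here is a minimal sketch of the dueling kernel induced by a base kernel, together with a link function. The RBF base kernel and the logistic (Bradley-Terry-style) link are illustrative assumptions only; the paper requires just a monotone link over latent utility differences.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    # Base kernel k on the input space (squared-exponential, chosen for illustration).
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2.0 * ell**2))

def dueling_kernel(pair_a, pair_b, k=rbf):
    # Kernel induced on duels (x, x'): the covariance of the latent utility
    # difference g(x, x') = f(x) - f(x') when f ~ GP(0, k).
    (x, xp), (y, yp) = pair_a, pair_b
    return k(x, y) - k(x, yp) - k(xp, y) + k(xp, yp)

def link(delta):
    # Monotone link mapping a utility difference f(x) - f(x') to the
    # probability that x beats x'. A logistic choice is one example;
    # any strictly increasing link fits the paper's setup.
    return 1.0 / (1.0 + np.exp(-delta))
```

Note the skew-symmetry this construction inherits: swapping the order within one pair flips the sign of the dueling kernel, matching the antisymmetry of utility differences.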

Abstract

Preference feedback, in the form of pairwise comparisons rather than scalar scores, has seen increasing use in applications such as human-, laboratory-, and expert-in-the-loop design, as well as scientific discovery. We propose a Thompson Sampling (TS) approach to Bayesian optimization with preferential feedback that models comparisons using a monotone link on latent utility differences and leverages the dueling kernel induced by a base kernel. We provide a finite-time analysis showing that the performance of the proposed method matches that of standard TS for conventional Bayesian optimization with scalar feedback. The analysis exploits the anchor invariance of TS for challenger selection and introduces a double-TS pairing variant. We also demonstrate the performance of the method on both synthetic and real-world examples.
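
The double-TS pairing idea mentioned above can be sketched as follows. This is a simplified illustration that assumes a finite candidate set and a Gaussian approximation `N(mu, cov)` of the posterior over latent utilities; the names `mu`, `cov`, and `double_ts_pair` are hypothetical and not from the paper, which works with the preference likelihood directly.

```python
import numpy as np

def double_ts_pair(mu, cov, rng):
    # Double-TS pairing on a finite candidate set: draw two independent
    # posterior samples of the latent utility and duel the argmax of the
    # first sample against the argmax of the second.
    f1 = rng.multivariate_normal(mu, cov)
    f2 = rng.multivariate_normal(mu, cov)
    return int(np.argmax(f1)), int(np.argmax(f2))

# Usage: pick the next pairwise comparison from 5 candidates under a
# (toy) posterior with zero mean and identity covariance.
rng = np.random.default_rng(0)
i, j = double_ts_pair(np.zeros(5), np.eye(5), rng)
```

Sampling both arms of the duel independently from the same posterior is what makes the pairing symmetric, which is the property the anchor-invariance argument in the analysis exploits.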