Lipschitz Dueling Bandits over Continuous Action Spaces

arXiv cs.LG / 4/2/2026


Key Points

  • The paper studies stochastic dueling bandits over continuous action spaces under a Lipschitz structure with purely comparative feedback, a combination not previously explored.
  • It introduces the first round-based algorithm for Lipschitz dueling bandits that uses exploration and recursive elimination of regions guided by an adaptive reference arm.
  • The authors develop new analysis tools tailored to relative (dueling) feedback and prove a regret bound of \(\tilde O\left(T^{\frac{d_z+1}{d_z+2}}\right)\), depending on the zooming dimension of the near-optimal region.
  • The method also achieves logarithmic space complexity in the time horizon, which the authors claim is optimal among bandit algorithms for continuous action spaces.
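To make the bound concrete: the exponent \((d_z+1)/(d_z+2)\) is sublinear in \(T\) for any finite zooming dimension, recovering the familiar \(T^{1/2}\) rate when \(d_z = 0\) and approaching linear regret as \(d_z\) grows. A quick check of the arithmetic:

```python
def regret_exponent(d_z: float) -> float:
    """Exponent of T in the paper's regret bound ~O(T^((d_z+1)/(d_z+2)))."""
    return (d_z + 1) / (d_z + 2)

# d_z = 0 gives the T^(1/2) rate; larger d_z degrades toward linear regret.
for d_z in (0, 1, 2, 10):
    print(d_z, regret_exponent(d_z))
```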

Abstract

We study, for the first time, stochastic dueling bandits over continuous action spaces with Lipschitz structure, where feedback is purely comparative. While dueling bandits and Lipschitz bandits have been studied separately, their combination has remained unexplored. We propose the first algorithm for Lipschitz dueling bandits, using round-based exploration and recursive region elimination guided by an adaptive reference arm. We develop new analytical tools for relative feedback and prove a regret bound of \(\tilde O\left(T^{\frac{d_z+1}{d_z+2}}\right)\), where \(d_z\) is the zooming dimension of the near-optimal region. Further, our algorithm requires only logarithmic space in the total time horizon, the best achievable by any bandit algorithm over a continuous action space.
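The general shape of round-based elimination with an adaptive reference arm can be illustrated on the unit interval. The sketch below is a generic construction under assumed ingredients, not the paper's algorithm: a hypothetical linear preference model \(p(x \succ y) = 1/2 + (f(x)-f(y))/2\) for a Lipschitz utility `f`, a crude confidence radius, and halving of surviving regions each round.

```python
import random

def duel(x, y, f, rng):
    """Simulated comparison: True if x beats y.
    Assumed preference model p(x > y) = 1/2 + (f(x) - f(y)) / 2."""
    return rng.random() < 0.5 + (f(x) - f(y)) / 2.0

def lipschitz_dueling_sketch(f, horizon, rng, n_rounds=6):
    """Round-based exploration with recursive region elimination on [0, 1],
    guided by an adaptive reference arm (a generic sketch, not the paper's
    exact procedure)."""
    regions = [(0.0, 1.0)]
    ref = 0.5  # reference arm: start at the midpoint
    budget_per_round = horizon // n_rounds
    for _ in range(n_rounds):
        centers = [(a + b) / 2 for (a, b) in regions]
        m = max(1, budget_per_round // len(centers))
        # empirical win rate of each region's center against the reference arm
        wins = [sum(duel(c, ref, f, rng) for _ in range(m)) / m for c in centers]
        eps = (2.0 / m) ** 0.5  # crude confidence radius, shrinking with m
        best = max(wins)
        survivors = [reg for reg, w in zip(regions, wins) if w >= best - eps]
        # adapt the reference arm to the empirically best center
        ref = centers[wins.index(best)]
        # refine: split each surviving region in half for the next round
        regions = [half for (a, b) in survivors
                   for half in ((a, (a + b) / 2), ((a + b) / 2, b))]
    return ref

rng = random.Random(0)
f = lambda x: 1.0 - abs(x - 0.7)  # Lipschitz utility, maximized at 0.7
est = lipschitz_dueling_sketch(f, horizon=60_000, rng=rng)
print(est)
```

Because feedback is only relative, the sketch never observes `f` directly; regions are ranked solely through noisy duels against the current reference arm, which is the structural point the abstract highlights.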