Adaptive Candidate Point Thompson Sampling for High-Dimensional Bayesian Optimization
arXiv cs.LG / 4/13/2026
Key Points
- The paper addresses a key limitation of Thompson sampling in Bayesian optimization with Gaussian process surrogates: exactly sampling the posterior maximizer is intractable, and discretized candidate sets become exponentially sparse as the dimension grows.
- It proposes Adaptive Candidate Thompson Sampling (ACTS), which increases effective candidate density by adaptively shrinking the sampling search space instead of relying solely on denser discretizations or scalable GP approximations.
- ACTS generates candidate points in lower-dimensional subspaces, using the gradient of a sampled surrogate function to guide where to search.
- The method is presented as a simple drop-in replacement for existing Thompson sampling variants (including trust-region/local-approximation approaches) while yielding better maximizer samples and improved optimization results.
- Experiments on both synthetic and real-world benchmarks indicate ACTS improves optimization performance over prior Thompson sampling strategies.
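The paper's exact algorithm is not spelled out in this summary, but the general pattern it builds on can be illustrated: draw a joint sample from the GP posterior at a finite candidate set, take its argmax as an approximate maximizer sample, and adaptively shrink the region from which candidates are drawn. The sketch below is a minimal, hedged illustration of that pattern using a plain numpy GP; the kernel, lengthscale, shrink factor, and helper names (`posterior_sample`, `shrink_box`) are illustrative assumptions, not the ACTS implementation (which additionally uses gradients of the sampled surrogate to pick low-dimensional subspaces).

```python
import numpy as np

def rbf_kernel(A, B, ls=0.2):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_sample(X, y, Xc, noise=1e-6, rng=None):
    # Draw one joint sample of the GP posterior at candidate points Xc,
    # given observations (X, y). Standard exact-GP algebra via Cholesky.
    rng = np.random.default_rng(rng)
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xc, X)
    Kss = rbf_kernel(Xc, Xc)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v + 1e-6 * np.eye(len(Xc))  # jitter for stability
    return mu + np.linalg.cholesky(cov) @ rng.standard_normal(len(Xc))

def shrink_box(lo, hi, center, factor=0.5):
    # Halve the box side lengths around the current best point, clipped to
    # the original bounds -- a crude stand-in for adaptively shrinking the
    # candidate search space so a fixed candidate budget covers it densely.
    half = factor * (hi - lo) / 2
    return np.clip(center - half, lo, hi), np.clip(center + half, lo, hi)
```

A Thompson sampling step would then draw candidates uniformly from the current (shrunken) box, call `posterior_sample`, evaluate the objective at the argmax, and shrink the box around the incumbent before the next iteration. The actual subspace-based candidate generation in ACTS replaces the uniform box sampling used here.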