Transformers in the Dark: Navigating Unknown Search Spaces via Bandit Feedback

arXiv cs.LG / 3/27/2026


Key Points

  • The paper asks whether Transformers/LLMs can approximate external tree-search algorithms, which would reduce the need for a separate search component in LLM problem solving.
  • It proposes a benchmark framework called “unknown tree search with bandit feedback,” in which tree extensions and feedback signals are externally specified, allowing controlled evaluation of search capabilities (see the sketch after this list).
  • Results indicate that Transformers are theoretically expressive enough to implement distinct search strategies and that models can be trained from scratch to approximate them.
  • The authors show that the trained Transformers can generalize beyond their training conditions, e.g., to longer horizons or deeper trees.
  • They also find that continued task-focused training (fine-tuning on search trajectories) can unlock the full capabilities of a pretrained LLM for search-like behavior.
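To make the setting concrete, here is a minimal sketch of what an “unknown tree search with bandit feedback” environment could look like: the tree is hidden and revealed only as nodes are expanded, and each expansion returns a noisy scalar reward. The class and function names (`UnknownTreeSearchEnv`, `epsilon_greedy_search`) and all parameter choices are illustrative assumptions, not the paper's implementation.

```python
import random


class UnknownTreeSearchEnv:
    """Illustrative environment sketch (assumed, not from the paper):
    the tree is unknown up front and is revealed only as nodes are
    expanded; each expansion returns a noisy scalar reward (bandit
    feedback) rather than a full evaluation of the subtree."""

    def __init__(self, branching=3, depth=4, seed=0):
        self.rng = random.Random(seed)
        self.branching = branching
        self.depth = depth
        self._values = {(): 0.5}   # hidden node values, never observed directly
        self.frontier = [()]       # nodes the searcher may expand next

    def expand(self, node):
        """Reveal a frontier node's children and return bandit feedback."""
        self.frontier.remove(node)
        children = []
        if len(node) < self.depth:
            for i in range(self.branching):
                child = node + (i,)
                # A child's hidden value drifts from its parent's.
                self._values[child] = self._values[node] + self.rng.gauss(0, 0.1)
                self.frontier.append(child)
                children.append(child)
        # Noisy scalar reward for the expanded node only.
        return self._values[node] + self.rng.gauss(0, 0.05), children


def epsilon_greedy_search(env, steps=20, eps=0.2):
    """Toy baseline strategy: usually expand the frontier node whose
    parent scored best so far, occasionally a random one."""
    rewards, best = {}, (None, float("-inf"))
    for _ in range(steps):
        if not env.frontier:
            break
        if env.rng.random() < eps or not rewards:
            node = env.rng.choice(env.frontier)
        else:
            node = max(env.frontier, key=lambda n: rewards.get(n[:-1], 0.0))
        r, _ = env.expand(node)
        rewards[node] = r
        if r > best[1]:
            best = (node, r)
    return best


print(epsilon_greedy_search(UnknownTreeSearchEnv()))
```

A search strategy in this setting is just a policy for choosing which frontier node to expand next from the observed history, which is what the paper trains Transformers to approximate.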

Abstract

Effective problem solving with Large Language Models (LLMs) can be enhanced when they are paired with external search algorithms. By viewing the space of diverse ideas and their follow-up possibilities as a tree structure, the search algorithm can navigate such a search space and guide the LLM toward better solutions more efficiently. While the search algorithm enables an effective balance between exploitation and exploration of a tree-structured space, the need for an external component can complicate the overall problem-solving process. We therefore pose the following question: Can LLMs or their underlying Transformer architectures approximate a search algorithm? To answer this question, we first introduce a simplified framework in which tree extensions and feedback signals are externally specified, allowing for controlled evaluation of search capabilities. We call this setting unknown tree search with bandit feedback. Within this setting, we show that Transformers are theoretically expressive enough to implement distinct search strategies and can be trained from scratch to approximate those strategies. Our Transformer models show the potential to generalize to unseen conditions such as longer horizons or deeper trees. Furthermore, by fine-tuning a pretrained LLM on search trajectories, we demonstrate that continued task-focused training unlocks its full search capabilities.
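The abstract does not specify how search trajectories are presented to the LLM during fine-tuning, so here is one hypothetical serialization: each expansion step is rendered as a line of text, so the model can be trained to predict the next expansion from the history. The format and the `trajectory_to_text` helper are assumptions for illustration only.

```python
def trajectory_to_text(steps):
    """Hypothetical format (assumed): one line per expansion, recording the
    node expanded, the bandit reward observed, and the children revealed."""
    lines = []
    for node, reward, children in steps:
        name = "root" if not node else ".".join(map(str, node))
        kids = ", ".join(".".join(map(str, c)) for c in children) or "none"
        lines.append(f"expand {name} | reward {reward:.2f} | children: {kids}")
    return "\n".join(lines)


# Example: a two-step trajectory (expand the root, then its child 1).
steps = [((), 0.51, [(0,), (1,), (2,)]),
         ((1,), 0.63, [(1, 0), (1, 1), (1, 2)])]
print(trajectory_to_text(steps))
# expand root | reward 0.51 | children: 0, 1, 2
# expand 1 | reward 0.63 | children: 1.0, 1.1, 1.2
```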