DORA Explorer: Improving the Exploration Ability of LLMs Without Training

arXiv cs.CL / 4/21/2026


Key Points

  • The paper finds that current LLM agent decoding and prompting approaches (including temperature-based sampling and prompting styles like Chain-of-Thought/Tree-of-Thought) do not provide enough diversity at the sequence/action level, leading to poor exploration and getting stuck in loops.
  • It analyzes LLM exploration through classic Multi-Armed Bandit (MAB) and the Text Adventure Learning Environment Suite (TALES), showing systematic shortcomings of existing strategies for robust exploration.
  • It proposes DORA Explorer, a training-free framework (Diversity-Oriented Ranking of Actions) that generates diverse action candidates, scores them with token log-probabilities, and selects actions using a tunable exploration parameter.
  • Experiments indicate DORA reaches UCB-competitive performance on MAB and delivers consistent gains on TALES, such as boosting Qwen2.5-7B in TextWorld from 29.2% to 45.5%.
  • The authors provide a public project page with the proposed method for reuse and verification: https://dora-explore.github.io/.
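The generate-score-select pipeline in the key points can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes candidates arrive as (action, summed token log-probability) pairs from diverse LLM samples, and models the "tunable exploration parameter" as a weight `alpha` that blends a softmax over length-normalized log-probs with a uniform distribution over candidates.

```python
import math
import random

def dora_style_distribution(candidates, alpha=0.5):
    """Selection distribution over (action, total_logprob) candidates.

    alpha in [0, 1]: 0 = purely exploit model likelihood,
    1 = uniform over the diverse candidate set (maximum exploration).
    """
    # Length-normalize log-probs so longer actions are not penalized.
    scores = [lp / max(len(a.split()), 1) for a, lp in candidates]
    # Numerically stable softmax gives the exploitation distribution.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    exploit = [e / z for e in exps]
    # Blend with uniform via the exploration parameter.
    n = len(candidates)
    return [(1 - alpha) * p + alpha / n for p in exploit]

def dora_style_select(candidates, alpha=0.5, rng=None):
    """Sample one action from the blended distribution."""
    rng = rng or random.Random(0)
    probs = dora_style_distribution(candidates, alpha)
    return rng.choices([a for a, _ in candidates], weights=probs, k=1)[0]
```

With `alpha=0` the agent behaves like greedy-by-likelihood sampling; raising `alpha` flattens the distribution, which is one simple way to trade exploitation for the sequence-level diversity the paper argues temperature scaling alone fails to deliver.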

Abstract

Despite rapid progress, LLMs for sequential decision-making (i.e., LLM agents) still struggle to produce diverse outputs. This leads to insufficient exploration, convergence to sub-optimal solutions, and agents that get stuck in loops. Such limitations can be problematic in environments that require active exploration to gather information and make decisions. Sampling methods such as temperature scaling introduce token-level randomness but fail to produce enough diversity at the sequence level. We analyze LLM exploration in the classic Multi-Armed Bandit (MAB) setting and the Text Adventure Learning Environment Suite (TALES). We find that current decoding strategies and prompting methods like Chain-of-Thought and Tree-of-Thought are insufficient for robust exploration. To address this, we introduce DORA Explorer (Diversity-Oriented Ranking of Actions), a training-free framework for improving exploration in LLM agents. DORA generates diverse action candidates, scores them using token log-probabilities, and selects actions using a tunable exploration parameter. DORA achieves UCB-competitive performance on MAB and consistent gains across TALES, e.g., improving Qwen2.5-7B's performance from 29.2% to 45.5% in TextWorld. Our project is available at: https://dora-explore.github.io/.
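The "UCB-competitive" claim refers to the classic UCB1 bandit algorithm, the standard baseline in the MAB setting the abstract describes. A minimal reference implementation of that baseline (not the paper's code) looks like this; `pull` is an assumed reward callback returning values in [0, 1], and `c` scales the confidence bonus:

```python
import math

def ucb1(pull, n_arms, horizon, c=2.0):
    """UCB1 baseline: play each arm once, then pick the arm maximizing
    empirical mean + sqrt(c * ln(t) / pulls), the optimism bonus that
    drives systematic exploration of under-sampled arms."""
    counts = [0] * n_arms      # pulls per arm
    sums = [0.0] * n_arms      # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: try every arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(c * math.log(t) / counts[a]),
            )
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts
```

The confidence term shrinks as an arm accumulates pulls, so exploration tapers off automatically; matching this behavior without any learned value estimates is what makes a training-free LLM method like DORA notable on MAB.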