Planning to Explore: Curiosity-Driven Planning for LLM Test Generation

arXiv cs.CL / 4/8/2026


Key Points

  • The paper addresses how LLM-based code test generation can plateau under greedy strategies that prioritize immediate coverage gains but fail to plan for deep branches requiring preparatory setup steps.
  • It proposes a curiosity-driven planning approach, CovQValue, that models the program’s branch structure as an unknown environment and uses an evolving coverage map as a proxy posterior informed by Bayesian exploration principles.
  • CovQValue integrates the coverage map back into the LLM, generates multiple candidate action plans in parallel, and selects plans using LLM-estimated Q-values to balance short-term discovery with longer-term reachability.
  • Experiments on TestGenEval Lite show 51–77% higher branch coverage than greedy selection across three popular LLMs, with CovQValue winning on 77–84% of targets; the paper also introduces RepoExploreBench, a new benchmark for iterative test generation, on which the method achieves 40–74%.
  • The results suggest that sequential, curiosity-driven exploration can more effectively discover program behavior than coverage-only heuristics for LLM-guided test generation.

Abstract

The use of LLMs for code generation has naturally extended to code testing and evaluation. As codebases grow in size and complexity, so does the need for automated test generation. Current approaches for LLM-based test generation rely on strategies that maximize immediate coverage gain, a greedy approach that plateaus on code where reaching deep branches requires setup steps that individually yield zero new coverage. Drawing on principles of Bayesian exploration, we treat the program's branch structure as an unknown environment and an evolving coverage map as a proxy probabilistic posterior representing what the LLM has discovered so far. Our method, CovQValue, feeds the coverage map back to the LLM, generates diverse candidate plans in parallel, and selects the most informative plan by LLM-estimated Q-values, seeking actions that balance immediate branch discovery with future reachability. Our method outperforms greedy selection on TestGenEval Lite, achieving 51-77% higher branch coverage across three popular LLMs and winning on 77-84% of targets. In addition, we build RepoExploreBench, a benchmark for iterative test generation, on which our method achieves 40-74%. These results show the potential of curiosity-driven planning for LLM-based exploration, enabling more effective discovery of program behavior through sequential interaction.
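The selection loop the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `DEPS` is a hypothetical branch-dependency graph (deep branches need setup branches covered first), and `propose_plans` and `q_value` are deterministic stand-ins for the two LLM calls (plan generation and Q-value estimation). The Q-value mixes immediate coverage gain with a curiosity bonus for future reachability, which is the key departure from a purely greedy score.

```python
# Hypothetical branch-dependency graph: a branch can only be covered once
# its "setup" parents are covered (deep branches need preparatory steps).
DEPS = {"a": [], "b": ["a"], "c": ["b"], "d": []}

def reachable(branch, covered):
    """A branch is reachable once all of its setup branches are covered."""
    return all(p in covered for p in DEPS[branch])

def propose_plans(covered, n=3):
    """LLM stand-in: propose up to n candidate plans, one per uncovered branch."""
    uncovered = [b for b in DEPS if b not in covered]
    return [{"target": b} for b in uncovered[:n]]

def q_value(plan, covered):
    """LLM stand-in for Q-value estimation: immediate coverage gain if the
    target is reachable now, plus a curiosity bonus for each uncovered
    branch this target would help unlock (future reachability)."""
    t = plan["target"]
    immediate = 1.0 if reachable(t, covered) else 0.0
    future = sum(1 for b, parents in DEPS.items()
                 if t in parents and b not in covered)
    return immediate + 0.5 * future

def run(max_steps=10):
    """Iteratively pick the highest-Q plan, execute it, update coverage."""
    covered = set()
    for _ in range(max_steps):
        plans = propose_plans(covered)
        if not plans:
            break  # full coverage reached
        best = max(plans, key=lambda p: q_value(p, covered))
        if reachable(best["target"], covered):
            covered.add(best["target"])  # "run the test", update coverage map
    return covered
```

On this toy graph the curiosity bonus makes the loop prefer branch `a` (which unlocks `b`, then `c`) over the dead-end `d`, so the chain is covered in order; a score with no future-reachability term would rank `a` and `d` equally.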