A Reproducibility Study of LLM-Based Query Reformulation

arXiv cs.CL / 5/1/2026

Key Points

  • The study systematically evaluates ten LLM-based query reformulation methods under a single, tightly controlled experimental setup to identify which reported gains are truly reproducible (a minimal sketch of the reformulate-then-retrieve loop follows this list).
  • Results show that reformulation effectiveness depends heavily on the retrieval paradigm, with improvements under lexical retrieval not reliably carrying over to neural retrievers.
  • The researchers find that using larger LLMs does not consistently lead to better downstream retrieval performance across settings.
  • Experiments span two LLM families at two parameter scales, three retrieval paradigms (lexical, learned sparse, and dense), and nine benchmarks across TREC Deep Learning and BEIR.
  • To support transparency and ongoing comparison, the authors release prompts, configurations, evaluation scripts, and runs via QueryGym along with a public leaderboard.

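For context on what these methods do, here is a minimal, hypothetical sketch of LLM-based query expansion feeding a lexical (BM25) retriever. The `llm` stub, the prompt wording, and the toy corpus are illustrative assumptions made for this summary; the paper's actual prompts and pipelines are those released through QueryGym.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (swap in any provider here).
    # Returns a canned pseudo-passage so the sketch runs end to end.
    return ("Lexical retrievers such as BM25 score documents by term "
            "frequency, inverse document frequency, and document length.")

def expand_query(query: str) -> str:
    # Query2Doc-style expansion: ask the LLM for a pseudo-answer passage
    # and append it to the original query (illustrative prompt only).
    pseudo_doc = llm(f"Write a short passage answering: {query}")
    return f"{query} {pseudo_doc}"

corpus = [
    "BM25 is a lexical ranking function based on term statistics.",
    "Dense retrievers encode queries and documents as vectors.",
    "Learned sparse models assign weights to expanded terms.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

expanded = expand_query("how does lexical retrieval score documents")
scores = bm25.get_scores(expanded.lower().split())
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```

A dense or learned-sparse retriever would replace the BM25 scoring step with vector or weighted-term matching, which is exactly the axis along which the study finds gains fail to transfer.
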
Abstract

Large Language Models (LLMs) are now widely used for query reformulation and expansion in Information Retrieval, with many studies reporting substantial effectiveness gains. However, these results are typically obtained under heterogeneous experimental conditions, making it difficult to assess which findings are reproducible and which depend on specific implementation choices. In this work, we present a systematic reproducibility and comparative study of ten representative LLM-based query reformulation methods under a unified and strictly controlled experimental framework. We evaluate methods across two architectural LLM families at two parameter scales, three retrieval paradigms (lexical, learned sparse, and dense), and nine benchmark datasets spanning TREC Deep Learning and BEIR. Our results show that reformulation gains are strongly conditioned on the retrieval paradigm, that improvements observed under lexical retrieval do not consistently transfer to neural retrievers, and that larger LLMs do not uniformly yield better downstream performance. These findings clarify the stability and limits of reported gains in prior work. To enable transparent replication and ongoing comparison, we release all prompts, configurations, evaluation scripts, and run files through QueryGym, an open-source reformulation toolkit with a public leaderboard (https://leaderboard.querygym.com).
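As a rough illustration of how such run files are typically scored on TREC Deep Learning and BEIR, the snippet below computes nDCG@10 with pytrec_eval; the query/document IDs, judgments, and scores are invented for the example and are not the paper's artifacts.

```python
import pytrec_eval  # pip install pytrec_eval

# Toy relevance judgments (qrels): query id -> {doc id: graded relevance}.
qrels = {"q1": {"d1": 2, "d2": 0, "d3": 1}}

# Toy run from a reformulation + retrieval pipeline:
# query id -> {doc id: retrieval score}.
run = {"q1": {"d1": 11.3, "d2": 9.8, "d3": 4.2}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut"})
results = evaluator.evaluate(run)
print(results["q1"]["ndcg_cut_10"])  # per-query nDCG@10
```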