AI Navigate

Hypothesis-Conditioned Query Rewriting for Decision-Useful Retrieval

arXiv cs.CL / 3/20/2026


Key Points

  • HCQR is a training-free pre-retrieval framework that reorients Retrieval-Augmented Generation (RAG) from topic-oriented retrieval to evidence-oriented retrieval by deriving a lightweight working hypothesis from the input question and candidate options.
  • It rewrites retrieval into three targeted queries to seek evidence that (1) supports the hypothesis, (2) distinguishes it from competing alternatives, and (3) verifies salient clues in the question.
  • Experiments on MedQA and MMLU-Med show HCQR consistently outperforms single-query RAG and re-rank/filter baselines, improving average accuracy by 5.9 and 3.6 points, respectively.
  • Code is available at https://anonymous.4open.science/r/HCQR-1C2E.

Abstract

Retrieval-Augmented Generation (RAG) improves Large Language Models (LLMs) by grounding generation in external, non-parametric knowledge. However, when a task requires choosing among competing options, grounding generation in broadly relevant context is often insufficient to drive the final decision. Existing RAG methods typically rely on a single initial query, which tends to favor topical relevance over decision-relevant evidence and therefore retrieves background information that fails to discriminate among answer options. To address this issue, we propose Hypothesis-Conditioned Query Rewriting (HCQR), a training-free pre-retrieval framework that reorients RAG from topic-oriented retrieval to evidence-oriented retrieval. HCQR first derives a lightweight working hypothesis from the input question and candidate options, and then rewrites retrieval into three targeted queries that seek evidence to (1) support the hypothesis, (2) distinguish it from competing alternatives, and (3) verify salient clues in the question. This approach yields retrieved context that is more directly aligned with answer selection, allowing the generator to confirm or overturn the initial hypothesis based on the retrieved evidence. Experiments on MedQA and MMLU-Med show that HCQR consistently outperforms single-query RAG and re-rank/filter baselines, improving average accuracy over Simple RAG by 5.9 and 3.6 points, respectively. Code is available at https://anonymous.4open.science/r/HCQR-1C2E.
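
To make the three-query rewriting concrete, here is a minimal sketch (not the authors' released code) of how the pre-retrieval step could look. The function name `hcqr_queries`, its parameters, and the query templates are all illustrative assumptions; in the actual framework, the working hypothesis and the salient clues would be produced by an LLM, whereas here they are passed in directly.

```python
# Hypothetical sketch of HCQR's pre-retrieval query rewriting (illustrative only).
# Given a question, candidate options, a working hypothesis, and salient clues,
# it builds the three evidence-oriented queries described in the paper:
# (1) support the hypothesis, (2) distinguish it from the competing options,
# (3) verify salient clues in the question.

def hcqr_queries(question, options, hypothesis, clues):
    """Return three targeted retrieval queries conditioned on the hypothesis."""
    # Competing alternatives are every candidate option except the hypothesis.
    alternatives = [opt for opt in options if opt != hypothesis]
    return [
        # (1) evidence that supports the working hypothesis
        f"Evidence that '{hypothesis}' is the answer to: {question}",
        # (2) evidence that distinguishes the hypothesis from the alternatives
        f"How '{hypothesis}' differs from {', '.join(repr(a) for a in alternatives)} "
        f"with respect to: {question}",
        # (3) evidence that verifies salient clues extracted from the question
        f"Verify: {'; '.join(clues)}",
    ]


if __name__ == "__main__":
    queries = hcqr_queries(
        question="Which finding best explains the patient's symptoms?",
        options=["Option A", "Option B", "Option C"],
        hypothesis="Option A",
        clues=["acute onset", "elevated marker"],
    )
    for q in queries:
        print(q)
```

Each of the three query strings would then be sent to the retriever independently, and the pooled passages handed to the generator, which can confirm or overturn the initial hypothesis in light of the evidence.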