When to Retrieve During Reasoning: Adaptive Retrieval for Large Reasoning Models

arXiv cs.AI / 4/30/2026

Key Points

  • The paper argues that existing RAG pipelines are misaligned with large reasoning models because they typically retrieve only before reasoning, while these models need evidence injected during multi-step inference.
  • It introduces ReaLM-Retrieve, which combines a step-level uncertainty detector, a learned retrieval intervention policy, and an efficiency-optimized integration mechanism to decide when to retrieve during reasoning and how to inject the evidence cheaply (see the sketch after this list).
  • Experiments on MuSiQue, HotpotQA, and 2WikiMultiHopQA show an average +10.1-point absolute improvement in answer F1 over standard RAG, alongside a 47% reduction in retrieval calls versus fixed-interval methods.
  • On MuSiQue (2–4 hop reasoning), ReaLM-Retrieve reaches 71.2% F1 with only 1.8 retrieval calls per question on average, and it also boosts retrieval quality (81.3% Recall@5) with better precision and MRR than fixed-interval strategies.
  • The authors claim this establishes a new state-of-the-art efficiency-accuracy trade-off for reasoning-intensive retrieval tasks.
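
To make the retrieve-during-reasoning idea concrete, here is a minimal Python sketch of such an adaptive loop. The paper's actual detector, policy, and integration mechanism are not spelled out in this summary, so every interface below (`Reasoner`, `Retriever`, `step_confidence`, the threshold `tau`) is an illustrative assumption, not ReaLM-Retrieve's API.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class ReasoningState:
    """Running state of one question: the chain so far plus injected evidence."""
    question: str
    steps: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

class Reasoner(Protocol):
    # Hypothetical interface to a large reasoning model.
    def next_step(self, state: ReasoningState) -> str: ...
    def step_confidence(self, state: ReasoningState, step: str) -> float: ...
    def is_final(self, step: str) -> bool: ...
    def extract_answer(self, state: ReasoningState) -> str: ...

class Retriever(Protocol):
    # Hypothetical interface to a passage retriever.
    def search(self, query: str, k: int = 5) -> list[str]: ...

def answer_adaptively(model: Reasoner, retriever: Retriever, question: str,
                      tau: float = 0.35, max_steps: int = 12) -> str:
    """Interleave retrieval with reasoning: fetch evidence only when the
    step-level detector flags a knowledge gap mid-chain."""
    state = ReasoningState(question)
    for _ in range(max_steps):
        step = model.next_step(state)               # draft one reasoning step
        if model.step_confidence(state, step) < tau:
            # Knowledge gap detected: retrieve for *this* step, inject the
            # evidence into the ongoing chain, and re-draft the step.
            state.evidence.extend(retriever.search(step))
            step = model.next_step(state)
        state.steps.append(step)
        if model.is_final(step):                    # chain has reached an answer
            break
    return model.extract_answer(state)
```

The point of the sketch is the control flow: unlike standard RAG, retrieval is conditioned on per-step uncertainty rather than performed once up front or at fixed intervals.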

Abstract

Large reasoning models such as DeepSeek-R1 and OpenAI o1 generate extended chains of thought spanning thousands of tokens, yet their integration with retrieval-augmented generation (RAG) remains fundamentally misaligned. Current RAG systems optimize for providing context before reasoning begins, while reasoning models require evidence injection during multi-step inference chains. We introduce ReaLM-Retrieve, a reasoning-aware retrieval framework that addresses this mismatch through three key innovations: (1) a step-level uncertainty detector that identifies knowledge gaps at reasoning-step granularity rather than token or sentence level; (2) a retrieval intervention policy that learns when external evidence maximally benefits ongoing reasoning; and (3) an efficiency-optimized integration mechanism that reduces per-retrieval overhead by 3.2x compared to naive integration. Experiments on MuSiQue, HotpotQA, and 2WikiMultiHopQA demonstrate that ReaLM-Retrieve achieves on average 10.1% absolute improvement in answer F1 over standard RAG (range: 9.0-11.8% across the three benchmarks) while reducing retrieval calls by 47% compared to fixed-interval approaches like IRCoT (all improvements significant at p<0.01, paired bootstrap). On the challenging MuSiQue benchmark requiring 2-4 hop reasoning, our method achieves 71.2% F1 with an average of only 1.8 retrieval calls per question. Analysis shows that ReaLM-Retrieve also improves retrieval quality itself, achieving 81.3% Recall@5 with consistently higher precision and MRR than fixed-interval baselines on supporting evidence, establishing new state-of-the-art efficiency-accuracy trade-offs for reasoning-intensive retrieval tasks.
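
The abstract frames the intervention policy as learning when external evidence maximally benefits the ongoing chain. One hedged sketch of what such a policy could reduce to at inference time is a lightweight scorer over step-level signals; the features, weights, and logistic form below are our assumptions for illustration, not the paper's model.

```python
import math

def should_retrieve(step_entropy: float, novelty: float, hops_remaining: int,
                    weights: tuple[float, float, float, float] = (-1.2, 2.4, 1.1, 0.6),
                    threshold: float = 0.5) -> bool:
    """Hypothetical learned intervention policy: map step-level signals
    to a retrieve/continue decision via a logistic score.

    step_entropy   -- mean token entropy of the drafted step (model doubt)
    novelty        -- 1 - max similarity to evidence already in context,
                      so a call is not spent re-fetching known facts
    hops_remaining -- rough estimate of hops left in a multi-hop question
    """
    b, w_e, w_n, w_h = weights
    z = b + w_e * step_entropy + w_n * novelty + w_h * hops_remaining
    return 1.0 / (1.0 + math.exp(-z)) > threshold  # retrieve if score > threshold
```

Keeping the decision this cheap is what would let a policy of this kind cut retrieval calls (the reported 47% reduction versus fixed-interval methods) without stalling the reasoning chain.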