PubMed Reasoner: Dynamic Reasoning-based Retrieval for Evidence-Grounded Biomedical Question Answering

arXiv cs.CL / 3/31/2026


Key Points

  • PubMed Reasoner is introduced as an evidence-grounded biomedical QA agent that improves answer trustworthiness by iteratively refining queries and citing verifiable sources.
  • The system uses three stages: a self-critic query refinement step that evaluates and improves MeSH-term coverage via partial (metadata) retrieval, a reflective retrieval loop that gathers articles in batches, and an evidence-grounded response generator with explicit citations.
  • Experiments with a GPT-4o backbone report 78.32% accuracy on PubMedQA (slightly above human experts) and consistent improvements on MMLU Clinical Knowledge.
  • LLM-as-judge evaluations favor PubMed Reasoner outputs for reasoning soundness, evidence grounding, clinical relevance, and overall trustworthiness, while the authors note that compute and token costs are kept under control.
  • The proposed approach aims to address limitations of prior retrieval-augmented and self-reflection methods by refining queries mid-stream and only switching to full answer generation once sufficient evidence is collected.

Abstract

Trustworthy biomedical question answering (QA) systems must not only provide accurate answers but also justify them with current, verifiable evidence. Retrieval-augmented approaches partially address this gap but lack mechanisms to iteratively refine poor queries, whereas self-reflection methods intervene only after retrieval is fully completed. In this context, we introduce PubMed Reasoner, a biomedical QA agent composed of three stages: self-critic query refinement evaluates MeSH terms for coverage, alignment, and redundancy to enhance PubMed queries based on partial (metadata) retrieval; reflective retrieval processes articles in batches until sufficient evidence is gathered; and evidence-grounded response generation produces answers with explicit citations. PubMed Reasoner with a GPT-4o backbone achieves 78.32% accuracy on PubMedQA, slightly surpassing human experts, and shows consistent gains on MMLU Clinical Knowledge. Moreover, LLM-as-judge evaluations prefer our responses on reasoning soundness, evidence grounding, clinical relevance, and trustworthiness. By orchestrating retrieval-first reasoning over authoritative sources, our approach provides practical assistance to clinicians and biomedical researchers while controlling compute and token costs.
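The three-stage loop described in the abstract can be sketched in Python. This is a minimal illustration, not the authors' implementation: the retrieval, critic, and generation functions below (`search_pubmed`, `critic_refine`, `answer_with_citations`, etc.) are hypothetical stubs standing in for PubMed E-utilities calls and an LLM backbone.

```python
from dataclasses import dataclass

@dataclass
class Article:
    pmid: str
    title: str
    abstract: str

# --- Hypothetical stubs for PubMed search and an LLM backbone ---

def search_pubmed(query: str, retmax: int = 20) -> list[str]:
    """Stub: return PMIDs matching a query (a real system would call ESearch)."""
    corpus = {
        "aspirin AND prevention": ["101", "102", "103"],
        "aspirin AND prevention AND cardiovascular": ["102", "103", "104", "105"],
    }
    return corpus.get(query, [])[:retmax]

def fetch_articles(pmids: list[str]) -> list[Article]:
    """Stub: fetch full records for a batch of PMIDs (EFetch in a real system)."""
    return [Article(p, f"Title {p}", f"Abstract for {p}") for p in pmids]

def critic_refine(query: str, metadata_hits: list[str]) -> str:
    """Stub self-critic: expand the query when metadata coverage looks thin.
    The real system critiques MeSH-term coverage, alignment, and redundancy."""
    if len(metadata_hits) < 4 and "cardiovascular" not in query:
        return query + " AND cardiovascular"
    return query

def evidence_sufficient(evidence: list[Article], target: int = 4) -> bool:
    """Stub reflection step: in the paper an LLM judges evidence sufficiency."""
    return len(evidence) >= target

def answer_with_citations(question: str, evidence: list[Article]) -> str:
    """Stub generator: produce an answer with explicit PMID citations."""
    cites = ", ".join(a.pmid for a in evidence)
    return f"Answer to '{question}' [PMIDs: {cites}]"

def pubmed_reasoner(question: str, seed_query: str,
                    batch_size: int = 2, max_rounds: int = 3) -> str:
    # Stage 1: self-critic query refinement over partial (metadata) retrieval.
    query = seed_query
    for _ in range(max_rounds):
        hits = search_pubmed(query)
        refined = critic_refine(query, hits)
        if refined == query:
            break
        query = refined
    # Stage 2: reflective retrieval, gathering articles in batches until
    # the collected evidence is judged sufficient.
    pmids = search_pubmed(query)
    evidence: list[Article] = []
    for i in range(0, len(pmids), batch_size):
        evidence.extend(fetch_articles(pmids[i:i + batch_size]))
        if evidence_sufficient(evidence):
            break
    # Stage 3: evidence-grounded response generation with explicit citations.
    return answer_with_citations(question, evidence)
```

Note the control-flow point the paper emphasizes: refinement happens mid-stream on cheap metadata before any full retrieval, and generation starts only once the reflection check passes, which is how the approach bounds retrieval and token cost.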