HealthNLP_Retrievers at ArchEHR-QA 2026: Cascaded LLM Pipeline for Grounded Clinical Question Answering

arXiv cs.CL / 4/30/2026

Key Points

  • The ArchEHR-QA 2026 shared task tackles grounded clinical question answering over electronic health records (EHRs), answering patient-authored questions to help patients make sense of complex clinical notes.
  • HealthNLP_Retrievers’ solution uses a multi-stage cascaded pipeline powered by Gemini 2.5 Pro to reformulate questions, retrieve evidence from long clinical text, and generate answers grounded only in that evidence.
  • The system includes four integrated modules: few-shot query reformulation, heuristic evidence scoring, a grounded response generator, and a high-precision alignment framework that links generated answers to the supporting clinical sentences (see the pipeline sketch after this list).
  • Results varied across the four competition tracks: the team ranked 1st in question interpretation, 5th in answer generation, 7th in evidence identification, and 9th in answer-evidence alignment.
  • The authors conclude that structuring LLMs into a pipeline improves grounding, precision, and the professional quality of patient-facing health communication, and they provide public source code for reproducibility.
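
The key points name four stages but not how they fit together. Below is a minimal, illustrative Python sketch of one way such a cascade could be orchestrated; the function names (`run_cascade`, `heuristic_score`), the token-overlap heuristic, and the top-k cutoff are assumptions for illustration, and `llm_complete` stands in for a call to an LLM API such as Gemini 2.5 Pro, not the team's actual interface.

```python
from dataclasses import dataclass


def heuristic_score(query: str, sentence: str) -> float:
    """Toy recall-oriented relevance score: the fraction of query tokens
    that also appear in the candidate clinical sentence. (Illustrative
    stand-in; the paper summary does not specify the actual heuristic.)"""
    q_tokens = set(query.lower().split())
    s_tokens = set(sentence.lower().split())
    return len(q_tokens & s_tokens) / max(len(q_tokens), 1)


@dataclass
class PipelineOutput:
    focused_query: str   # stage 1 output
    evidence: list[str]  # stage 2 output
    answer: str          # stage 3 output


def run_cascade(question: str, note_sentences: list[str],
                llm_complete, top_k: int = 20) -> PipelineOutput:
    """Hypothetical orchestration of the cascade; `llm_complete` is any
    callable that maps a prompt string to a completion string."""
    # Stage 1: few-shot query reformulation (few-shot examples omitted)
    # condenses a verbose patient question into a focused clinical query.
    focused_query = llm_complete(
        "Rewrite this patient question as a concise clinical query:\n" + question
    )
    # Stage 2: heuristic evidence scoring ranks note sentences and keeps a
    # generous top-k, trading precision for recall at this stage.
    ranked = sorted(note_sentences,
                    key=lambda s: heuristic_score(focused_query, s),
                    reverse=True)
    evidence = ranked[:top_k]
    # Stage 3: grounded generation, restricted to the retrieved evidence.
    answer = llm_complete(
        "Answer the question using ONLY the evidence sentences below.\n"
        "Question: " + focused_query + "\nEvidence:\n" + "\n".join(evidence)
    )
    # Stage 4 (answer-evidence alignment) is sketched separately after the
    # abstract below.
    return PipelineOutput(focused_query, evidence, answer)
```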

Abstract

Patient portals now give individuals direct access to their electronic health records (EHRs), yet access alone does not ensure that patients understand or act on the complex clinical information these records contain. The ArchEHR-QA 2026 shared task addresses this challenge by focusing on grounded question answering over EHRs, and this paper presents the system developed by the HealthNLP_Retrievers team for the task. The proposed approach uses a multi-stage cascaded pipeline powered by the Gemini 2.5 Pro large language model to interpret patient-authored questions and retrieve relevant evidence from lengthy clinical notes. Our architecture comprises four integrated modules: (1) a few-shot query reformulation unit that summarizes verbose patient queries; (2) a heuristic-based evidence scorer that ranks clinical sentences to prioritize recall; (3) a grounded response generator that synthesizes professional-caliber answers strictly restricted to the identified evidence; and (4) a high-precision many-to-many alignment framework that links generated answers to their supporting clinical sentences. This cascaded approach achieved competitive results: across the individual tracks, the system ranked 1st in question interpretation, 5th in answer generation, 7th in evidence identification, and 9th in answer-evidence alignment. These results show that integrating large language models within a structured multi-stage pipeline improves grounding, precision, and the professional quality of patient-oriented health communication. To support reproducibility, our source code is publicly available in our GitHub repository.
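
The abstract calls the fourth stage a high-precision many-to-many alignment between answer sentences and clinical sentences, but does not detail the method. A minimal sketch follows, assuming a simple lexical-overlap criterion: the Jaccard similarity measure, the 0.3 threshold, and the naive regex sentence splitter are all illustrative assumptions, not the team's published technique.

```python
import re


def split_sentences(text: str) -> list[str]:
    # Naive splitter on sentence-final punctuation; a real clinical system
    # would need a tokenizer robust to abbreviations like "pt." or "hx.".
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def jaccard(a: str, b: str) -> float:
    # Token-level Jaccard similarity between two sentences.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


def align_many_to_many(answer: str, evidence: list[str],
                       threshold: float = 0.3) -> dict[int, list[int]]:
    """Link each answer sentence to every evidence sentence whose overlap
    clears a precision-oriented threshold. One answer sentence may cite
    several evidence sentences and one evidence sentence may support
    several answer sentences, hence many-to-many."""
    alignment: dict[int, list[int]] = {}
    for i, a_sent in enumerate(split_sentences(answer)):
        matches = [j for j, e_sent in enumerate(evidence)
                   if jaccard(a_sent, e_sent) >= threshold]
        if matches:
            alignment[i] = matches
    return alignment
```

In a sketch like this, raising the threshold trades alignment recall for the high precision the abstract emphasizes.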