Lightweight Retrieval-Augmented Generation and Large Language Model-Based Modeling for Scalable Patient-Trial Matching

arXiv cs.AI / 4/27/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The paper addresses patient-trial matching over long, heterogeneous EHR data and complex eligibility criteria, focusing on improving scalability, generalization, and computational efficiency.
  • It introduces a lightweight pipeline that splits the task into retrieval-augmented generation for selecting clinically relevant EHR segments and LLM-based modeling to encode those segments into representations.
  • The method further improves efficiency by refining representations with dimensionality reduction and using lightweight predictors for downstream classification.
  • Experiments on multiple public benchmarks and a real-world Mayo Clinic multimodal dataset show that retrieval-based information selection reduces compute burden while preserving clinically meaningful signals.
  • The authors find that frozen LLMs work well for structured clinical data representations, while fine-tuning is needed to model unstructured clinical narratives, and overall performance matches end-to-end LLM approaches at much lower cost.
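The pipeline described above can be sketched end to end. Everything below is an illustrative stand-in, not the paper's method: a bag-of-words embedder plays the role of both the retriever and the (frozen) LLM encoder, the segment texts and PCA dimensionality are invented for the demo, and the real system would train a lightweight classifier on the reduced representations.

```python
import numpy as np

def build_vocab(texts):
    """Assign each token a fixed index (deterministic toy vocabulary)."""
    vocab = {}
    for t in texts:
        for tok in t.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def embed(text, vocab):
    """Stand-in for an LLM encoder: L2-normalized bag-of-words vector."""
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(criterion, segments, vocab, k=1):
    """Step 1: keep only the k EHR segments most relevant to the criterion."""
    q = embed(criterion, vocab)
    sims = np.array([q @ embed(s, vocab) for s in segments])
    top = np.argsort(sims)[::-1][:k]
    return [segments[i] for i in top]

def reduce_dims(X, n_components=2):
    """Step 3: refine representations via PCA (SVD) for a lightweight predictor."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# --- tiny demo with invented EHR segments and one eligibility criterion ---
ehr_segments = [
    "patient has a history of type 2 diabetes mellitus managed with metformin",
    "routine immunization record updated at annual visit",
    "blood pressure within normal range no acute complaints",
]
criterion = "diagnosis of type 2 diabetes mellitus"
vocab = build_vocab(ehr_segments + [criterion])

# Step 1: retrieval selects the clinically relevant segment, shrinking the input.
picked = retrieve(criterion, ehr_segments, vocab, k=1)

# Steps 2-3: encode each segment, then compress; a lightweight classifier
# (e.g., logistic regression) would be trained on Z downstream.
X = np.stack([embed(s, vocab) for s in ehr_segments])
Z = reduce_dims(X, n_components=2)
```

The point of the separation mirrors the paper's finding: retrieval bounds the input the expensive encoder ever sees, and the cheap predictor operates on small, refined vectors rather than on raw long documents.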

Abstract

Patient-trial matching requires reasoning over long, heterogeneous electronic health records (EHRs) and complex eligibility criteria, posing significant challenges for scalability, generalization, and computational efficiency. Existing approaches either rely on full-document processing with large language models (LLMs), which is computationally expensive, or use traditional machine learning methods that struggle to capture unstructured clinical narratives. In this work, we propose a lightweight framework that combines retrieval-augmented generation and large language model-based modeling for scalable patient-trial matching. The framework explicitly separates two key components: retrieval-augmented generation is used to identify clinically relevant segments from long EHRs, reducing input complexity, while large language models are used to encode these selected segments into informative representations. These representations are further refined through dimensionality reduction and modeled using lightweight predictors, enabling efficient and scalable downstream classification. We evaluate the proposed approach on multiple public benchmarks (n2c2, SIGIR, TREC 2021/2022) and a real-world multimodal dataset from Mayo Clinic (MCPMD). Results show that retrieval-based information selection significantly reduces computational burden while preserving clinically meaningful signals. We further demonstrate that frozen LLMs provide strong representations for structured clinical data, whereas fine-tuning is essential for modeling unstructured clinical narratives. Importantly, the proposed lightweight pipeline achieves performance comparable to end-to-end LLM approaches with substantially lower computational cost.