Scalable High-Recall Constraint-Satisfaction-Based Information Retrieval for Clinical Trials Matching

arXiv cs.AI / 4/13/2026


Key Points

  • The paper introduces SatIR, a scalable clinical trial retrieval system that uses constraint-satisfaction (SMT and relational algebra) to match patient records to eligibility criteria with higher precision and interpretability than common keyword/embedding approaches.
  • It augments formal matching by using LLMs to translate informal or ambiguous clinical reasoning—such as implicit assumptions and incomplete patient information—into explicit, controllable constraints.
  • Across evaluations on 59 patients and 3,621 trials, SatIR outperforms TrialGPT on retrieval quality, retrieving 32%–72% more relevant-and-eligible trials per patient and improving recall over the union of useful trials by 22–38 points.
  • The method is reported to be fast and scalable, requiring about 2.95 seconds per patient to search all 3,621 trials, while also serving more patients with at least one useful trial.
  • The approach is positioned as both effective and explainable, leveraging medical ontologies and conceptual models while providing interpretable constraint-based justifications for matches.

Abstract

Clinical trials are central to evidence-based medicine, yet many struggle to meet enrollment targets, despite the availability of over half a million trials listed on ClinicalTrials.gov, which attracts approximately two million users monthly. Existing retrieval techniques, largely based on keyword and embedding-similarity matching between patient profiles and eligibility criteria, often struggle with low recall, low precision, and limited interpretability due to complex constraints. We propose SatIR, a scalable clinical trial retrieval method based on constraint satisfaction, enabling high-precision and interpretable matching of patients to relevant trials. Our approach uses formal methods -- Satisfiability Modulo Theories (SMT) and relational algebra -- to efficiently represent and match key constraints from clinical trials and patient records. Beyond leveraging established medical ontologies and conceptual models, we use Large Language Models (LLMs) to convert informal reasoning regarding ambiguity, implicit clinical assumptions, and incomplete patient records into explicit, precise, controllable, and interpretable formal constraints. Evaluated on 59 patients and 3,621 trials, SatIR outperforms TrialGPT on all three evaluated retrieval objectives. It retrieves 32%-72% more relevant-and-eligible trials per patient, improves recall over the union of useful trials by 22-38 points, and serves more patients with at least one useful trial. Retrieval is fast, requiring 2.95 seconds per patient over 3,621 trials. These results show that SatIR is scalable, effective, and interpretable.
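The abstract also mentions relational algebra alongside SMT. A plausible role for it, sketched below under assumed schema and values (not the paper's actual representation), is a relational selection that cheaply prefilters the trial table before any per-trial constraint solving, which would help explain the reported speed over thousands of trials.

```python
# Sketch: relational-algebra-style selection (sigma) over a trial relation.
# Schema, NCT IDs, and values are hypothetical examples.

trials = [
    {"nct_id": "NCT001", "condition": "type 2 diabetes", "min_age": 18, "max_age": 75},
    {"nct_id": "NCT002", "condition": "hypertension",    "min_age": 40, "max_age": 90},
    {"nct_id": "NCT003", "condition": "type 2 diabetes", "min_age": 50, "max_age": 80},
]

def select(relation, predicate):
    """Relational selection: keep tuples satisfying the predicate."""
    return [row for row in relation if predicate(row)]

patient = {"age": 62, "condition": "type 2 diabetes"}

candidates = select(
    trials,
    lambda t: t["condition"] == patient["condition"]
    and t["min_age"] <= patient["age"] <= t["max_age"],
)
print([t["nct_id"] for t in candidates])  # -> ['NCT001', 'NCT003']
```

Only the surviving candidates would then need the more expensive SMT-based check, so the two formalisms complement each other: coarse set-level filtering first, precise per-trial satisfiability second.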