Agentic clinical reasoning over longitudinal myeloma records: a retrospective evaluation against expert consensus

arXiv cs.AI / 28 April 2026


Key Points

  • The study evaluates whether an agentic LLM system can perform longitudinal clinical reasoning for multiple myeloma treatment decisions over large, heterogeneous patient records, comparing it against single-pass retrieval-augmented generation (RAG), iterative RAG, and full-context input.
  • On 469 patient-question pairs spanning 48 templates and three complexity levels (reference labels from double annotation by four oncologists with senior haematologist adjudication), the agentic system reached 79.6% concordance, outperforming all baselines, while iterative RAG and full-context input plateaued at 75.4% and 75.8%, respectively.
  • Improvements were larger for harder, criteria-based synthesis questions and for longer patient record histories, with the best gains observed for the longest records (top decile).
  • Although the overall system error rate (12.2%) was comparable to the expert disagreement rate (13.6%), a far larger share of system errors was clinically significant (57.8% vs 18.8%), underscoring the need for prospective evaluation in routine care.
  • External validation included MIMIC-IV, but the authors emphasize that prospective studies are required to confirm patient benefit before clinical deployment.
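The paper does not describe its implementation, but the three retrieval strategies compared above differ in a way that can be sketched at a high level. In the toy sketch below, `retrieve` and `answer` are hypothetical stand-ins (a keyword-overlap ranker and a string-concatenating "LLM"); the point is only the control flow: single-pass RAG retrieves once, iterative RAG retrieves on a fixed schedule with query refinement, and an agentic loop decides for itself when the gathered evidence suffices.

```python
def retrieve(query, store, k=2):
    """Toy retriever: rank document snippets by query-word overlap."""
    words = query.lower().split()
    scored = sorted(store, key=lambda d: -sum(w in d for w in words))
    return scored[:k]

def answer(question, context):
    """Stand-in for an LLM call; here it just concatenates evidence."""
    return " | ".join(context)

def single_pass_rag(q, store):
    # One retrieval, one generation.
    return answer(q, retrieve(q, store))

def iterative_rag(q, store, rounds=3):
    # Fixed number of retrieval rounds; each round refines the query
    # with the evidence gathered so far.
    context = []
    query = q
    for _ in range(rounds):
        context += [d for d in retrieve(query, store) if d not in context]
        query = q + " " + answer(q, context)
    return answer(q, context)

def agentic(q, store, budget=5):
    # Agent decides when to stop gathering evidence rather than
    # following a fixed retrieval schedule.
    context = []
    for _ in range(budget):
        remaining = [d for d in store if d not in context]
        hits = retrieve(q, remaining, k=1)
        # Stopping decision: halt when the best remaining document
        # matches no query term.
        if not hits or not any(w in hits[0] for w in q.lower().split()):
            break
        context += hits
    return answer(q, context)
```

On long records the agentic variant's stopping rule is what lets it keep searching past a fixed retrieval budget when evidence is still missing, which is consistent with the gains the study reports on the longest histories.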

Abstract

Multiple myeloma is managed through sequential lines of therapy over years to decades, with each decision depending on cumulative disease history distributed across dozens to hundreds of heterogeneous clinical documents. Whether LLM-based systems can synthesise this evidence at a level approaching expert agreement has not been established. A retrospective evaluation was conducted on longitudinal clinical records of 811 myeloma patients treated at a tertiary centre (2001-2026), covering 44,962 documents and 1,334,677 laboratory values, with external validation on MIMIC-IV. An agentic reasoning system was compared against single-pass retrieval-augmented generation (RAG), iterative RAG, and full-context input on 469 patient-question pairs from 48 templates at three complexity levels. Reference labels came from double annotation by four oncologists with senior haematologist adjudication. Iterative RAG and full-context input converged on a shared ceiling (75.4% vs 75.8%, p = 1.00). The agentic system reached 79.6% concordance (95% CI 76.4-82.8), exceeding both baselines (+3.8 and +4.2 pp; p = 0.006 and 0.007). Gains rose with question complexity, reaching +9.4 pp on criteria-based synthesis (p = 0.032), and with record length, reaching +13.5 pp in the top decile (n = 10). The system error rate (12.2%) was comparable to expert disagreement (13.6%), but severity was inverted: 57.8% of system errors were clinically significant versus 18.8% of expert disagreements. Agentic reasoning was the only approach to exceed the shared ceiling, with gains concentrated on the most complex questions and longest records. The greater clinical consequence of residual system errors indicates that prospective evaluation in routine care is required before these findings translate into patient benefit.
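The abstract does not state how the 95% CI on concordance (76.4-82.8) was computed. As an illustration only, a percentile-bootstrap interval over the 469 binary concordance outcomes can be sketched as follows; 373/469 ≈ 79.5% is an illustrative rounding of the reported 79.6% and is not taken from the paper.

```python
import random

def bootstrap_ci(successes, n, iters=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for a concordance proportion.

    Resamples n binary outcomes with replacement and reads the
    alpha/2 and 1 - alpha/2 percentiles of the resampled means.
    """
    rng = random.Random(seed)
    outcomes = [1] * successes + [0] * (n - successes)
    stats = []
    for _ in range(iters):
        sample = [rng.choice(outcomes) for _ in range(n)]
        stats.append(sum(sample) / n)
    stats.sort()
    lo = stats[int((alpha / 2) * iters)]
    hi = stats[int((1 - alpha / 2) * iters) - 1]
    return lo, hi

# Illustrative counts, not the paper's raw data.
lo, hi = bootstrap_ci(successes=373, n=469)
print(f"concordance ≈ {373 / 469:.1%}, 95% CI ≈ [{lo:.1%}, {hi:.1%}]")
```

The resulting interval will be close to but not identical to the published one, since the paper's exact method (bootstrap, Wilson, or another estimator) and raw counts are not given.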