Detecting Clinical Discrepancies in Health Coaching Agents: A Dual-Stream Memory and Reconciliation Architecture

arXiv cs.LG / 5/1/2026


Key Points

  • The paper addresses a safety problem for LLM health-coaching agents that use persistent memory: patient self-reports can be biased or outdated, while EHR data is authoritative but often stale.
  • It proposes a Dual-Stream Memory Architecture that keeps the patient narrative separate from the structured clinical record (FHIR), and uses a dedicated Reconciliation Engine to compare and classify discrepancies.
  • The Reconciliation Engine evaluates extracted memories against the patient’s FHIR profile and labels gaps by discrepancy type, severity, and which FHIR resources are involved.
  • Experiments on 26 patients across 675 longitudinal wellness-coaching sessions show the engine detected 84.4% of designed clinical discrepancies, with 86.7% safety-critical recall.
  • The authors quantify a 13.6% error cascade and find it largely stems from clinical details lost during memory extraction from unstructured conversation rather than from later classification steps.

Abstract

As Large Language Model (LLM) agents transition from single-session tools to persistent systems managing longitudinal healthcare journeys, their memory architectures face a critical challenge: reconciling two imperfect sources of truth. The patient's evolving self-report is current but prone to recall bias, while the Electronic Health Record (EHR) is medically validated but frequently stale. General-purpose agent memory systems optimize for coherence by overwriting older facts with the user's latest statement, a pattern that risks safety failures when applied to clinical data. We introduce a Dual-Stream Memory Architecture that strictly separates the patient narrative from the structured clinical record (FHIR), governed by a dedicated Reconciliation Engine that evaluates every extracted memory against the patient's FHIR profile and classifies discrepancies by type, severity, and the specific FHIR resources involved. We evaluate this architecture on 26 patients across 675 longitudinal wellness coaching sessions, using a hybrid dataset that interleaves real provider-patient transcripts with synthetic, FHIR-grounded clinical scenarios. In isolated testing, the engine detects 84.4% of designed clinical discrepancies with 86.7% safety-critical recall. By coupling extraction and reconciliation evaluation on the same data, we directly quantify a 13.6% error cascade, tracing the degradation to clinical details lost during memory extraction from unstructured conversation rather than to downstream classification errors. These findings establish that validating patient-reported memories against clinical records is both feasible and necessary for safe deployment of longitudinal health agents.
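To make the reconciliation step concrete, here is a minimal sketch of how an extracted memory might be checked against a FHIR-derived record and labeled by discrepancy type, severity, and the resources involved. All names here (`PatientMemory`, `FhirFact`, `reconcile`, the severity labels) are invented for illustration and are not taken from the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class PatientMemory:
    """A fact extracted from the patient's conversational narrative."""
    topic: str   # e.g. "medication:metformin"
    value: str   # e.g. "stopped taking"

@dataclass
class FhirFact:
    """A fact from the structured clinical record (FHIR stream)."""
    resource_type: str  # e.g. "MedicationStatement"
    topic: str
    value: str

# Assumed here: conflicts touching these resource types are safety-critical.
SAFETY_CRITICAL_RESOURCES = {"MedicationStatement", "AllergyIntolerance"}

def reconcile(memory: PatientMemory, record: list[FhirFact]) -> dict:
    """Compare one extracted memory against the FHIR profile and label
    any gap by type, severity, and the FHIR resources involved."""
    matches = [f for f in record if f.topic == memory.topic]
    if not matches:
        # The patient mentions something the record does not track at all.
        return {"type": "missing_from_record", "severity": "low", "resources": []}
    conflicting = [f for f in matches if f.value != memory.value]
    if not conflicting:
        return {"type": "consistent", "severity": "none",
                "resources": [f.resource_type for f in matches]}
    severity = ("safety_critical"
                if any(f.resource_type in SAFETY_CRITICAL_RESOURCES
                       for f in conflicting)
                else "moderate")
    return {"type": "conflict", "severity": severity,
            "resources": [f.resource_type for f in conflicting]}

# Example: the patient says they stopped metformin, but the record
# still lists it as active, so this surfaces as a safety-critical conflict.
record = [FhirFact("MedicationStatement", "medication:metformin", "active")]
mem = PatientMemory("medication:metformin", "stopped taking")
result = reconcile(mem, record)
```

The design choice mirrored here is the paper's separation of streams: the narrative memory is never written into the clinical record; it is only compared against it, and the comparison result, rather than an overwrite, is what the agent acts on.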