MedConceal: A Benchmark for Clinical Hidden-Concern Reasoning Under Partial Observability

arXiv cs.CL / April 13, 2026


Key Points

  • MedConceal is introduced as a new benchmark for evaluating clinical dialogue systems that must reason under partial observability, where patients’ hidden fears or barriers are not disclosed unless elicited skillfully.
  • The benchmark uses an interactive patient simulator that withholds latent concerns, tracks whether clinicians reveal and address them, and assesses process-aware turn-level communication signals in addition to end-task outcomes.
  • It includes 300 curated cases (built from clinician-answered online health discussions) and 600 clinician–LLM interaction logs, with hidden concerns derived from prior literature and organized using an expert-developed taxonomy.
  • Experiments on two key abilities—confirmation (multi-turn surfacing of concerns) and intervention (addressing the concern and guiding to a target care plan)—find no single system dominates across metrics.
  • The study reports frontier models performing best on certain confirmation measures, while human clinicians remain strongest on intervention success, highlighting hidden-concern reasoning as an open challenge for medical dialogue.

Abstract

Patient-clinician communication is an asymmetric-information problem: patients often do not disclose fears, misconceptions, or practical barriers unless clinicians elicit them skillfully. Effective medical dialogue therefore requires reasoning under partial observability: clinicians must elicit latent concerns, confirm them through interaction, and respond in ways that guide patients toward appropriate care. However, existing medical dialogue benchmarks largely sidestep this challenge by exposing hidden patient state, collapsing elicitation into extraction, or evaluating responses without modeling what remains hidden. We present MedConceal, a benchmark with an interactive patient simulator for evaluating hidden-concern reasoning in medical dialogue, comprising 300 curated cases and 600 clinician–LLM interactions. Built from clinician-answered online health discussions, each case pairs clinician-visible context with simulator-internal hidden concerns derived from prior literature and structured using an expert-developed taxonomy. The simulator withholds these concerns from the dialogue agent, tracks whether they have been revealed and addressed via theory-grounded turn-level communication signals, and is clinician-reviewed for clinical plausibility. This enables process-aware evaluation of both task success and the interaction process that leads to it. We study two abilities: confirmation (surfacing hidden concerns through multi-turn dialogue) and intervention (addressing the primary concern and guiding the patient toward a target plan). Results show that no single system dominates: frontier models lead on different confirmation metrics, while human clinicians (N=159) remain strongest on intervention success. Together, these results identify hidden-concern reasoning under partial observability as a key unresolved challenge for medical dialogue systems.
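To make the evaluation setup concrete, here is a minimal sketch of a simulator loop of the kind the abstract describes: a simulated patient holds hidden concerns, reveals one only when the clinician's turn touches an elicitation cue, and the harness scores both confirmation (was the concern surfaced?) and intervention (was it then addressed?). All names, the keyword-trigger mechanism, and the scoring formulas are illustrative assumptions for this sketch; the actual MedConceal simulator is LLM-based and uses theory-grounded turn-level signals, not keyword matching.

```python
from dataclasses import dataclass, field

@dataclass
class HiddenConcern:
    """One latent concern the patient will not volunteer unprompted."""
    label: str
    trigger_keywords: set          # hypothetical elicitation cues
    revealed: bool = False
    addressed: bool = False

@dataclass
class SimulatedPatient:
    visible_context: str           # what the clinician agent can see
    concerns: list                 # simulator-internal hidden state

    def respond(self, clinician_utterance: str) -> str:
        """Reveal a concern only if the clinician's turn probes for it."""
        text = clinician_utterance.lower()
        for c in self.concerns:
            if not c.revealed and any(k in text for k in c.trigger_keywords):
                c.revealed = True
                return f"Actually, I'm worried about {c.label}."
        return "Okay."             # nothing elicited this turn

    def acknowledge(self, concern_label: str) -> None:
        """Mark a previously revealed concern as addressed by the clinician."""
        for c in self.concerns:
            if c.label == concern_label and c.revealed:
                c.addressed = True

def score(patient: SimulatedPatient) -> dict:
    """Toy process-aware metrics over the hidden state."""
    n = len(patient.concerns)
    return {
        "confirmation_rate": sum(c.revealed for c in patient.concerns) / n,
        "intervention_rate": sum(c.addressed for c in patient.concerns) / n,
    }

# Illustrative episode: the agent only sees visible_context, not the concern.
patient = SimulatedPatient(
    visible_context="45F, hypertension follow-up, new prescription",
    concerns=[HiddenConcern("the cost of the medication", {"cost", "afford"})],
)
patient.respond("Is anything making this hard, for example cost or side effects?")
patient.acknowledge("the cost of the medication")
print(score(patient))
```

The point of the sketch is the evaluation shape, not the dialogue logic: because the concern lives in simulator-internal state, the harness can separately credit the elicitation turn that revealed it and the later turn that addressed it, which is what distinguishes this process-aware setup from end-task-only benchmarks.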