ThReadMed-QA: A Multi-Turn Medical Dialogue Benchmark from Real Patient Questions

arXiv cs.CL / March 13, 2026

Key Points

  • ThReadMed-QA introduces a benchmark of 2,437 fully-answered patient-physician conversations from r/AskDocs, totaling 8,204 QA pairs across up to 9 turns.
  • The benchmark uses a physician-grounded, calibrated rubric to evaluate five state-of-the-art LLMs (GPT-5, GPT-4o, Claude Haiku, Gemini 2.5 Flash, and Llama 3.3 70B) on a stratified test subset.
  • Results show GPT-5 reaches only 41.2% fully-correct responses, with all models' accuracy deteriorating from turn 0 to turn 2 and wrong-answer rates roughly tripling by turn 3.
  • The paper introduces two multi-turn failure metrics, the Conversational Consistency Score (CCS) and the Error Propagation Rate (EPR), and reveals that stronger initial performers are more prone to steep declines and error propagation in longer dialogues.

Abstract

Medical question-answering benchmarks predominantly evaluate single-turn exchanges, failing to capture the iterative, clarification-seeking nature of real patient consultations. We introduce ThReadMed-QA, a benchmark of 2,437 fully-answered patient-physician conversation threads extracted from r/AskDocs, comprising 8,204 question-answer pairs across up to 9 turns. Unlike prior work relying on simulated dialogues, adversarial prompts, or exam-style questions, ThReadMed-QA captures authentic patient follow-up questions and verified physician responses, reflecting how patients naturally seek medical information online. We evaluate five state-of-the-art LLMs -- GPT-5, GPT-4o, Claude Haiku, Gemini 2.5 Flash, and Llama 3.3 70B -- on a stratified test split of 238 conversations (948 QA pairs) using a calibrated LLM-as-a-judge rubric grounded in physician ground truth. Even the strongest model, GPT-5, achieves only 41.2% fully-correct responses. All five models degrade significantly from turn 0 to turn 2 (p < 0.001), with wrong-answer rates roughly tripling by the third turn. We identify a fundamental tension between single-turn capability and multi-turn reliability: models with the strongest initial performance (GPT-5: 75.2; Claude Haiku: 72.3 out of 100) exhibit the steepest declines by turn 2 (dropping 16.2 and 25.0 points respectively), while weaker models plateau or marginally improve. We introduce two metrics to quantify multi-turn failure modes: Conversational Consistency Score (CCS) and Error Propagation Rate (EPR). CCS reveals that nearly one in three Claude Haiku conversations swings between a fully correct and a completely wrong response within the same thread. EPR shows that a single wrong turn raises the probability of a subsequent wrong turn by 1.9-6.1x across all models.
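The abstract does not give the exact formulas behind EPR and CCS, but the reported behavior suggests how they could be computed from per-turn grades. The sketch below is illustrative only: it assumes EPR is the conditional-probability ratio P(wrong at turn t+1 | wrong at turn t) / P(wrong at turn t+1 | not wrong at turn t), and it flags a CCS-style "swing" when a single conversation contains both a fully correct and a completely wrong turn. The function names, the three-level grade labels, and both definitions are assumptions, not the paper's specification.

```python
from typing import List

# Each conversation thread is a list of per-turn grades in turn order,
# e.g. ["correct", "partial", "wrong"].

def error_propagation_ratio(threads: List[List[str]]) -> float:
    """How much a wrong turn inflates the chance of a wrong next turn.

    Returns P(next wrong | prev wrong) / P(next wrong | prev not wrong).
    A value of ~1.9-6.1, as the paper reports, means errors beget errors.
    """
    wrong_then_wrong = wrong_total = 0
    ok_then_wrong = ok_total = 0
    for grades in threads:
        # Walk adjacent (prev, next) turn pairs within one conversation.
        for prev, nxt in zip(grades, grades[1:]):
            if prev == "wrong":
                wrong_total += 1
                wrong_then_wrong += nxt == "wrong"
            else:
                ok_total += 1
                ok_then_wrong += nxt == "wrong"
    p_after_wrong = wrong_then_wrong / wrong_total
    p_after_ok = ok_then_wrong / ok_total
    return p_after_wrong / p_after_ok

def has_consistency_swing(grades: List[str]) -> bool:
    """A CCS-style flag: the same thread holds both extremes of quality."""
    return "correct" in grades and "wrong" in grades
```

For example, on two toy threads — one that stays correct throughout and one that decays into errors — the ratio comes out above 1, reflecting the error-propagation effect the paper measures; the swing flag would fire on roughly one in three Claude Haiku conversations under the reported CCS result.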