Stop Listening to Me! How Multi-turn Conversations Can Degrade Diagnostic Reasoning
arXiv cs.CL · March 13, 2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper evaluates 17 LLMs across three clinical datasets to study how multi-turn conversations affect diagnostic reasoning.
- It introduces a 'stick-or-switch' framework to measure model conviction and flexibility across conversation turns (see the sketch after this list).
- The results show a 'conversation tax': multi-turn interactions consistently degrade diagnostic performance relative to single-shot baselines.
- Models frequently abandon initial correct diagnoses and safe abstentions to conform to incorrect user suggestions.
- Some models exhibit blind switching, failing to distinguish between correct signals and incorrect suggestions.
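A minimal sketch of what such a 'stick-or-switch' evaluation might look like in code, assuming each record carries the gold diagnosis, the model's single-shot answer, and its answer after a user pushback turn. The field names, behavior labels, and `conversation_tax` helper are illustrative assumptions, not the paper's actual harness or metric definitions.

```python
from collections import Counter

def classify_turn(gold: str, initial: str, revised: str) -> str:
    """Label one conversation by how the model handled user pushback.

    Hypothetical labels, not the paper's taxonomy: 'switch_from_correct'
    captures the sycophantic failure mode the key points describe.
    """
    if initial == gold:
        return "stick_correct" if revised == gold else "switch_from_correct"
    return "switch_to_correct" if revised == gold else "stay_incorrect"

def conversation_tax(records: list[dict]) -> float:
    """Accuracy drop from single-shot to multi-turn (positive = degradation)."""
    single_shot = sum(r["initial"] == r["gold"] for r in records) / len(records)
    multi_turn = sum(r["revised"] == r["gold"] for r in records) / len(records)
    return single_shot - multi_turn

# Toy records: the second conversation abandons a correct diagnosis
# after an (incorrect) user suggestion.
records = [
    {"gold": "pneumonia", "initial": "pneumonia", "revised": "pneumonia"},
    {"gold": "pneumonia", "initial": "pneumonia", "revised": "bronchitis"},
    {"gold": "sepsis",    "initial": "migraine",  "revised": "sepsis"},
]

print(Counter(classify_turn(r["gold"], r["initial"], r["revised"]) for r in records))
print(f"conversation tax: {conversation_tax(records):+.2f}")
```

On this toy data the gained and lost answers happen to cancel out; the paper's finding is that across its datasets the tax is consistently positive, i.e. multi-turn accuracy falls below the single-shot baseline.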