Rethinking Health Agents: From Siloed AI to Collaborative Decision Mediators

arXiv cs.AI / 3/27/2026


Key Points

  • The paper argues that current LLM-based health agents often run in siloed ways, failing to support the multi-stakeholder relationships (patients, caregivers, clinicians) that are central to healthcare decisions.
  • Using a clinically validated fictional pediatric chronic kidney disease case, it shows that adherence breakdowns can be driven by fragmented situational awareness and misaligned goals across stakeholders.
  • It reframes AI not as a standalone assistant but as a collaborator embedded in multi-party care interactions, aiming to reduce misalignment and fragmentation.
  • The authors propose a design framework for AI collaborators that surfaces contextual information, reconciles differing mental models, and scaffolds shared understanding while keeping human decision authority intact.

Abstract

Large language model-based health agents are increasingly used by health consumers and clinicians to interpret health information and guide health decisions. However, most AI systems in healthcare operate in siloed configurations, supporting individual users rather than the multi-stakeholder relationships central to healthcare. Such use can fragment understanding and exacerbate misalignment among patients, caregivers, and clinicians. We reframe AI not as a standalone assistant, but as a collaborator embedded within multi-party care interactions. Through a clinically validated fictional pediatric chronic kidney disease case study, we show that breakdowns in adherence stem from fragmented situational awareness and misaligned goals, and that siloed use of general-purpose AI tools does little to address these collaboration gaps. We propose a conceptual framework for designing AI collaborators that surface contextual information, reconcile mental models, and scaffold shared understanding while preserving human decision authority.