Learning from Disagreement: Clinician Overrides as Implicit Preference Signals for Clinical AI in Value-Based Care

arXiv cs.LG / 5/1/2026


Key Points

  • The paper reframes clinician overrides of clinical AI recommendations as implicit preference signals, extending the idea behind RLHF to settings where expert decisions have real downstream consequences.
  • It introduces a formal preference-learning framework that models overrides using patient state, organizational context, and clinician capability, decomposed into execution and alignment competencies.
  • The authors propose a dual learning architecture that jointly trains a reward model and a capability model via alternating optimization to reduce a failure mode called “suppression bias.”
  • They argue that outcome-based payment and chronic disease management generate override data with unusually strong properties (longitudinal density, focused decision space, outcome labels, and capability variation) that support learning rewards aligned to patient trajectories.
  • The framework is reported to have originated from operational work in a live value-based care deployment aimed at improving clinician capability.
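The capability-conditioned preference formulation is only summarized above. A minimal Bradley-Terry-style sketch of the idea, assuming a simple multiplicative trust weighting of the reward gap (the function name, the trust combination, and the inputs are illustrative assumptions, not the paper's actual formulation):

```python
import math

def override_preference_prob(r_override: float, r_ai: float,
                             kappa_exec: float, kappa_align: float) -> float:
    """Probability that a clinician override reflects a genuine preference
    for the overriding action over the AI recommendation.

    Hypothetical form: the reward gap is tempered by a trust term built
    from the two capability components, so overrides from low-capability
    clinicians carry a weaker preference signal.
    """
    trust = kappa_exec * kappa_align          # illustrative combination
    logit = trust * (r_override - r_ai)       # tempered Bradley-Terry logit
    return 1.0 / (1.0 + math.exp(-logit))
```

With full capability (trust = 1) this reduces to a standard Bradley-Terry comparison of rewards; with zero capability the override becomes uninformative (probability 1/2), rather than being read as evidence against the recommendation.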

Abstract

We reframe clinician overrides of clinical AI recommendations as implicit preference data: the same signal structure exploited by reinforcement learning from human feedback (RLHF), but richer, since the annotator is a domain expert, the alternatives carry real consequences, and downstream outcomes are observable. We present a formal framework extending standard preference learning with three contributions: a five-category override taxonomy mapping override types to distinct model update targets; a preference formulation conditioned on patient state s, organizational context c, and clinician capability κ, where κ decomposes into execution capability κ_exec and alignment capability κ_align; and a dual learning architecture that jointly trains a reward model and a capability model via alternating optimization, preventing a failure mode we term suppression bias: the systematic suppression of correct-but-difficult recommendations when clinician capability falls below the execution threshold. We argue that chronic disease management under outcome-based payment contracts produces override data with uniquely favorable properties (longitudinal density, concentrated decision space, outcome labels, and natural capability variation), and that training environments combining longitudinal outcome measurement with aligned financial incentives are a necessary condition for learning a reward model aligned with patient trajectory rather than with encounter economics. This framework emerged from operational work to improve clinician capability in a live value-based care deployment.
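The dual learning architecture is described only at the level of the abstract here. A toy NumPy sketch of what alternating optimization between a reward model and a capability estimate could look like, where the synthetic data, the logistic preference loss, and the residual-based capability update are all illustrative assumptions rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy override log: per-encounter features and whether the clinician
# overrode the AI recommendation. Entirely synthetic and illustrative.
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -0.5, 0.2])
overrode = (X @ true_w + rng.normal(scale=0.3, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reward_step(X, y, w, sample_w, lr=0.5, steps=100):
    """Gradient descent on a capability-weighted logistic preference loss."""
    for _ in range(steps):
        grad = X.T @ (sample_w * (sigmoid(X @ w) - y)) / len(y)
        w = w - lr * grad
    return w

# Alternating optimization: update the reward model with the current
# capability weights held fixed, then re-estimate capability from how
# well the reward model explains each override (a crude residual heuristic).
w_reward = np.zeros(3)
kappa = np.ones(200)            # per-example capability/trust weights
for _ in range(5):
    w_reward = reward_step(X, overrode, w_reward, kappa)
    resid = np.abs(overrode - sigmoid(X @ w_reward))
    kappa = 1.0 - resid         # down-weight poorly explained overrides
```

In the paper's terms, jointly estimating capability is what keeps low-capability overrides from being treated as fully informative preferences, which is the route to suppression bias; the residual heuristic above is only a stand-in for the capability model.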