Learning from Disagreement: Clinician Overrides as Implicit Preference Signals for Clinical AI in Value-Based Care
arXiv cs.LG / 5/1/2026
Key Points
- The paper reframes clinician overrides of clinical AI recommendations as implicit preference signals, extending the idea behind RLHF to settings where expert decisions have real downstream consequences.
- It introduces a formal preference-learning framework that models overrides using patient state, organizational context, and clinician capability, decomposed into execution and alignment competencies.
- The authors propose a dual learning architecture that jointly trains a reward model and a capability model via alternating optimization to reduce a failure mode called “suppression bias.”
- They argue that outcome-based payment and chronic disease management generate override data with unusually strong properties (longitudinal density, focused decision space, outcome labels, and capability variation) that support learning rewards aligned to patient trajectories.
- The authors report that the framework originated from operational work in a live value-based care deployment focused on improving clinician capability.
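The dual learning idea in the key points — alternating between a reward model fit on capability-weighted override preferences and a capability model updated from outcome agreement — can be sketched in toy form. Everything here is an illustrative assumption, not the paper's actual method: the scalar reward parameterization, the Bradley–Terry-style preference update, and the per-clinician capability scores are invented for exposition only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(records, n_clinicians, rounds=50, lr=0.1):
    """Alternating optimization sketch (illustrative, not the paper's method).

    records: list of (clinician_id, patient_feature, ai_action,
                      clinician_action, outcome) tuples, where an override
    is read as "clinician_action preferred over ai_action".
    """
    w = 0.0                      # toy reward model: r(s, a) = w * s * a
    cap = [0.5] * n_clinicians   # per-clinician capability in [0, 1]
    for _ in range(rounds):
        # Step 1: fit the reward model on capability-weighted preference
        # pairs (Bradley-Terry style gradient on the preference margin).
        for (k, s, a_ai, a_doc, outcome) in records:
            margin = w * s * a_doc - w * s * a_ai
            grad = cap[k] * (1 - sigmoid(margin)) * (s * a_doc - s * a_ai)
            w += lr * grad
        # Step 2: update capability from agreement between the reward
        # model's ranking and the observed outcome; low-capability
        # clinicians' overrides are down-weighted in the next round,
        # which is one way to counter "suppression bias".
        for (k, s, a_ai, a_doc, outcome) in records:
            pred_better = w * s * a_doc > w * s * a_ai
            agree = 1.0 if pred_better == (outcome > 0) else 0.0
            cap[k] += lr * (agree - cap[k])
            cap[k] = min(1.0, max(0.0, cap[k]))
    return w, cap
```

On toy data where one clinician's overrides lead to good outcomes and another's do not, the capability estimates diverge and the reward model is driven mostly by the high-capability clinician's overrides — the qualitative behavior the key points describe.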