From Black Box to Glass Box: Cross-Model ASR Disagreement to Prioritize Review in Ambient AI Scribe Documentation

arXiv cs.AI / 4/17/2026


Key Points

  • The study evaluates whether cross-model disagreement among multiple heterogeneous ASR systems can serve as a reference-free uncertainty signal for prioritizing human review in ambient AI medical scribe workflows.
  • The authors transcribed 50 publicly available medical education audio clips (8 h 14 min) with eight heterogeneous ASR systems, aligned the multi-model outputs, and constructed consensus pseudo-references to measure token-level agreement (see the sketch after this list).
  • Inter-model reliability was low (ICC[2,1] = 0.131), suggesting the systems fail in diverse ways, which makes their disagreement potentially informative.
  • Most evaluated token positions showed near-unanimous agreement (72.1%; 7–8 models), while 2.5% fell into high-risk bands (0–3 models); high-risk mass varied from 0.7% to 11.4% across accent groups, and disagreements skewed toward content errors in the riskiest regions.
  • The results indicate cross-model disagreement can localize likely unreliable transcript spans without human reference transcripts, though clinical accuracy of the flagged segments still needs validation.
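
The majority-strength signal at the core of these findings is simple to sketch. The snippet below is a minimal, hypothetical illustration in Python: it assumes token positions have already been aligned across the eight systems (this summary does not detail the paper's alignment procedure), and all names are invented for illustration. It scores one aligned position by how many models agree with the most common token, then bins the count into the bands the study reports.

```python
from collections import Counter

# Hypothetical sketch: score one aligned token position by "majority
# strength" -- how many of the 8 models agree with the most common
# token -- then bin the count into the bands reported in the paper.

GAP = None  # stands in for a model that emitted nothing at this position

def majority_strength(column):
    """Return (consensus_token, n_agreeing) for one aligned column."""
    votes = Counter(tok for tok in column if tok is not GAP)
    if not votes:
        return GAP, 0
    token, count = votes.most_common(1)[0]
    return token, count

def risk_band(n_agreeing):
    """Agreement bands as reported in the study (8 models total)."""
    if n_agreeing >= 7:
        return "near-unanimous"  # 72.1% of the 76,398 positions
    if n_agreeing <= 3:
        return "high-risk"       # 2.5% of positions
    return "intermediate"

# Toy column: 8 models transcribing one aligned position.
column = ["metformin"] * 5 + ["metforman", "met", GAP]
token, n = majority_strength(column)
print(token, n, risk_band(n))  # -> metformin 5 intermediate
```

In a real pipeline the alignment step dominates the engineering effort; once aligned columns exist, the scoring itself is a vote count.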

Abstract

Ambient AI "scribe" systems promise to reduce clinical documentation burden, but automatic speech recognition (ASR) errors can remain unnoticed without careful review, and high-quality human reference transcripts are often unavailable for calibrating uncertainty. We investigate whether cross-model disagreement among heterogeneous ASR systems can act as a reference-free uncertainty signal to prioritize human verification in medical transcription workflows. Using 50 publicly available medical education audio clips (8 h 14 min), we transcribed each clip with eight ASR systems spanning commercial APIs and open-source engines. We aligned multi-model outputs, built consensus pseudo-references, and quantified token-level agreement using a majority-strength metric; we further characterized disagreements by type (content vs. punctuation/formatting) and assessed per-model agreement via leave-one-model-out (jackknife) consensus scoring. Inter-model reliability was low (ICC[2,1] = 0.131), indicating heterogeneous failure modes across systems. Across 76,398 evaluated token positions, 72.1% showed near-unanimous agreement (7-8 models), while 2.5% fell into high-risk bands (0-3 models), with high-risk mass varying from 0.7% to 11.4% across accent groups. Low-agreement regions were enriched for content disagreements, with the content fraction increasing from 53.9% to 73.9% across quintiles of high-risk mass. These results suggest that cross-model disagreement provides a sparse, localizable signal that can surface potentially unreliable transcript spans without human-verified references, enabling targeted review; clinical accuracy of flagged regions remains to be established.
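
The abstract's per-model check, leave-one-model-out (jackknife) consensus scoring, can be illustrated in the same style. The sketch below is an assumption-laden reconstruction, not the paper's implementation: to evaluate model m, it rebuilds the consensus from the remaining seven systems and measures how often m matches it. Data layout, tie-breaking, and gap handling are all illustrative choices.

```python
from collections import Counter

# Hypothetical jackknife sketch: hold one model out of the consensus
# and ask how often that model's tokens match what the others agree on.

GAP = None  # model emitted nothing at this position

def consensus(column, exclude=None):
    """Majority token over a column, optionally holding one model out."""
    votes = Counter(
        tok for i, tok in enumerate(column)
        if i != exclude and tok is not GAP
    )
    return votes.most_common(1)[0][0] if votes else GAP

def jackknife_agreement(aligned_columns, model_idx):
    """Fraction of positions where model_idx matches the held-out consensus."""
    hits = total = 0
    for column in aligned_columns:
        ref = consensus(column, exclude=model_idx)
        if ref is GAP:
            continue  # no usable consensus without this model's vote
        total += 1
        hits += column[model_idx] == ref
    return hits / total if total else float("nan")

# Toy data: two aligned positions across 8 models; we score model 6.
aligned = [
    ["bp", "bp", "bp", "bp", "bp", "bp", "blood", "bp"],
    ["120", "120", "120", "120", "12", "120", "120", GAP],
]
print(jackknife_agreement(aligned, model_idx=6))  # -> 0.5
```

Under this reading, a model with low jackknife agreement is either weaker overall or failing in ways uncorrelated with its peers; the low reported ICC[2,1] points toward the latter pattern, which is what makes disagreement a usable review signal.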