Doctorina MedBench: End-to-End Evaluation of Agent-Based Medical AI

arXiv cs.AI / 3/30/2026


Key Points

  • Doctorina MedBench proposes a framework for end-to-end evaluation of agent-based medical AI by simulating realistic physician-patient dialogues.
  • The benchmark targets multi-step clinical dialogue, from history taking through analysis of tests and findings, differential diagnosis, and personalized recommendations, measuring both clinical correctness and dialogue efficiency with the D.O.T.S. metric (Diagnosis / Observations & Investigations / Treatment / Step Count).
  • It includes a multi-level testing and quality-monitoring architecture for detecting model degradation during both development and deployment, with safety-oriented trap cases, category-based random sampling, and full regression testing.
  • The dataset comprises more than 1,000 clinical cases covering over 750 diagnoses, and the authors note it can be used not only for medical AI but also for evaluating physicians and developing clinical reasoning skills.
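The category-based random sampling with safety-oriented trap cases could be sketched as follows. The paper does not publish its sampler, so every name and the trap ratio below are illustrative assumptions, not the authors' implementation:

```python
import random

def sample_cases(cases_by_category, trap_cases, n_per_category, trap_ratio, seed=0):
    """Hypothetical sketch: draw cases per clinical category, then mix in
    safety-oriented 'trap' cases at a fixed ratio of the sampled set."""
    rng = random.Random(seed)  # fixed seed keeps regression runs reproducible
    sampled = []
    for category, cases in cases_by_category.items():
        k = min(n_per_category, len(cases))
        sampled.extend(rng.sample(cases, k))
    # Interleave trap cases proportionally to the sampled scenario count.
    n_traps = min(int(len(sampled) * trap_ratio), len(trap_cases))
    sampled.extend(rng.sample(trap_cases, n_traps))
    rng.shuffle(sampled)
    return sampled
```

A fixed seed makes the same evaluation set reproducible across runs, which is what full regression testing over releases would require.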

Abstract

We present Doctorina MedBench, a comprehensive evaluation framework for agent-based medical AI based on the simulation of realistic physician-patient interactions. Unlike traditional medical benchmarks that rely on solving standardized test questions, the proposed approach models a multi-step clinical dialogue in which either a physician or an AI system must collect medical history, analyze attached materials (including laboratory reports, images, and medical documents), formulate differential diagnoses, and provide personalized recommendations. System performance is evaluated using the D.O.T.S. metric, which consists of four components: Diagnosis, Observations/Investigations, Treatment, and Step Count, enabling assessment of both clinical correctness and dialogue efficiency. The system also incorporates a multi-level testing and quality monitoring architecture designed to detect model degradation during both development and deployment. The framework supports safety-oriented trap cases, category-based random sampling of clinical scenarios, and full regression testing. The dataset currently contains more than 1,000 clinical cases covering over 750 diagnoses. The universality of the evaluation metrics allows the framework to be used not only to assess medical AI systems, but also to evaluate physicians and support the development of clinical reasoning skills. Our results suggest that simulation of clinical dialogue may provide a more realistic assessment of clinical competence compared to traditional examination-style benchmarks.
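A D.O.T.S.-style score combining the four components named above might look like the following sketch. The abstract does not give the aggregation formula, so the equal weighting of the clinical components and the step-budget efficiency factor are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class DotsScore:
    """Hypothetical D.O.T.S. aggregation; weights and penalty are assumed."""
    diagnosis: float     # correctness of the differential diagnosis, in [0, 1]
    observations: float  # appropriateness of requested investigations, in [0, 1]
    treatment: float     # quality of the personalized recommendations, in [0, 1]
    steps: int           # dialogue turns the system actually used
    max_steps: int       # step budget allowed for the scenario

    def total(self) -> float:
        # Average the three clinical components, then scale by dialogue
        # efficiency; finishing under budget is capped at a factor of 1.0.
        clinical = (self.diagnosis + self.observations + self.treatment) / 3
        efficiency = min(1.0, self.max_steps / max(self.steps, 1))
        return clinical * efficiency
```

Tying Step Count into the score this way captures the paper's stated goal of rewarding clinical correctness and dialogue efficiency jointly, rather than correctness alone.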