SiMing-Bench: Evaluating Procedural Correctness from Continuous Interactions in Clinical Skill Videos

arXiv cs.CL / 4/13/2026


Key Points

  • The paper introduces SiMing-Bench, a benchmark designed to evaluate whether multimodal LLMs can judge procedural correctness by tracking how continuous interactions update the underlying procedural state across full-length clinical skill videos.
  • SiMing-Bench is instantiated with SiMing-Score, a physician-annotated dataset of real clinical skill examination videos (CPR, AED operation, bag-mask ventilation), each paired with a standardized step-wise rubric and dual-expert labels.
  • Results across a range of open- and closed-source MLLMs show consistently weak agreement with physician judgments, indicating limited capability for interaction-driven, state-dependent procedural evaluation.
  • The study finds that even when overall procedure-level correlation looks acceptable, models often still fail on rubric-defined intermediate steps, implying that global scoring can hide weaknesses in true procedural judgment (see the sketch after this list).
  • Additional analyses suggest the key bottleneck is not just fine-grained scoring or temporal localization, but the modeling of procedural state updates over time from ongoing interaction cues.
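
A minimal synthetic sketch of the masking effect described above (not from the paper; the step indices, video counts, and error pattern are made up for illustration): a toy scorer that copies the expert judgment on most rubric steps but guesses at random on a few state-dependent ones can still show a seemingly acceptable total-score correlation, even though agreement on those specific steps sits near chance.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_videos, n_steps = 200, 10
state_steps = [8, 9]  # hypothetical "state-dependent" rubric steps

# Expert binary step judgments (1 = step performed correctly).
expert = rng.integers(0, 2, size=(n_videos, n_steps))

# Toy model: copies the expert on the other steps but guesses at random
# on the state-dependent ones, so its errors concentrate in a few items.
model = expert.copy()
model[:, state_steps] = rng.integers(0, 2, size=(n_videos, len(state_steps)))

# Procedure-level view: rank correlation of total rubric scores per video.
rho, _ = spearmanr(expert.sum(axis=1), model.sum(axis=1))

# Step-level view: per-step agreement with the expert labels.
per_step_acc = (expert == model).mean(axis=0)

print(f"procedure-level Spearman rho: {rho:.2f}")          # looks acceptable
print(f"state-dependent step agreement: {per_step_acc[state_steps]}")  # near chance
```

This is only a toy construction under the assumptions stated above; it is not the paper's evaluation protocol, but it shows why reporting a single procedure-level correlation can overstate step-wise procedural judgment.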

Abstract

Current video benchmarks for multimodal large language models (MLLMs) focus on event recognition, temporal ordering, and long-context recall, but overlook a harder capability required for expert procedural judgment: tracking how ongoing interactions update the procedural state and thereby determine the correctness of later actions. We introduce SiMing-Bench, the first benchmark for evaluating this capability from full-length clinical skill videos. It targets rubric-grounded process-level judgment of whether interaction-driven state updates preserve procedural correctness across an entire workflow. SiMing-Bench is instantiated with SiMing-Score, a physician-annotated dataset of real clinical skill examination videos spanning cardiopulmonary resuscitation, automated external defibrillator operation, and bag-mask ventilation, each paired with a standardized step-wise rubric and dual-expert labels. Across diverse open- and closed-source MLLMs, we observe consistently weak agreement with physician judgments. Moreover, weak performance on rubric-defined intermediate steps persists even when overall procedure-level correlation appears acceptable, suggesting that coarse global assessment substantially overestimates current models' procedural judgment ability. Additional analyses with binary step judgment and step-aligned clips indicate that the bottleneck is not merely fine-grained scoring or temporal localization, but modeling how continuous interactions update procedural state over time.