Sum-of-Checks: Structured Reasoning for Surgical Safety with Large Vision-Language Models

arXiv cs.LG / April 27, 2026


Key Points

  • The paper introduces “Sum-of-Checks,” a framework that breaks the Critical View of Safety (CVS) criteria in laparoscopic cholecystectomy into expert-defined, clinically grounded visual verification checks.
  • For each endoscopic frame, large vision-language models (LVLMs) perform binary judgments with justifications for every check, and the framework computes criterion-level scores via fixed, weighted aggregation.
  • On the Endoscapes2023 benchmark using three frontier LVLMs, Sum-of-Checks improves average frame-level mean average precision by 12–14% versus the strongest baselines, including direct prompting, chain-of-thought, and sub-question decomposition.
  • The authors find LVLMs are more reliable on observational checks (e.g., visibility and tool obstruction) but vary significantly on decision-critical anatomical evidence, highlighting where structured reasoning helps most.
  • The study concludes that separating evidence elicitation from decision-making increases both accuracy and auditability for safety-critical surgical AI systems.
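The fixed, weighted aggregation described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the check names and weights are hypothetical, and each outcome stands in for an LVLM's binary judgment on one verification check.

```python
# Hedged sketch of Sum-of-Checks-style aggregation.
# Check names and weights below are illustrative assumptions,
# not the paper's actual checks or configuration.

def criterion_score(check_outcomes, weights):
    """Combine binary check judgments (0 or 1) into a criterion-level
    score via a fixed, weighted aggregation (a weighted mean here)."""
    total = sum(weights.values())
    return sum(weights[c] * check_outcomes[c] for c in weights) / total

# Hypothetical checks for one CVS criterion; each value would come
# from an LVLM's binary judgment plus justification for that check.
outcomes = {"structure_visible": 1, "no_tool_obstruction": 1, "anatomy_identified": 0}
weights  = {"structure_visible": 1.0, "no_tool_obstruction": 0.5, "anatomy_identified": 2.0}

score = criterion_score(outcomes, weights)  # 1.5 / 3.5 ≈ 0.43
```

Because the aggregation is fixed rather than learned or model-decided, each criterion score can be traced back to the individual check judgments and their justifications, which is what makes the pipeline auditable.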

Abstract

Purpose: Accurate assessment of the Critical View of Safety (CVS) during laparoscopic cholecystectomy is essential to prevent bile duct injury, a complication associated with significant morbidity and mortality. While large vision-language models (LVLMs) offer flexible reasoning, their predictions remain difficult to audit and unreliable on safety-critical surgical tasks.

Methods: We introduce Sum-of-Checks, a framework that decomposes each CVS criterion into expert-defined reasoning checks reflecting clinically relevant visual evidence. Given a laparoscopic frame, an LVLM evaluates each check, producing a binary judgment and justification. Criterion-level scores are computed via fixed, weighted aggregation of check outcomes. We evaluate on the Endoscapes2023 benchmark using three frontier LVLMs, comparing against direct prompting, chain-of-thought, and sub-question decomposition, each with and without few-shot examples.

Results: Sum-of-Checks improves average frame-level mean average precision by 12–14% relative to the best baseline across all three models and criteria. Analysis of individual checks reveals that LVLMs are reliable on observational checks (e.g., visibility, tool obstruction) but show substantial variability on decision-critical anatomical evidence.

Conclusion: Structuring surgical reasoning into expert-aligned verification checks improves both accuracy and transparency of LVLM-based CVS assessment, demonstrating that explicitly separating evidence elicitation from decision-making is critical for reliable and auditable surgical AI systems. Code is available at https://github.com/BrachioLab/SumOfChecks.