A Comparative Study in Surgical AI: Datasets, Foundation Models, and Barriers to Med-AGI

arXiv cs.AI / March 31, 2026


Key Points

  • The paper compares how current surgical AI systems perform across datasets and foundation model approaches, arguing that surgical image analysis remains behind other biomedical AI benchmarks.
  • It highlights key barriers specific to surgery, including the need for multimodal integration, human interaction, and accounting for physical effects during procedures.
  • In a case study on neurosurgical tool detection, the study finds that even multi-billion-parameter Vision-Language Models and extensive training still underperform on the task.
  • Scaling experiments show diminishing returns from increasing model size and training time, implying that additional compute alone is unlikely to close performance gaps.
  • The authors conclude that these obstacles persist across diverse architectures, suggesting that data and label availability alone cannot explain the performance gap, and they propose potential solutions.

Abstract

Recent Artificial Intelligence (AI) models have matched or exceeded human experts in several benchmarks of biomedical task performance, but have lagged behind on surgical image-analysis benchmarks. Since surgery requires integrating disparate tasks -- including multimodal data integration, human interaction, and physical effects -- generally capable AI models could be particularly attractive as a collaborative tool if performance could be improved. On the one hand, the canonical approach of scaling architecture size and training data is attractive, especially since millions of hours of surgical video data are generated per year. On the other hand, preparing surgical data for AI training requires significantly higher levels of professional expertise, and training on that data requires expensive computational resources. These trade-offs paint an uncertain picture of whether, and to what extent, modern AI could aid surgical practice. In this paper, we explore this question through a case study of surgical tool detection using state-of-the-art AI methods available in 2026. We demonstrate that even with multi-billion-parameter models and extensive training, current Vision-Language Models fall short in the seemingly simple task of tool detection in neurosurgery. Additionally, we show scaling experiments indicating that increasing model size and training time only leads to diminishing improvements in relevant performance metrics. Thus, our experiments suggest that current models could still face significant obstacles in surgical use cases. Moreover, some obstacles cannot simply be "scaled away" with additional compute and persist across diverse model architectures, raising the question of whether data and label availability are the only limiting factors. We discuss the main contributors to these constraints and advance potential solutions.