Benchmarking CNN- and Transformer-Based Models for Surgical Instrument Segmentation in Robotic-Assisted Surgery

arXiv cs.CV / 4/13/2026


Key Points

  • The study benchmarks five deep learning architectures (UNet, Attention UNet, DeepLabV3, and SegFormer variants) for multi-class semantic segmentation of surgical instruments using the SAR-RARP50 dataset.
  • Models are trained with a compound Cross Entropy + Dice loss to handle class imbalance and improve boundary delineation in real-world radical prostatectomy videos.
  • Convolutional baselines like UNet and Attention UNet perform strongly, but DeepLabV3 shows results comparable to SegFormer thanks to atrous convolution and multi-scale context aggregation.
  • Transformer-based SegFormer models provide better global contextual understanding, improving generalization across different instrument appearances and surgical conditions.
  • The paper offers practical guidance on model selection for surgical AI, emphasizing trade-offs between convolutional local feature processing and transformer-based global context modeling.

Abstract

Accurate segmentation of surgical instruments in robotic-assisted surgery is critical for enabling context-aware computer-assisted interventions, such as tool tracking, workflow analysis, and autonomous decision-making. In this study, we benchmark five deep learning architectures (UNet, Attention UNet, DeepLabV3, and SegFormer variants) on the SAR-RARP50 dataset for multi-class semantic segmentation of surgical instruments in real-world radical prostatectomy videos. The models are trained with a compound loss function combining Cross Entropy and Dice loss to address class imbalance and capture fine object boundaries. Our experiments reveal that while convolutional models such as UNet and Attention UNet provide strong baseline performance, DeepLabV3 achieves results comparable to SegFormer, demonstrating the effectiveness of atrous convolution and multi-scale context aggregation in capturing complex surgical scenes. Transformer-based architectures like SegFormer further enhance global contextual understanding, leading to improved generalization across varying instrument appearances and surgical conditions. This work provides a comprehensive comparison and practical insights for selecting segmentation models in surgical AI applications, highlighting the trade-offs between convolutional and transformer-based approaches.
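Benchmarks like this one typically score predictions against ground-truth masks with class-wise overlap metrics. As a hedged illustration (the paper's exact metric and averaging scheme are not stated here), a minimal mean Intersection-over-Union computation might look like:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes for one predicted label map.

    pred, target: integer label maps of identical shape, e.g. (H, W).
    Classes absent from both prediction and target are skipped,
    a common convention (an assumption here, not taken from the paper).
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent everywhere: skip it
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

Reporting per-class IoU alongside the mean is useful in surgical scenes, where thin or rarely visible instruments can be masked by a high average driven by the background class.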