Benchmarking CNN-based Models against Transformer-based Models for Abdominal Multi-Organ Segmentation on the RATIC Dataset
arXiv cs.CV / 3/20/2026
Key Points
- The study benchmarks three hybrid transformer-based models (UNETR, SwinUNETR, UNETR++) against a CNN baseline (SegResNet) for volumetric multi-organ segmentation on the RATIC dataset, which comprises 206 CT scans from 23 institutions annotated for five abdominal organs.
- Under identical preprocessing and training conditions, the CNN-based SegResNet achieves the highest overall Dice Similarity Coefficient, outperforming all transformer-based models on all organs.
- Among the transformer approaches, UNETR++ is the most competitive in accuracy, while UNETR converges in fewer training iterations.
- The findings imply that for small- to medium-sized heterogeneous datasets, well-optimized CNN architectures can remain highly competitive and may surpass hybrid transformer designs.
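The comparisons above are reported in terms of the Dice Similarity Coefficient (DSC), the standard overlap metric for segmentation: twice the intersection of predicted and ground-truth masks divided by the sum of their sizes. A minimal sketch of the per-organ computation on flattened binary masks (the toy masks below are illustrative, not from the paper):

```python
def dice_coefficient(pred, target):
    """Dice Similarity Coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as perfect agreement
    return 2.0 * intersection / total if total else 1.0

# Toy flattened voxel masks standing in for one organ's volume
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

In a multi-organ setting such as this benchmark, the DSC is typically computed per organ and then averaged across organs and scans to yield the overall score the study reports.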