RD-ViT: Recurrent-Depth Vision Transformer for Semantic Segmentation with Reduced Data Dependence
Extending the Recurrent-Depth Transformer Architecture to Dense Prediction
arXiv cs.CV / 5/6/2026
Key Points
- RD-ViT is a recurrent-depth Vision Transformer designed for semantic segmentation that reduces reliance on large datasets by replacing unique parameters per layer with a single shared transformer block run T times.
- The model targets dense prediction in both 2D and 3D (including cardiac MRI), using LTI-stable state injection for convergence, Adaptive Computation Time (ACT) to allocate compute across space, and depth-wise LoRA for parameter-efficient adaptation.
- RD-ViT optionally incorporates Mixture-of-Experts (MoE) feed-forward networks to specialize for different semantic regions, with expert utilization emerging without explicit routing supervision.
- Experiments on the ACDC cardiac MRI benchmark show RD-ViT outperforms a standard ViT under reduced data (e.g., Dice 0.774 vs. 0.762 in 2D with 10% of the training data) and achieves strong 3D performance with fewer parameters (Dice 0.812 at 3.0M parameters, about 99.4% of a standard ViT's score with roughly 53% of its parameters).
- The paper also reports efficiency and flexibility benefits: ACT halting maps concentrate computation at cardiac boundaries, and depth extrapolation allows more inference loops than were used during training without performance degradation. Code and notebooks are released publicly.
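The core idea behind the recurrent-depth design and its depth extrapolation can be illustrated with a toy model. The sketch below is an assumption-laden stand-in, not the paper's implementation: a single linear map plays the role of the shared transformer block, and "LTI-stable state injection" is modeled as the recursion s_{t+1} = W s_t + x with spectral radius of W below 1, which guarantees convergence to a fixed point as the loop count T grows. The names `recurrent_depth`, `W`, and the specific depths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# One shared "block": a linear map as a stand-in for the weight-tied
# transformer layer (hypothetical; the real block is nonlinear).
W = rng.normal(size=(d, d))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 => LTI-stable

def recurrent_depth(x, T):
    """Apply the single shared block T times with state injection:
    s_{t+1} = W @ s_t + x. Stability makes s_t converge as T grows,
    instead of diverging the way an unconstrained deep stack could."""
    s = np.zeros(d)
    for _ in range(T):
        s = W @ s + x
    return s

x = rng.normal(size=d)
s_train = recurrent_depth(x, T=8)    # depth used "at training time"
s_extra = recurrent_depth(x, T=32)   # extrapolated inference depth
fixed_point = np.linalg.solve(np.eye(d) - W, x)  # limit of the recursion

# Depth extrapolation: extra loops move the state closer to the fixed
# point rather than degrading it.
err_train = np.linalg.norm(s_train - fixed_point)
err_extra = np.linalg.norm(s_extra - fixed_point)
assert err_extra < err_train
```

Because every loop reuses the same parameters, depth no longer multiplies parameter count, which is the mechanism behind the reduced-data and parameter-efficiency claims in the bullets above.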