Curia-2: Scaling Self-Supervised Learning for Radiology Foundation Models
arXiv cs.CV / 4/3/2026
Tags: Opinion · Signals & Early Trends · Models & Research
Key Points
- The paper introduces Curia-2, an improved self-supervised pre-training strategy for radiology foundation models that targets better representation quality for complex CT/MRI volumes.
- It enables scaling Vision Transformer architectures to billion-parameter sizes for multi-modal CT and MRI models, a notable step forward for radiology foundation modeling.
- The authors extend CuriaBench into separate 2D and 3D evaluation tracks to better match slice-based versus volumetric model benchmarking.
- Results indicate Curia-2 outperforms prior foundation models on vision-only tasks and is competitive with vision-language models on clinically complex tasks such as detection.
- The authors state that model weights will be released publicly to support further research.
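The article does not describe Curia-2's exact pre-training objective, but masked-patch reconstruction is a common self-supervised strategy for this family of vision foundation models. As a generic illustration only (the function name, patch size, and mask ratio below are illustrative, not taken from the paper), here is a minimal NumPy sketch of splitting an image slice into patches and masking a random subset:

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.75, seed=0):
    """Illustrative masked-image-modeling step (not Curia-2's actual code):
    split a square 2D slice into non-overlapping patches and mask a random
    subset. Visible patches feed the encoder; masked patches are targets."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    # Rearrange to (num_patches, patch*patch) flattened-patch layout.
    patches = (image.reshape(h // patch, patch, w // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_masked = int(round(mask_ratio * n))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=n_masked, replace=False)] = True
    visible = patches[~mask]   # encoder input
    targets = patches[mask]    # reconstruction targets
    return patches, mask, visible, targets

# Toy 8x8 "slice": 4 patches of 16 pixels, 3 masked at a 0.75 ratio.
img = np.arange(64, dtype=np.float32).reshape(8, 8)
patches, mask, visible, targets = mask_patches(img)
print(patches.shape, int(mask.sum()), visible.shape, targets.shape)
```

For volumetric CT/MRI, the same idea extends to 3D by cutting cubic patches from the volume instead of square patches from a slice.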