CLIP Architecture for Abdominal CT Image-Text Alignment and Zero-Shot Learning: Investigating Batch Composition and Data Scaling

arXiv cs.CV / April 16, 2026


Key Points

  • The paper studies vision-language contrastive training for 3D abdominal CT and radiology report alignment, reproducing the Merlin dual-encoder approach with symmetric InfoNCE loss and improved zero-shot performance (macro F1 74.45% vs 73.00%).
  • It finds that explicit normal/abnormal class ratio balancing within training batches (25:75, 50:50, 75:25) generally reduces performance compared with an unbalanced baseline, with 75:25 performing best among balanced settings (72.02%).
  • Data scaling ablations on a 4,362-study subset show performance increases sub-linearly as training data grows (65.26% at 20% data to 71.88% at 100%), with large variability in which findings benefit.
  • Applying enforced 50:50 balanced sampling on the smaller subset further degrades results (68.01%), suggesting class balancing can be harmful even when dataset size is fixed.
  • The authors conclude that the stochastic diversity of random sampling, together with Merlin’s alternating batching across anatomical subsections, provides better regularization than engineered class ratios under the small-batch constraints common in 3D medical imaging.
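The symmetric InfoNCE objective at the core of the reproduced Merlin setup can be sketched as follows. This is a generic CLIP-style formulation, not the authors' exact implementation; the function name and the temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(image_emb: torch.Tensor, text_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric (bidirectional) InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (B, D) tensors where row i of each is a matched
    CT-volume / report pair; all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (B, B) similarity matrix; diagonal entries are the positive pairs
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image->text and text->image), averaged
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

Because every non-matching pair in the batch serves as a negative, batch composition directly determines which contrasts the model learns, which is what motivates the paper's balancing experiments.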

Abstract

Vision-language models trained with contrastive learning on paired medical images and reports show strong zero-shot diagnostic capabilities, yet the effect of training batch composition on learned representations remains unexplored for 3D medical imaging. We reproduce Merlin, a dual-encoder model that aligns 3D abdominal CT volumes with radiology reports using symmetric InfoNCE loss, achieving a zero-shot macro F1 of 74.45% across 30 findings (original: 73.00%). We then investigate two axes of variation. First, we control the normal-to-abnormal ratio within training batches at 25:75, 50:50, and 75:25 using section-level balanced sampling on the full dataset. All three configurations underperform the unbalanced baseline by 2.4 to 2.8 points, with 75:25 achieving the best result (72.02%) among balanced variants. Second, we conduct data scaling ablations on a 4,362-study subset, training with 20%, 40%, and 100% of the data. Performance scales sub-linearly from 65.26% to 71.88%, with individual findings varying dramatically in data sensitivity. Enforcing 50:50 balanced sampling on the same subset further degrades performance to 68.01%, confirming that explicit class balancing hurts regardless of dataset or balancing granularity. Our results indicate that the stochastic diversity of random sampling, combined with Merlin's alternating batching over anatomical subsections, provides more effective regularization than engineered class ratios at the small batch sizes required by 3D medical volumes.
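The balanced-sampling intervention studied above can be illustrated with a minimal batch generator that enforces a fixed normal:abnormal ratio. This is a simplified sketch, not the paper's pipeline: it ignores Merlin's alternating batching over anatomical subsections, and all names are hypothetical.

```python
import random

def balanced_batches(normal_ids, abnormal_ids, batch_size, normal_frac, seed=0):
    """Yield batches with a fixed normal:abnormal ratio.

    normal_frac=0.5 gives 50:50 batches, 0.75 gives 75:25, etc.
    Drops the remainder once the scarcer class (at this ratio) is exhausted.
    """
    rng = random.Random(seed)
    n_normal = round(batch_size * normal_frac)
    n_abnormal = batch_size - n_normal
    normals, abnormals = normal_ids[:], abnormal_ids[:]
    rng.shuffle(normals)
    rng.shuffle(abnormals)

    # Number of full batches is limited by whichever class runs out first
    n_batches = min(len(normals) // n_normal, len(abnormals) // n_abnormal)
    for b in range(n_batches):
        batch = (normals[b * n_normal:(b + 1) * n_normal]
                 + abnormals[b * n_abnormal:(b + 1) * n_abnormal])
        rng.shuffle(batch)  # avoid a fixed within-batch ordering
        yield batch
```

Note the trade-off the results point to: enforcing a ratio like this discards the batch-to-batch variability of plain random sampling, which the paper identifies as a useful regularizer at the small batch sizes 3D volumes impose.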