CT-to-X-ray Distillation Under Tiny Paired Cohorts: An Evidence-Bounded Reproducible Pilot Study
arXiv cs.CV / 4/1/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study investigates whether CT images can serve purely as training-time supervision to distill a binary disease/no-disease chest X-ray classifier, eliminating the need for CT at inference.
- Using patient-level paired data in a teacher–student distillation setup, the authors find that a plain cross-modal logit-KD baseline outperforms the more complex JDCNet variant on a small four-image validation subset.
- Eight Monte Carlo patient-level resampling runs show that results are sensitive to the dataset split: late fusion achieves the best mean accuracy, while different strategies lead on macro-F1 and balanced accuracy.
- Stronger mechanism controls (attention transfer and feature hints) do not reliably restore a robust cross-modality advantage, highlighting likely failure modes in the cross-modality transfer.
- The paper’s main contribution is a reproducible, evidence-bounded pilot protocol, not a new validated architecture: it clarifies the task definition, documents instability in method rankings, and sets minimum requirements for credible future CT-to-X-ray claims.
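The cross-modal logit-KD baseline in the points above follows the standard distillation recipe: a teacher (here trained with CT-derived supervision) produces logits that the X-ray student matches via a temperature-scaled KL divergence. A minimal numpy sketch of that objective, assuming the usual Hinton-style formulation (the function names and temperature value are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logit_kd_loss(teacher_logits, student_logits, T=2.0):
    """Temperature-scaled KL(p_teacher || p_student), scaled by T^2
    so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)          # soft targets from the CT-side teacher
    q = softmax(student_logits, T)          # X-ray student's softened predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Matching logits give ~zero loss; disagreement gives a positive penalty.
t = np.array([[2.0, -1.0]])   # teacher favors class 0
s = np.array([[-1.0, 2.0]])   # student favors class 1
assert logit_kd_loss(t, t) < 1e-9
assert logit_kd_loss(t, s) > 0.0
```

In practice this KD term is combined with a weighted cross-entropy loss on the hard labels; only the student (X-ray) branch is needed at inference, which is what makes the "CT at training time only" claim testable.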