CardioDiT: Latent Diffusion Transformers for 4D Cardiac MRI Synthesis

arXiv cs.CV / 3/27/2026


Key Points

  • CardioDiT is proposed to synthesize cine cardiac MRI as a unified 4D (space+time) latent diffusion problem, addressing limitations of prior approaches that factorize space and time or enforce temporal consistency through auxiliary mechanisms such as anatomical masks.
  • The method uses a spatiotemporal VQ-VAE to encode 2D+t slices into compact latent representations, then a diffusion transformer jointly models the resulting latents as full 3D+t volumes to couple spatial and temporal generation end-to-end.
  • Experiments on public datasets and a larger private cohort show improvements in inter-slice consistency, temporal coherence of cardiac motion, and realistic distributions of cardiac function.
  • The authors compare CardioDiT against baselines with increasing degrees of spatiotemporal coupling to argue that explicit 4D modeling via diffusion transformers yields a more principled foundation for 4D cardiac image synthesis.
  • Code and models trained on public data are released via the provided GitHub repository to support reproducibility and further research.

Abstract

Latent diffusion models (LDMs) have recently achieved strong performance in 3D medical image synthesis. However, modalities like cine cardiac MRI (CMR), representing a temporally synchronized 3D volume across the cardiac cycle, add an additional dimension that most generative approaches do not model directly. Instead, they factorize space and time or enforce temporal consistency through auxiliary mechanisms such as anatomical masks. Such strategies introduce structural biases that may limit global context integration and lead to subtle spatiotemporal discontinuities or physiologically inconsistent cardiac dynamics. We investigate whether a unified 4D generative model can learn continuous cardiac dynamics without architectural factorization. We propose CardioDiT, a fully 4D latent diffusion framework for short-axis cine CMR synthesis based on diffusion transformers. A spatiotemporal VQ-VAE encodes 2D+t slices into compact latents, which a diffusion transformer then models jointly as complete 3D+t volumes, coupling space and time throughout the generative process. We evaluate CardioDiT on public CMR datasets and a larger private cohort, comparing it to baselines with progressively stronger spatiotemporal coupling. Results show improved inter-slice consistency, temporally coherent motion, and realistic cardiac function distributions, suggesting that explicit 4D modeling with a diffusion transformer provides a principled foundation for spatiotemporal cardiac image synthesis. Code and models trained on public data are available at https://github.com/Cardio-AI/cardiodit.
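The two-stage pipeline in the abstract — a spatiotemporal VQ-VAE compressing each 2D+t slice stack, followed by a diffusion transformer operating on the stacked latents as one 3D+t volume — can be sketched at the level of tensor shapes. All dimensions, downsampling factors, and the `vqvae_encode` stub below are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

# Hypothetical dimensions for a short-axis cine CMR volume (assumed):
S, T, H, W = 12, 24, 128, 128    # slices, cardiac phases, in-plane resolution
f_s, f_t = 8, 2                  # assumed spatial / temporal downsampling of the VQ-VAE
C = 4                            # assumed latent channel count

# Stage 1: a spatiotemporal VQ-VAE encodes each 2D+t slice stack independently.
# A shape-only stub stands in for the real encoder here.
def vqvae_encode(slice_stack):   # (T, H, W) -> (T//f_t, C, H//f_s, W//f_s)
    t, h, w = slice_stack.shape
    return np.zeros((t // f_t, C, h // f_s, w // f_s))

volume = np.zeros((S, T, H, W))  # full 3D+t image volume
latents = np.stack([vqvae_encode(volume[s]) for s in range(S)])  # (S, T', C, H', W')

# Stage 2: the diffusion transformer models the stacked latents jointly as a
# single 3D+t volume, i.e. one token sequence that couples space and time,
# rather than factorizing the spatial and temporal axes.
tokens = latents.transpose(0, 1, 3, 4, 2).reshape(-1, C)  # (S*T'*H'*W', C)
print(latents.shape)  # (12, 12, 4, 16, 16)
print(tokens.shape)   # (36864, 4)
```

The point of the joint token sequence is that every attention layer of the transformer can relate any slice and any cardiac phase to any other, which is what the paper contrasts with factorized or mask-guided designs.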