OmniDiT: Extending Diffusion Transformer to Omni-VTON Framework
arXiv cs.CV / 3/23/2026
Key Points
- OmniDiT is a diffusion-transformer based framework that unifies virtual try-on (VTON) and try-off (VTOFF) tasks into a single model.
- The authors introduce the Omni-TryOn dataset with over 380k garment-model-try-on image pairs and detailed text prompts, built through a self-evolving data curation pipeline.
- They propose architectural innovations including token concatenation, adaptive position encoding, and Shifted Window Attention, which reduces the diffusion model's attention cost to linear complexity, along with multiple-timestep prediction and an alignment loss to boost fidelity.
- Experiments show state-of-the-art results on model-free VTON and VTOFF, and performance comparable to current SOTA methods on model-based VTON.
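The summary does not spell out the attention design, but the general idea behind shifted-window attention is well known: restrict self-attention to fixed-size local windows (cost grows linearly with sequence length rather than quadratically), and alternate layers with a cyclic shift so information crosses window boundaries. A minimal 1D NumPy sketch, with window size and shift chosen here purely for illustration, not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shifted_window_attention(x, window=4, shift=0):
    """Single-head self-attention restricted to non-overlapping windows.

    x: (seq_len, dim) token embeddings; seq_len must be divisible by window.
    With shift > 0, tokens are cyclically rolled before windowing (and rolled
    back after), so successive layers can mix information across windows.
    """
    n, d = x.shape
    assert n % window == 0, "seq_len must be divisible by window size"
    if shift:
        x = np.roll(x, -shift, axis=0)          # cyclic shift (Swin-style)
    w = x.reshape(n // window, window, d)        # (num_windows, window, dim)
    # Attention only within each window: O(n * window * d), linear in n.
    scores = w @ w.transpose(0, 2, 1) / np.sqrt(d)
    out = (softmax(scores) @ w).reshape(n, d)
    if shift:
        out = np.roll(out, shift, axis=0)        # undo the shift
    return out
```

With `shift=0`, tokens in one window cannot influence another window's output, which is what keeps the cost linear; the shifted layers restore cross-window communication.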