PROMO: Promptable Outfitting for Efficient High-Fidelity Virtual Try-On

arXiv cs.CV / 3/13/2026

Key Points

  • PROMO is a promptable virtual try-on framework built on a Flow Matching DiT backbone with latent multi-modal conditioning, supporting high-fidelity try-on with subject preservation, faithful texture transfer, and seamless harmonization.
  • It leverages conditioning efficiency and self-reference mechanisms to substantially reduce inference overhead compared with prior VTON methods.
  • On standard benchmarks, PROMO surpasses prior VTON methods and general image-editing models in visual fidelity while maintaining a competitive balance between quality and speed.
  • The training framework is generic and transferable to broader image-editing tasks, with VTON-paired data providing rich supervision for training general-purpose editors.
  • The work highlights that flow-matching transformers with latent conditioning and self-reference acceleration offer an effective, training-efficient solution for high-quality virtual try-on with potential impact on online retail.
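The summary does not spell out the flow-matching objective PROMO trains with, but the standard rectified-flow formulation regresses a velocity field along a linear noise-to-data path. A minimal sketch, assuming that standard objective (the function name and toy oracle are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(x0, x1, t, predict_velocity):
    """Conditional flow-matching objective in rectified-flow form:
    interpolate x_t = (1 - t) * x0 + t * x1 and regress the constant
    target velocity x1 - x0 with a mean-squared error."""
    t = t.reshape(-1, 1)                   # broadcast per-sample timestep
    x_t = (1.0 - t) * x0 + t * x1          # point on the linear path
    target_v = x1 - x0                     # ground-truth velocity
    pred_v = predict_velocity(x_t, t)
    return np.mean((pred_v - target_v) ** 2)

x0 = rng.standard_normal((4, 8))           # noise latents
x1 = rng.standard_normal((4, 8))           # data (try-on result) latents
# Toy "model": an oracle that already knows the true velocity,
# so the loss is exactly zero.
oracle = lambda x_t, t: x1 - x0

loss = flow_matching_loss(x0, x1, rng.uniform(size=4), oracle)
```

Because the target velocity is constant along each path, sampling reduces to a few Euler steps at inference time, which is one reason flow-matching backbones are attractive for the fidelity/efficiency trade-off the paper emphasizes.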

Abstract

Virtual Try-on (VTON) has become a core capability for online retail, where realistic try-on results provide reliable fit guidance, reduce returns, and benefit both consumers and merchants. Diffusion-based VTON methods achieve photorealistic synthesis, yet often rely on intricate architectures such as auxiliary reference networks and suffer from slow sampling, making the trade-off between fidelity and efficiency a persistent challenge. We approach VTON as a structured image editing problem that demands strong conditional generation under three key requirements: subject preservation, faithful texture transfer, and seamless harmonization. Under this perspective, our training framework is generic and transfers to broader image editing tasks. Moreover, the paired data produced by VTON constitutes a rich supervisory resource for training general-purpose editors. We present PROMO, a promptable virtual try-on framework built upon a Flow Matching DiT backbone with latent multi-modal conditional concatenation. By leveraging conditioning efficiency and self-reference mechanisms, our approach substantially reduces inference overhead. On standard benchmarks, PROMO surpasses both prior VTON methods and general image editing models in visual fidelity while delivering a competitive balance between quality and speed. These results demonstrate that flow-matching transformers, coupled with latent multi-modal conditioning and self-reference acceleration, offer an effective and training-efficient solution for high-quality virtual try-on.
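The abstract describes "latent multi-modal conditional concatenation" in place of an auxiliary reference network. The paper's exact layout is not given here, but a common way to realize this is to append the condition latents (person image, garment image) as extra tokens in the DiT sequence so self-attention relates them to the noisy tokens directly. A hypothetical sketch under that assumption (all names and shapes are illustrative):

```python
import numpy as np

def build_dit_input(noisy_tokens, person_tokens, garment_tokens):
    """Hypothetical latent multi-modal conditioning via sequence
    concatenation: reference latents are appended as extra tokens so a
    single DiT attends over all modalities jointly, avoiding a separate
    reference network. Only the noisy-token positions are denoised."""
    seq = np.concatenate([noisy_tokens, person_tokens, garment_tokens], axis=1)
    n = noisy_tokens.shape[1]
    denoise_mask = np.zeros(seq.shape[1], dtype=bool)
    denoise_mask[:n] = True                 # loss / output read only here
    return seq, denoise_mask

noisy   = np.zeros((1, 64, 16))  # B x N x D latent tokens to denoise
person  = np.zeros((1, 64, 16))  # person-image reference latents
garment = np.zeros((1, 64, 16))  # garment-image reference latents
seq, mask = build_dit_input(noisy, person, garment)
```

Concatenating in latent space keeps the conditioning cost to a longer attention sequence rather than a duplicated backbone, which is consistent with the inference-overhead reduction the abstract claims.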