Discrete Meanflow Training Curriculum

arXiv cs.LG / 4/13/2026


Key Points

  • The paper introduces a “Discrete Meanflow” (DMF) training curriculum that exploits a particular discretization of the Meanflow objective, yielding a consistency property that makes training more stable.
  • Starting from a pretrained flow model, DMF reduces the compute and data requirements for Meanflow training while targeting strong one-step image generation performance (a one-step sampler is sketched after this list).
  • The method achieves a one-step FID of 3.36 on CIFAR-10 in just 2000 epochs, in contrast to prior Meanflow approaches that required extremely large training budgets.
  • The authors suggest that faster DMF-like curricula, especially when fine-tuning from existing flow models, could enable efficient training of future one-step Meanflow-based generative models.
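
The practical draw of a Meanflow-style model is that a single network evaluation replaces an entire ODE-solving loop. Below is a minimal sketch of that one-step sampler, assuming a network `u_net(z, r, t)` that predicts the average velocity over the interval [r, t] and the common convention that t = 1 is pure noise and t = 0 is data; the signature and time convention are illustrative assumptions, not details taken from the paper.

```python
import torch

@torch.no_grad()
def one_step_sample(u_net, shape, device="cpu"):
    """One-step sampling with an average-velocity ("Meanflow"-style)
    network. Assumed convention: t = 1 is pure noise, t = 0 is data."""
    z1 = torch.randn(shape, device=device)  # start from pure noise at t = 1
    batch = shape[0]
    r = torch.zeros(batch, device=device)   # integration endpoint (data)
    t = torch.ones(batch, device=device)    # integration start (noise)
    # With an average velocity u over [r, t], the one-step update is
    # x0 = z1 - (t - r) * u(z1, r, t), and here t - r = 1.
    return z1 - u_net(z1, r, t)
```

A multi-step flow sampler would instead call the network once per solver step; the average-velocity parameterization is what lets the whole trajectory collapse into a single call.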

Abstract

Flow-based image generative models exhibit stable training and produce high-quality samples when using multi-step sampling procedures. One-step generative models can also produce high-quality image samples but can be difficult to optimize, as they often exhibit unstable training dynamics. Meanflow models exhibit excellent few-step sampling performance and tantalizing one-step sampling performance; notably, the Meanflow models that achieve this have required extremely large training budgets. We significantly decrease the computation and data budget required to train Meanflow models by noting and exploiting a particular discretization of the Meanflow objective that yields a consistency property, which we formulate into a "Discrete Meanflow" (DMF) training curriculum. Initialized from a pretrained flow model, the DMF curriculum reaches a one-step FID of 3.36 on CIFAR-10 in only 2000 epochs. We anticipate that faster training curricula for Meanflow models, specifically those fine-tuned from existing flow models, will drive efficient training of future one-step generative models.
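
To make the consistency idea concrete, here is a minimal sketch of what a discretized Meanflow consistency loss could look like. It assumes the standard two-segment decomposition used in consistency-style training: the displacement the student predicts over a long interval [r, t] is matched against the composition of two shorter displacements over [s, t] and [r, s] (with r < s < t), where the short-segment targets come from a frozen copy of the network (e.g. an EMA teacher). The function names, the teacher mechanism, and the splitting scheme are assumptions for illustration; the paper's actual curriculum may differ.

```python
import torch

def dmf_consistency_loss(u_net, u_tgt, z_t, r, s, t):
    """Sketch of a two-segment consistency loss for an average-velocity
    model. u_net(z, a, b) / u_tgt(z, a, b) predict the average velocity
    over [a, b]; u_tgt is a frozen (e.g. EMA) teacher (assumed setup)."""
    def bc(x):  # broadcast per-sample times over image dimensions
        return x.view(-1, *([1] * (z_t.dim() - 1)))

    # Student: displacement predicted over the long interval [r, t].
    long_disp = bc(t - r) * u_net(z_t, r, t)

    with torch.no_grad():
        # Teacher: step back from t to s along its own prediction ...
        u_ts = u_tgt(z_t, s, t)
        z_s = z_t - bc(t - s) * u_ts
        # ... then from s to r; the two short displacements compose into
        # the target for the long one (the consistency property).
        u_sr = u_tgt(z_s, r, s)
        target = bc(t - s) * u_ts + bc(s - r) * u_sr

    return torch.mean((long_disp - target) ** 2)
```

One plausible reading of why this pairs well with a pretrained flow model: over short segments the average velocity is close to the instantaneous velocity the flow model already predicts, so fine-tuning mainly has to learn how short displacements compose into long ones.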