MixFlow: Mixed Source Distributions Improve Rectified Flows

arXiv cs.CV / 4/13/2026


Key Points

  • This paper argues that rectified flows (and diffusion-style generative models) suffer from slow iterative sampling due to highly curved learned generative paths, which previous work attributes to misalignment between the source distribution and the data distribution.
  • It proposes breaking the standard Gaussian source assumption by introducing a general conditioned source framework ("κ-FC") that aligns an arbitrary signal κ with the data distribution.
  • It introduces MixFlow, a training strategy that mixes (linearly combines) samples from a fixed unconditional distribution and a κ-FC-based conditioned distribution to reduce path curvature.
  • The authors report improved sampling efficiency and generation quality, including an average 12% improvement in FID over standard rectified flow and 7% over prior baselines under the same sampling budget.
  • The work includes released code via a public GitHub repository to enable replication and further experimentation.
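The mixing step in the key points above can be sketched as follows. This is a hedged illustration of the idea of linearly combining an unconditional Gaussian source with a κ-conditioned source; the function name, the mixing coefficient `alpha`, and the exact mixing rule are assumptions for illustration, not the paper's released formulation.

```python
import numpy as np

def sample_mixed_source(kappa_cond_samples, alpha=0.5, rng=None):
    """Draw source samples as a linear mixture of a standard Gaussian
    and a kappa-conditioned distribution (hypothetical sketch).

    alpha=0 recovers the standard Gaussian source of vanilla rectified
    flow; alpha=1 uses the conditioned source alone.
    """
    rng = np.random.default_rng() if rng is None else rng
    gaussian = rng.standard_normal(kappa_cond_samples.shape)
    return (1.0 - alpha) * gaussian + alpha * kappa_cond_samples
```

For example, if κ is a low-frequency version of the data, `kappa_cond_samples` could be a batch of blurred images, and the mixture pulls the source toward the data manifold while retaining Gaussian diversity.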

Abstract

Diffusion models and their variations, such as rectified flows, generate diverse and high-quality images, but they are still hindered by slow iterative sampling caused by the highly curved generative paths they learn. An important cause of high curvature, as shown by previous work, is the independence between the source distribution (standard Gaussian) and the data distribution. In this work, we tackle this limitation with two complementary contributions. First, we break away from the standard Gaussian assumption by introducing κ-FC, a general formulation that conditions the source distribution on an arbitrary signal κ that aligns it better with the data distribution. Then, we present MixFlow, a simple but effective training strategy that reduces the curvature of the generative paths and considerably improves sampling efficiency. MixFlow trains a flow model on linear mixtures of a fixed unconditional distribution and a κ-FC-based distribution. This simple mixture improves the alignment between the source and the data, yields better generation quality with fewer sampling steps, and considerably accelerates training convergence. On average, our training procedure improves generation quality by 12% in FID compared to standard rectified flow and by 7% compared to previous baselines under a fixed sampling budget. Code available at: https://github.com/NazirNayal8/MixFlow
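The training objective the abstract describes follows the standard rectified-flow (flow-matching) recipe, with the mixed distribution supplying the source samples x0. The sketch below is a generic flow-matching loss under that assumption, not the authors' released implementation; `model` is any callable predicting velocity from the interpolated state and time.

```python
import numpy as np

def rectified_flow_loss(model, x0, x1, rng=None):
    """Rectified-flow training loss on the straight path
    x_t = (1 - t) * x0 + t * x1, where x0 comes from the (mixed)
    source and x1 from the data. The model regresses the constant
    velocity x1 - x0. Generic sketch, not the paper's exact code."""
    rng = np.random.default_rng() if rng is None else rng
    # One random time per batch element, broadcast over feature dims.
    t = rng.uniform(size=(x0.shape[0],) + (1,) * (x0.ndim - 1))
    x_t = (1.0 - t) * x0 + t * x1
    target_velocity = x1 - x0
    pred = model(x_t, t)
    return np.mean((pred - target_velocity) ** 2)
```

A source better aligned with the data makes the target velocity field closer to straight-line transport, which is precisely why fewer sampling steps suffice at inference time.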