LeapAlign: Post-Training Flow Matching Models at Any Generation Step by Building Two-Step Trajectories

arXiv cs.CV / 4/17/2026


Key Points

  • The paper proposes LeapAlign, a fine-tuning approach for flow matching models that aligns them with human preferences via reward-gradient backpropagation through the generation process.
  • Direct backpropagation over long ODE trajectories is shown to be impractical due to high memory usage and gradient explosion, limiting updates to early generation steps.
  • LeapAlign reduces the long trajectory to a two-step “leap” design, where each leap skips multiple ODE sampling steps and predicts future latents in one shot.
  • By randomizing the leap start/end timesteps and reweighting training trajectories based on consistency with the long generation path (while dampening large-magnitude gradient terms), LeapAlign enables stable and efficient updates at any generation step.
  • Experiments fine-tuning the Flux model demonstrate that LeapAlign outperforms existing GRPO-based and direct-gradient methods on image quality and image-text alignment metrics.
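To make the "two-step leap" idea concrete, here is a minimal sketch contrasting standard many-step Euler ODE sampling with a shortened two-leap trajectory using a randomized intermediate timestep. The velocity field `velocity`, the timestep range, and all function names are illustrative stand-ins (a real flow matching model would use a learned neural velocity field), not the paper's implementation.

```python
import numpy as np

def velocity(x, t):
    # Toy stand-in for a learned flow-matching velocity field v_theta(x, t);
    # in practice this is a neural network. (Hypothetical example field.)
    return -x * (1.0 - t)

def euler_sample(x0, n_steps=20):
    # Standard long trajectory: many small Euler ODE steps from t=0 to t=1.
    x, t = x0.copy(), 0.0
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x

def leap_sample(x0, rng):
    # LeapAlign-style shortened trajectory: two consecutive "leaps", each
    # covering many ODE steps in a single prediction. The intermediate
    # timestep t_mid is randomized so updates can target any generation step.
    t_mid = rng.uniform(0.2, 0.8)                           # assumed range
    x_mid = x0 + t_mid * velocity(x0, 0.0)                  # leap 1: 0 -> t_mid
    x_end = x_mid + (1.0 - t_mid) * velocity(x_mid, t_mid)  # leap 2: t_mid -> 1
    return x_end
```

Because the shortened path has only two differentiable steps, reward gradients can reach the earliest step without backpropagating through the full ODE trajectory.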

Abstract

This paper focuses on aligning flow matching models with human preferences. A promising approach is to fine-tune by directly backpropagating reward gradients through the differentiable generation process of flow matching. However, backpropagating through long trajectories incurs prohibitive memory costs and gradient explosion, so direct-gradient methods struggle to update early generation steps, which are crucial for determining the global structure of the final image. To address this issue, we introduce LeapAlign, a fine-tuning method that reduces computational cost and enables direct gradient propagation from the reward to early generation steps. Specifically, we shorten the long trajectory into only two steps by designing two consecutive leaps, each skipping multiple ODE sampling steps and predicting future latents in a single step. By randomizing the start and end timesteps of the leaps, LeapAlign enables efficient and stable model updates at any generation step. To make better use of these shortened trajectories, we assign higher training weights to those that are more consistent with the long generation path. To further enhance gradient stability, we reduce the weights of gradient terms with large magnitude instead of removing them entirely, as done in previous works. When fine-tuning the Flux model, LeapAlign consistently outperforms state-of-the-art GRPO-based and direct-gradient methods across various metrics, achieving superior image quality and image-text alignment.
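The abstract's two stabilization mechanisms can be sketched as follows: a consistency-based weight that favors leap trajectories whose endpoints stay close to the long generation path, and a damping function that down-weights (rather than drops) large-magnitude gradient terms. The exponential weighting form, the `beta` sharpness, the `threshold`, and the damping factor `alpha` are all hypothetical illustrations, not values from the paper.

```python
import numpy as np

def trajectory_weight(x_leap, x_long, beta=5.0):
    # Assign a higher training weight to leap trajectories whose endpoint
    # is closer to the long-trajectory endpoint (consistency reweighting).
    # The exponential form and beta are illustrative assumptions.
    err = np.linalg.norm(x_leap - x_long)
    return np.exp(-beta * err)

def damp_gradient(grad, threshold=1.0, alpha=0.1):
    # Scale down gradient entries whose magnitude exceeds a threshold,
    # instead of zeroing them out as in prior work. threshold and alpha
    # are hypothetical knobs for this sketch.
    mask = np.abs(grad) > threshold
    out = grad.copy()
    out[mask] *= alpha
    return out
```

A perfectly consistent leap (endpoint matching the long path) gets weight 1.0, and weights decay smoothly with the endpoint gap; damping preserves the sign and direction of large gradient terms while shrinking their magnitude.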