AI Navigate

Finite Difference Flow Optimization for RL Post-Training of Text-to-Image Models

arXiv cs.CV / 3/16/2026


Key Points

  • The paper introduces an online reinforcement learning variant for post-training optimization of diffusion-based text-to-image models that reduces update variance by sampling paired trajectories and biasing flow velocity toward more favorable images.
  • Unlike prior methods that treat each sampling step as a separate action, their approach views the entire sampling process as a single action, aiming for more stable training.
  • They experiment with both high-quality vision-language models and off-the-shelf quality metrics as reward signals, evaluate outputs on a broad set of metrics, and report faster convergence along with improved image quality and prompt alignment.
  • Results suggest the method outperforms previous approaches in both convergence speed and output quality, indicating a promising direction for RL-based post-training of diffusion models.

Abstract

Reinforcement learning (RL) has become a standard technique for post-training diffusion-based image synthesis models, as it enables learning from reward signals to explicitly improve desirable aspects such as image quality and prompt alignment. In this paper, we propose an online RL variant that reduces the variance in the model updates by sampling paired trajectories and pulling the flow velocity in the direction of the more favorable image. Unlike existing methods that treat each sampling step as a separate policy action, we consider the entire sampling process as a single action. We experiment with both high-quality vision-language models and off-the-shelf quality metrics for rewards, and evaluate the outputs using a broad set of metrics. Our method converges faster and yields higher output quality and prompt alignment than previous approaches.
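To make the core idea concrete, here is a minimal, hypothetical sketch of a paired-trajectory, finite-difference update in the spirit described above: two perturbed copies of the model generate from the same initial noise, both outputs are scored by a reward, and the parameters move toward the perturbation that produced the more favorable image, treating the whole sampling process as one action. All names, the toy linear flow, and the distance-based reward are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_image(theta, z, steps=8):
    """Toy stand-in for a flow sampler: Euler-integrate dx/dt = theta @ x
    from shared initial noise z over the full trajectory."""
    x = z.copy()
    dt = 1.0 / steps
    for _ in range(steps):
        x = x + dt * (theta @ x)
    return x

def reward(x, target):
    """Stand-in for an off-the-shelf quality metric: negative L2 distance
    to a fixed target (hypothetical, for illustration only)."""
    return -np.linalg.norm(x - target)

def paired_update(theta, target, lr=0.05, sigma=0.1):
    """One update: run an antithetic pair of perturbed samplers from the
    SAME noise, score both final images, and nudge theta toward the
    better-scoring perturbation. Sharing z across the pair is what cuts
    variance; the reward difference acts on the entire sampling process
    as a single action (finite-difference gradient estimate)."""
    z = rng.standard_normal(theta.shape[0])         # shared initial noise
    eps = rng.standard_normal(theta.shape) * sigma  # symmetric perturbation
    r_plus = reward(sample_image(theta + eps, z), target)
    r_minus = reward(sample_image(theta - eps, z), target)
    # Move theta along eps, scaled by how much better (+eps) did than (-eps)
    theta = theta + lr * (r_plus - r_minus) / (2 * sigma**2) * eps
    return theta

# Toy run: repeat paired updates against a fixed hypothetical target
d = 4
theta = np.zeros((d, d))
target = np.ones(d)
for _ in range(200):
    theta = paired_update(theta, target)
```

Because both trajectories in a pair start from identical noise, their reward difference isolates the effect of the parameter perturbation rather than sampling randomness, which is the variance-reduction mechanism the abstract alludes to.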