FlowRL: A Taxonomy and Modular Framework for Reinforcement Learning with Diffusion Policies

arXiv cs.LG / 3/31/2026


Key Points

  • The paper proposes “FlowRL,” a taxonomy that unifies reinforcement learning (RL) methods that use diffusion and flow-based policy representations, addressing the lack of an overarching framework in the field.
  • It introduces a modular, JAX-based open-source codebase designed for reproducibility and rapid prototyping, using JIT compilation to enable high-throughput training.
  • The authors provide standardized, systematic benchmarks across Gym-Locomotion, the DeepMind Control Suite, and IsaacLab to enable rigorous side-by-side comparisons of diffusion-based approaches.
  • The work offers practical guidance for selecting appropriate diffusion/flow RL algorithms based on the target robotics application and establishes a foundation for future algorithm design in generative-model-driven robotics.
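The second bullet's claim about JIT compilation can be illustrated with a minimal sketch. This is not code from the FlowRL repository: the loss, parameter shapes, and plain-SGD update are illustrative stand-ins showing how a JAX codebase compiles an entire policy-update step into one fused XLA computation, which is the mechanism behind the high-throughput training the paper advertises.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, obs, actions):
    # Toy surrogate loss: squared error between a linear policy's
    # output and target actions (stand-in for a real policy loss).
    pred = obs @ params["w"] + params["b"]
    return jnp.mean((pred - actions) ** 2)

@jax.jit  # compile the whole update into one fused XLA computation
def update(params, obs, actions, lr=1e-3):
    grads = jax.grad(loss_fn)(params, obs, actions)
    # Plain SGD step; a real codebase would typically use an
    # optax optimizer here.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((4, 2)), "b": jnp.zeros(2)}
obs = jnp.ones((8, 4))       # batch of 8 observations, dim 4
actions = jnp.zeros((8, 2))  # batch of 8 actions, dim 2
params = update(params, obs, actions)
```

After the first call, `update` is traced and compiled once; subsequent calls with same-shaped arrays reuse the compiled kernel, which is where the throughput gain comes from.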

Abstract

Thanks to their remarkable flexibility, diffusion models and flow models have emerged as promising candidates for policy representation. However, efficient reinforcement learning (RL) with these policies remains a challenge due to the lack of explicit log-probabilities for vanilla policy gradient estimators. While numerous methods have been proposed to address this, the field lacks a unified perspective to reconcile these seemingly disparate approaches, thus hampering ongoing development. In this paper, we bridge this gap by introducing a comprehensive taxonomy for RL algorithms with diffusion/flow policies. To support reproducibility and agile prototyping, we introduce a modular, JAX-based open-source codebase that leverages JIT compilation for high-throughput training. Finally, we provide systematic and standardized benchmarks across Gym-Locomotion, the DeepMind Control Suite, and IsaacLab, offering a rigorous side-by-side comparison of diffusion-based methods and guidance for practitioners to choose proper algorithms based on the application. Our work establishes a clear foundation for understanding and algorithm design, a high-efficiency toolkit for future research in the field, and an algorithmic guideline for practitioners in generative models and robotics. Our code is available at https://github.com/typoverflow/flow-rl.
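The abstract's central technical point — that vanilla policy gradients break because diffusion/flow policies lack explicit log-probabilities — can be made concrete with a hedged sketch. This is not the paper's algorithm: the velocity field below is a toy stand-in for a learned network, but it shows why the action density is only implicit when an action is the endpoint of an iterative ODE solve.

```python
import numpy as np

def velocity(x, t, obs):
    # Stand-in for a learned velocity network v_theta(x, t, s);
    # here, a toy field pulling samples toward a state-dependent target.
    target = np.tanh(obs)
    return target - x

def sample_action(obs, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(obs.shape)  # start from Gaussian noise
    dt = 1.0 / steps
    for k in range(steps):  # Euler integration of dx/dt = v(x, t, s)
        x = x + dt * velocity(x, k * dt, obs)
    # The action is the result of this whole integration, so
    # log pi(a|s) has no closed form -- which is what defeats the
    # vanilla policy-gradient estimator the abstract refers to.
    return x

a = sample_action(np.array([0.5, -1.0]))
```

Each method in the taxonomy can be read as a different answer to this missing-log-probability problem, which is why a unifying framework is useful.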