Truncated Rectified Flow Policy for Reinforcement Learning with One-Step Sampling

arXiv cs.LG · April 13, 2026


Key Points

  • The paper addresses limitations of standard maximum-entropy RL with unimodal Gaussian policies by introducing a more expressive generative policy family for modeling multimodal action distributions.
  • It proposes Truncated Rectified Flow Policy (TRFP), a hybrid deterministic-stochastic design intended to make entropy-regularized optimization tractable for continuous-time flow-based policies.
  • TRFP improves training stability and reduces inference cost through gradient truncation and "flow straightening," which together enable effective one-step sampling.
  • Experiments on a toy multigoal environment and 10 MuJoCo benchmarks show that TRFP captures multimodal behavior and outperforms strong baselines under standard sampling.
  • The method also remains highly competitive when restricted to one-step sampling, addressing a key practical drawback of multi-step generative sampling in RL.
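
The one-step sampling claim rests on flow straightening: once a rectified flow's trajectories are (near-)straight, a single Euler step of the flow ODE lands where many steps would. Below is a minimal NumPy sketch of this idea with a hand-coded straight, bimodal velocity field standing in for a learned one; the functions `velocity` and `euler_sample` and the mode locations ±2 are illustrative assumptions, not details from the paper.

```python
import numpy as np

def velocity(x, t):
    # Hand-coded "straight" velocity field standing in for a learned one:
    # each noise sample flows in a straight line toward the mode on its side.
    target = np.where(x >= 0, 2.0, -2.0)
    return (target - x) / (1.0 - t)

def euler_sample(x0, n_steps):
    # Integrate dx/dt = velocity(x, t) from t=0 to t=1 with Euler steps.
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(1000)      # Gaussian noise input
multi = euler_sample(x0, 20)        # standard multi-step sampling
single = euler_sample(x0, 1)        # one-step sampling
# Because the trajectories are straight, one Euler step reproduces the
# multi-step result, and the output distribution is bimodal (-2 and +2).
print(np.allclose(multi, single))   # True for a perfectly straight flow
```

In practice the learned flow is only approximately straight, which is why straightening is needed before one-step sampling becomes competitive; this toy field is exactly straight by construction.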

Abstract

Maximum entropy reinforcement learning (MaxEnt RL) has become a standard framework for sequential decision making, yet its conventional Gaussian policy parameterization is inherently unimodal, limiting its ability to model complex multimodal action distributions. This limitation has motivated increasing interest in generative policies based on diffusion and flow matching as more expressive alternatives. However, incorporating such policies into MaxEnt RL is challenging for two main reasons: the likelihood and entropy of continuous-time generative policies are generally intractable, and multi-step sampling introduces both long-horizon backpropagation instability and substantial inference latency. To address these challenges, we propose Truncated Rectified Flow Policy (TRFP), a framework built on a hybrid deterministic-stochastic architecture. This design makes entropy-regularized optimization tractable while supporting stable training and effective one-step sampling through gradient truncation and flow straightening. Empirical results on a toy multigoal environment and 10 MuJoCo benchmarks show that TRFP captures multimodal behavior effectively, outperforms strong baselines on most benchmarks under standard sampling, and remains highly competitive under one-step sampling.
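
The "long-horizon backpropagation instability" the abstract mentions is what gradient truncation targets: backpropagation through earlier integration steps is stopped, so the policy update differentiates only the final step. Here is a hypothetical one-dimensional PyTorch sketch of that mechanism; the two-step flow, the velocity function `v`, and the parameter `theta` are invented for illustration and are not the paper's architecture.

```python
import torch

# Toy 2-step flow: x1 = x0 + dt*v(x0), a = x1 + dt*v(x1), with one
# learnable scalar theta inside v. Gradient truncation detaches x1 so the
# gradient of the action flows only through the final integration step.
theta = torch.tensor(1.0, requires_grad=True)

def v(x):
    # Hypothetical learned velocity pushing x toward 2.0.
    return theta * (2.0 - x)

x0 = torch.tensor(0.5)
dt = 0.5
x1 = x0 + dt * v(x0)

a_full = x1 + dt * v(x1)                        # backprop through both steps
a_trunc = x1.detach() + dt * v(x1.detach())     # truncated: last step only

g_full = torch.autograd.grad(a_full, theta, retain_graph=True)[0]
g_trunc = torch.autograd.grad(a_trunc, theta)[0]
print(g_full.item(), g_trunc.item())            # truncated gradient is shorter-horizon
```

Truncation trades gradient fidelity for stability: the truncated gradient ignores how `theta` shaped the earlier states, which avoids the compounding terms that make deep unrolled chains hard to train.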