Reinforcement Learning via Value Gradient Flow

arXiv cs.LG / 4/17/2026


Key Points

  • The paper studies behavior-regularized reinforcement learning (RL), emphasizing how regularizing toward a reference distribution helps avoid value over-optimization from out-of-distribution extrapolation.
  • It proposes Value Gradient Flow (VGF), a scalable framework that formulates behavior-regularized RL as an optimal transport problem from the reference distribution to a value-induced optimal policy distribution.
  • VGF solves the transport using a discrete gradient flow approach, where value gradients steer particles initialized from the reference distribution.
  • The authors argue that VGF provides regularization implicitly by limiting the “transport budget,” and it avoids explicit policy parameterization while staying expressive and adaptable.
  • Experiments show VGF achieves state-of-the-art performance on offline RL benchmarks (D4RL, OGBench) and on RL tasks involving LLMs, with code available online.
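The core mechanism described above can be illustrated with a toy sketch. This is not the authors' implementation: the quadratic value function, Gaussian reference distribution, and fixed step size are all illustrative assumptions, standing in for a learned critic and a real behavior distribution. The point is the shape of the procedure: particles drawn from the reference are pushed along value gradients, and regularization arises implicitly from capping the number of update steps (the "transport budget").

```python
import numpy as np

def value(a):
    # Toy value function peaked at a = (2, 2); stands in for a learned Q.
    return -np.sum((a - 2.0) ** 2, axis=-1)

def value_grad(a):
    # Analytic gradient of the toy value; a learned critic would supply this.
    return -2.0 * (a - 2.0)

def vgf_sample(n_particles=256, budget=10, step_size=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Particles initialized from the reference distribution (here N(0, I)).
    particles = rng.standard_normal((n_particles, 2))
    # The transport budget is the number of gradient steps: a small budget
    # keeps particles near the reference (implicit regularization), a large
    # one lets them drift toward the value optimum.
    for _ in range(budget):
        particles = particles + step_size * value_grad(particles)
    return particles
```

Note that no policy network is ever fit: the "policy" is just the pushforward of the reference samples under the gradient flow, which is what lets the budget be adjusted at test time without retraining.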

Abstract

We study behavior-regularized reinforcement learning (RL), where regularization toward a reference distribution (the dataset in offline RL or the base model in LLM RL finetuning) is essential to prevent value over-optimization caused by erroneous out-of-distribution extrapolation. Existing methods rely either on reparameterized policy gradients, which are difficult to scale to large generative models, or on rejection sampling, which can be overly conservative when attempting to move beyond the behavior support. In this paper, we propose Value Gradient Flow (VGF), a scalable new paradigm for behavior-regularized RL. VGF casts behavior-regularized RL as an optimal transport problem that maps the reference distribution to the value-induced optimal policy distribution. We solve this transport problem via discrete gradient flow, where value gradients guide particles initialized from the reference distribution. Our analysis shows that VGF imposes regularization implicitly by controlling the transport budget. VGF eliminates explicit policy parameterization while remaining expressive and flexible, which enables adaptive test-time scaling by adjusting the transport budget. Extensive experiments demonstrate that VGF significantly outperforms prior methods, achieving state-of-the-art results on offline RL benchmarks (D4RL, OGBench) and LLM RL tasks. Code and runs can be found at https://ryanxhr.github.io/vgf.