Bayesian policy gradient and actor-critic algorithms

arXiv cs.LG / 5/1/2026


Key Points

  • The paper proposes a Bayesian framework for policy gradient reinforcement learning that models the policy gradient as a Gaussian process, reducing estimator variance and producing accurate gradient estimates from fewer samples.
  • The framework also yields natural-gradient estimates and an uncertainty measure, the gradient covariance, at little extra computational cost (a minimal sketch of this estimator follows the list).
  • Because the approach treats complete system trajectories as its basic observable unit, it places no restrictions on the dynamics within a trajectory and extends to partially observable problems; the trade-off is that it cannot exploit the Markov property even when the environment is Markovian.
  • To address this limitation, the authors introduce a Bayesian actor-critic method in which Gaussian-process temporal-difference (GPTD) critics model the action-value function, giving a posterior distribution over action-value functions conditioned on the observed data.
  • Experiments compare the proposed Bayesian policy gradient and Bayesian actor-critic algorithms against conventional Monte-Carlo policy-gradient baselines across multiple reinforcement learning tasks.
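
The sketch below (plain NumPy) illustrates the kind of Bayesian-quadrature gradient estimate such a framework is built on; it is an illustrative simplification, not the paper's exact algorithm. It assumes the caller has sampled M trajectories and precomputed each trajectory's score vector u_i = grad_theta log P(xi_i; theta) and return R(xi_i), it uses a linear Fisher kernel with the Fisher information matrix replaced by an empirical estimate, and it treats the observation-noise level sigma2 as a free parameter; the function name and arguments are likewise hypothetical.

```python
import numpy as np

def bayesian_pg_estimate(scores, returns, sigma2=1e-2, jitter=1e-8):
    """Posterior over the policy gradient via Bayesian quadrature (sketch).

    scores  -- (M, d) array; row i is u_i = grad_theta log P(xi_i; theta)
               for sampled trajectory xi_i (assumed precomputed).
    returns -- (M,) array of trajectory returns R(xi_i).

    A GP prior is placed on the return as a function of the trajectory,
    with linear Fisher kernel k(xi_i, xi_j) = u_i^T G^{-1} u_j.  With this
    kernel the quadrature integrals have closed forms, so the posterior
    over the gradient of the expected return is Gaussian with the mean
    and covariance computed below.
    """
    M, d = scores.shape

    # Empirical Fisher information matrix (assumption: the true G is
    # replaced by its Monte-Carlo estimate from the same trajectories).
    G = scores.T @ scores / M + jitter * np.eye(d)
    G_inv = np.linalg.inv(G)

    # Gram matrix of the Fisher kernel at the sampled trajectories.
    K = scores @ G_inv @ scores.T                          # (M, M)
    Ky = K + sigma2 * np.eye(M)

    grad_mean = scores.T @ np.linalg.solve(Ky, returns)    # posterior mean of the gradient
    grad_cov = G - scores.T @ np.linalg.solve(Ky, scores)  # posterior covariance
    natural_grad = G_inv @ grad_mean                       # natural-gradient direction
    return grad_mean, grad_cov, natural_grad
```

In this simplified setting the posterior mean is the gradient estimate, the posterior covariance is the uncertainty measure mentioned above, and multiplying by G^{-1} gives a natural-gradient direction at essentially no extra cost, since G is already formed for the kernel.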

Abstract

Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Conventional policy gradient methods use Monte-Carlo techniques to estimate the gradient, which tend to have high variance, requiring many samples and resulting in slow convergence. We first propose a Bayesian framework for policy gradient, based on modeling the policy gradient as a Gaussian process. This reduces the number of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient and a measure of the uncertainty in the gradient estimates, namely, the gradient covariance, are provided at little extra cost. Since the proposed framework considers system trajectories as its basic observable unit, it does not require the dynamics within trajectories to be of any particular form, and can be extended to partially observable problems. On the downside, it cannot exploit the Markov property when the system is Markovian. To address this, we supplement our Bayesian policy gradient framework with a new actor-critic learning model in which a Bayesian class of non-parametric critics, based on Gaussian process temporal difference learning, is used. Such critics model the action-value function as a Gaussian process, allowing Bayes' rule to be used to compute the posterior distribution over action-value functions, conditioned on the observed data. Appropriate choices of the policy parameterization and of the prior covariance (kernel) between action-values yield closed-form expressions for the posterior of the gradient of the expected return with respect to the policy parameters. We perform detailed experimental comparisons of the proposed Bayesian policy gradient and actor-critic algorithms with classic Monte-Carlo-based policy gradient methods, on a number of reinforcement learning problems.
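
For the actor-critic half of the paper, the following is a similarly hedged sketch of a Gaussian-process temporal-difference (GPTD) style critic. It uses the simplest white-noise observation model, r_t = Q(z_t) - gamma * Q(z_{t+1}) + noise over state-action pairs z_t, and assumes the caller supplies the prior kernel matrices; the paper's critics rest on the full GPTD machinery and on kernel and policy-parameterization choices that make the gradient posterior itself closed-form, which this sketch does not reproduce. Function and argument names are hypothetical.

```python
import numpy as np

def gptd_posterior(K, K_star, K_query, rewards, gamma=0.95, sigma2=1e-2):
    """Gaussian posterior over action values from a GPTD-style model (sketch).

    K       -- (T+1, T+1) prior covariance between the visited state-action pairs.
    K_star  -- (n, T+1) cross-covariance between n query points and the visited ones.
    K_query -- (n, n) prior covariance between the query points.
    rewards -- (T,) rewards observed along the trajectory.

    Observation model: r_t = Q(z_t) - gamma * Q(z_{t+1}) + white noise,
    i.e. the rewards are linear, Gaussian observations of Q, so Bayes'
    rule gives the posterior over Q in closed form.
    """
    T = len(rewards)

    # H maps the T+1 action values at the visited points to the T
    # temporal differences Q(z_t) - gamma * Q(z_{t+1}).
    H = np.zeros((T, T + 1))
    H[np.arange(T), np.arange(T)] = 1.0
    H[np.arange(T), np.arange(T) + 1] = -gamma

    S = H @ K @ H.T + sigma2 * np.eye(T)   # covariance of the observed rewards
    q_mean = K_star @ H.T @ np.linalg.solve(S, rewards)
    q_cov = K_query - K_star @ H.T @ np.linalg.solve(S, H @ K_star.T)
    return q_mean, q_cov
```

Here the posterior mean plays the role of the critic's action-value estimate and the posterior covariance quantifies how much that estimate can be trusted at each query point; per the abstract, the Bayesian actor-critic combines such a posterior with suitable policy-parameterization and kernel choices to obtain a closed-form posterior over the policy gradient.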