Policy Gradient Primal-Dual Method for Safe Reinforcement Learning from Human Feedback

arXiv cs.LG / 4/22/2026


Key Points

  • The paper frames Safe RLHF as an infinite-horizon discounted constrained Markov decision process (CMDP), reflecting that humans may provide feedback over ongoing interactions rather than within a single finite episode.
  • It proposes two new Safe RLHF algorithms that avoid reward-model fitting, working directly under the CMDP formulation, and that allow variable trajectory lengths during training.
  • The methods use a primal-dual optimization approach and come with global convergence guarantees, rather than relying solely on empirical validation (see the illustrative formulation after this list).
  • The convergence results are characterized by polynomial rates in the number of policy-gradient iterations, trajectory sample lengths, and human preference queries.
  • The authors claim this is the first study of infinite-horizon discounted CMDP settings under human feedback with global, non-asymptotic convergence guarantees.
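To make the primal-dual structure concrete, the display below gives the standard Lagrangian relaxation of a discounted CMDP. This is textbook notation, not necessarily the paper's exact objective: $V_r$ and $V_c$ denote the discounted reward and cost values of a policy $\pi$, $b$ is an assumed safety threshold, and $\lambda \ge 0$ is the dual variable.

```latex
% Standard discounted-CMDP Lagrangian (illustrative; the paper's exact
% objective and constraint direction may differ).
\max_{\pi} \; V_r(\pi)
\quad \text{s.t.} \quad V_c(\pi) \le b
\qquad \Longrightarrow \qquad
\max_{\pi} \, \min_{\lambda \ge 0} \;
\mathcal{L}(\pi, \lambda) = V_r(\pi) - \lambda \bigl( V_c(\pi) - b \bigr),

% with the discounted reward and cost values defined as
V_r(\pi) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t)\right],
\qquad
V_c(\pi) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t\, c(s_t, a_t)\right].
```

A primal-dual method alternates gradient ascent on $\pi$ with projected gradient ascent on $\lambda$; the paper's claimed contribution is that such updates converge globally at polynomial rates even when the values must be estimated from human preference feedback over variable-length trajectories.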

Abstract

Safe Reinforcement Learning from Human Feedback (Safe RLHF) has recently achieved empirical success in developing helpful and harmless large language models by decoupling human preferences regarding helpfulness and harmlessness. Existing approaches typically rely on fitting fixed-horizon reward models from human feedback and have only been validated empirically. In this paper, we formulate Safe RLHF as an infinite-horizon discounted Constrained Markov Decision Process (CMDP), since humans may interact with the model over a continuing sequence of interactions rather than within a single finite episode. We propose two Safe RLHF algorithms that do not require reward model fitting and, in contrast to prior work assuming fixed-length trajectories, support flexible trajectory lengths for training. Both algorithms are based on the primal-dual method and achieve global convergence guarantees with polynomial rates in terms of policy-gradient iterations, trajectory sample lengths, and human preference queries. To the best of our knowledge, this is the first work to study the infinite-horizon discounted CMDP under human feedback and establish global, non-asymptotic convergence.
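As a rough illustration of how a primal-dual policy-gradient update might look in code, the sketch below alternates a REINFORCE-style primal step on a tabular softmax policy with a projected dual-ascent step on the Lagrange multiplier. Every name here (`env`, `sample_trajectory`, the step sizes, the threshold `B`) is hypothetical, and this is not the paper's algorithm: in the Safe RLHF setting, rewards and costs would presumably be estimated from human preference comparisons rather than observed directly, which is what the bounds on preference queries account for.

```python
import numpy as np

# Hypothetical tabular CMDP primal-dual sketch (not the paper's algorithm).
# Assumes an environment exposing reset() -> state and
# step(action) -> (state, reward, cost, done); trajectory lengths may vary.

GAMMA, B = 0.99, 10.0                # discount factor and safety threshold
ETA_THETA, ETA_LAMBDA = 0.05, 0.01   # primal / dual step sizes

def softmax_policy(theta, s):
    """Action distribution pi(.|s) for logits theta[s]."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def sample_trajectory(env, theta, max_len=200):
    """Roll out one variable-length trajectory, recording rewards and costs."""
    s, traj = env.reset(), []
    for _ in range(max_len):
        p = softmax_policy(theta, s)
        a = np.random.choice(len(p), p=p)
        s_next, r, c, done = env.step(a)
        traj.append((s, a, r, c))
        s = s_next
        if done:
            break
    return traj

def primal_dual_step(env, theta, lam):
    """One alternating update: policy ascent on the Lagrangian, dual ascent on lam."""
    traj = sample_trajectory(env, theta)
    G_r = G_c = 0.0
    grads = np.zeros_like(theta)
    # Iterate backwards so G_r, G_c hold discounted reward/cost-to-go at step t.
    for t, (s, a, r, c) in reversed(list(enumerate(traj))):
        G_r = r + GAMMA * G_r
        G_c = c + GAMMA * G_c
        # Score-function gradient of the Lagrangian V_r - lam * V_c:
        # d log pi(a|s) / d theta[s] = one_hot(a) - pi(.|s).
        p = softmax_policy(theta, s)
        grad_logp = -p
        grad_logp[a] += 1.0
        grads[s] += (GAMMA ** t) * (G_r - lam * G_c) * grad_logp
    theta = theta + ETA_THETA * grads                 # primal ascent on the policy
    lam = max(0.0, lam + ETA_LAMBDA * (G_c - B))      # projected dual ascent
    return theta, lam
```

The dual variable rises while the sampled discounted cost exceeds the threshold, making the policy update increasingly cost-averse, and decays toward zero once the constraint is satisfied; the paper's analysis concerns how fast such coupled updates reach a globally optimal safe policy.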