Offline Constrained RLHF with Multiple Preference Oracles

arXiv cs.LG / 4/2/2026


Key Points

  • The paper studies offline constrained reinforcement learning from human feedback (RLHF) using multiple preference oracles to balance overall utility with safety/fairness constraints for a protected group.
  • It estimates oracle-specific rewards from pairwise comparisons collected under a reference policy using maximum likelihood, and analyzes how statistical uncertainty affects the resulting dual optimization.
  • The constrained problem is reformulated as a KL-regularized Lagrangian whose primal optimizer is a Gibbs policy, turning the learning task into a convex dual problem (a worked form of this step is sketched after this list).
  • The authors introduce a dual-only algorithm that satisfies the constraint with high probability and establish the first finite-sample performance guarantees for offline constrained preference learning.
  • The theoretical framework is extended to handle multiple constraints and more general f-divergence regularization beyond KL.
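To make the Lagrangian step above concrete, here is a hedged sketch of the standard single-constraint formulation; the notation ($\hat r_0$ for the estimated target-utility reward, $\hat r_1$ for the estimated protected-group reward, $b$ for the welfare threshold, $\beta$ for the KL coefficient, $\pi_{\mathrm{ref}}$ for the reference policy) is assumed for illustration rather than taken from the paper. For a fixed dual variable $\lambda \ge 0$, the KL-regularized Lagrangian is

\[
\mathcal{L}(\pi,\lambda) \;=\; \mathbb{E}_{x,\,a\sim\pi(\cdot\mid x)}\!\big[\hat r_0(x,a) + \lambda\big(\hat r_1(x,a) - b\big)\big] \;-\; \beta\,\mathrm{KL}\!\big(\pi \,\|\, \pi_{\mathrm{ref}}\big),
\]

whose maximizer over $\pi$ has the closed Gibbs form

\[
\pi_\lambda(a\mid x) \;\propto\; \pi_{\mathrm{ref}}(a\mid x)\,\exp\!\Big(\tfrac{1}{\beta}\big(\hat r_0(x,a) + \lambda\,\hat r_1(x,a)\big)\Big),
\]

so what remains is the one-dimensional convex dual problem

\[
\min_{\lambda\ge 0}\; g(\lambda), \qquad
g(\lambda) \;=\; \beta\,\mathbb{E}_x \log \mathbb{E}_{a\sim\pi_{\mathrm{ref}}(\cdot\mid x)} \exp\!\Big(\tfrac{1}{\beta}\big(\hat r_0(x,a) + \lambda\,\hat r_1(x,a)\big)\Big) \;-\; \lambda\, b .
\]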

Abstract

We study offline constrained reinforcement learning from human feedback with multiple preference oracles. Motivated by applications that trade off performance with safety or fairness, we aim to maximize target population utility subject to a minimum protected group welfare constraint. From pairwise comparisons collected under a reference policy, we estimate oracle-specific rewards via maximum likelihood and analyze how statistical uncertainty propagates through the dual program. We cast the constrained objective as a KL-regularized Lagrangian whose primal optimizer is a Gibbs policy, reducing learning to a convex dual problem. We propose a dual-only algorithm that ensures high-probability constraint satisfaction and provide the first finite-sample performance guarantees for offline constrained preference learning. Finally, we extend our theoretical analysis to accommodate multiple constraints and general f-divergence regularization.
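As a concrete illustration of the dual-only recipe, below is a minimal numpy sketch on a toy discrete problem: Bradley-Terry rewards are fit by maximum likelihood from simulated pairwise comparisons, the primal solution is the closed-form Gibbs policy, and the dual variable is updated by projected gradient descent. All names (`fit_bradley_terry`, `gibbs_policy`, the threshold `b`, the KL coefficient `beta`) are illustrative assumptions, and the Bradley-Terry model and projected-gradient dual update are standard choices rather than details confirmed by the abstract.

```python
"""Toy sketch of the dual-only recipe on a discrete contextual-bandit problem.
Assumed notation: r0 = target-utility reward, r1 = protected-group reward,
b = welfare threshold, beta = KL coefficient. Not the paper's code."""
import numpy as np

rng = np.random.default_rng(0)
n_contexts, n_actions = 5, 4
beta, b = 0.5, 0.3                                           # KL coefficient, welfare threshold
pi_ref = np.full((n_contexts, n_actions), 1.0 / n_actions)   # uniform reference policy

# --- Step 1: Bradley-Terry MLE for one oracle from pairwise comparisons ----
def fit_bradley_terry(comparisons, lr=1.0, n_iters=300):
    """comparisons: list of (context, winning_action, losing_action) triples.
    Returns an (n_contexts, n_actions) table of estimated rewards."""
    xs, wins, loses = (np.array(v) for v in zip(*comparisons))
    r = np.zeros((n_contexts, n_actions))
    for _ in range(n_iters):
        # P(win preferred over lose) = sigmoid(r[x, win] - r[x, lose])
        p = 1.0 / (1.0 + np.exp(-(r[xs, wins] - r[xs, loses])))
        grad = np.zeros_like(r)
        np.add.at(grad, (xs, wins), 1.0 - p)      # gradient of the log-likelihood
        np.add.at(grad, (xs, loses), -(1.0 - p))
        r += lr * grad / len(comparisons)
    return r - r.mean(axis=1, keepdims=True)      # rewards are shift-invariant

# Simulated preference data from two oracles (target utility, protected group).
true_r0 = rng.normal(size=(n_contexts, n_actions))
true_r1 = rng.normal(size=(n_contexts, n_actions))

def simulate(true_r, n=2000):
    data = []
    for _ in range(n):
        x = rng.integers(n_contexts)
        a, a2 = rng.choice(n_actions, size=2, replace=False)
        p = 1.0 / (1.0 + np.exp(-(true_r[x, a] - true_r[x, a2])))
        data.append((x, a, a2) if rng.random() < p else (x, a2, a))
    return data

r0_hat = fit_bradley_terry(simulate(true_r0))
r1_hat = fit_bradley_terry(simulate(true_r1))

# --- Step 2: Gibbs-policy primal solution for a fixed dual variable --------
def gibbs_policy(lam):
    logits = (r0_hat + lam * r1_hat) / beta + np.log(pi_ref)
    logits -= logits.max(axis=1, keepdims=True)
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# --- Step 3: dual-only updates; the dual gradient is E_pi[r1_hat] - b ------
lam, step = 0.0, 0.5
for _ in range(500):
    pi = gibbs_policy(lam)
    welfare = (pi * r1_hat).sum(axis=1).mean()    # protected-group welfare under pi_lambda
    lam = max(0.0, lam - step * (welfare - b))    # projected gradient step on the dual

pi = gibbs_policy(lam)
print(f"lambda = {lam:.3f}, "
      f"utility = {(pi * r0_hat).sum(axis=1).mean():.3f}, "
      f"welfare = {(pi * r1_hat).sum(axis=1).mean():.3f} (threshold {b})")
```

The design point this sketch illustrates is that only the scalar dual variable is learned: for each candidate value of the multiplier the optimal policy is available in closed form from the estimated rewards and the reference policy, so no separate policy-optimization loop is needed.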