Safe Reinforcement Learning with Preference-based Constraint Inference

arXiv cs.LG / 2026-03-26


Key Points

  • The paper studies safe reinforcement learning where real-world safety constraints are complex and hard to explicitly specify, arguing that prior constraint-inference methods rely on unrealistic assumptions or heavy expert demonstrations.
  • It shows that preference-based constraint inference using popular Bradley-Terry (BT) models can misrepresent safety costs by failing to capture asymmetric, heavy-tailed cost behavior, which may lead to risk underestimation and weaker downstream policy learning.
  • The authors propose Preference-based Constrained Reinforcement Learning (PbCRL), adding a “dead zone” mechanism to preference modeling (with theoretical motivation) to promote heavy-tailed cost distributions and improve constraint alignment.
  • PbCRL also introduces a Signal-to-Noise Ratio (SNR) loss that drives exploration based on cost variance, and uses a two-stage training strategy to reduce the online labeling burden while adaptively improving constraint satisfaction.
  • Experiments indicate PbCRL outperforms state-of-the-art baselines on both safety (constraint alignment) and reward, positioning the approach as a promising route for constraint inference in safety-critical applications.
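To make the dead-zone idea above concrete, here is a minimal sketch of a Bradley-Terry preference probability over trajectory costs, alongside a hypothetical dead-zone variant that treats small cost differences as ties. The threshold `eps` and the exact shrinkage form are illustrative assumptions; the paper's actual formulation is not given in this summary.

```python
import math

def bt_preference_prob(c1: float, c2: float) -> float:
    """Standard Bradley-Terry model: probability that trajectory 1 is
    preferred (judged safer), assuming lower cost means safer."""
    return 1.0 / (1.0 + math.exp(-(c2 - c1)))

def dead_zone_preference_prob(c1: float, c2: float, eps: float = 0.5) -> float:
    """Hypothetical dead-zone variant: cost differences with magnitude
    below eps are clipped to zero, so only clearly separated pairs
    carry learning signal. (Illustrative only; not the paper's exact
    mechanism.)"""
    diff = c2 - c1
    if abs(diff) < eps:
        diff = 0.0  # inside the dead zone: the pair is treated as a tie
    else:
        diff = diff - math.copysign(eps, diff)  # shrink the gap toward zero
    return 1.0 / (1.0 + math.exp(-diff))
```

Under this sketch, near-tied trajectory pairs yield a 0.5 preference probability and contribute no gradient, which is one plausible way a dead zone could push the learned cost model toward emphasizing large, tail-like cost gaps.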

Abstract

Safe reinforcement learning (RL) is a standard paradigm for safety-critical decision making. However, real-world safety constraints can be complex, subjective, and even hard to specify explicitly. Existing work on constraint inference relies on restrictive assumptions or extensive expert demonstrations, which are unrealistic in many real-world applications. How to cheaply and reliably learn these constraints is the central challenge of this study. While inferring constraints from human preferences offers a data-efficient alternative, we find that the popular Bradley-Terry (BT) models fail to capture the asymmetric, heavy-tailed nature of safety costs, resulting in risk underestimation. Moreover, the impact of BT models on downstream policy learning remains poorly understood in the literature. To address these gaps, we propose a novel approach, Preference-based Constrained Reinforcement Learning (PbCRL). We introduce a novel dead-zone mechanism into preference modeling and theoretically prove that it encourages heavy-tailed cost distributions, thereby achieving better constraint alignment. Additionally, we incorporate a Signal-to-Noise Ratio (SNR) loss that encourages exploration guided by cost variance, which we find benefits policy learning. Further, a two-stage training strategy is deployed to lower the online labeling burden while adaptively enhancing constraint satisfaction. Empirical results demonstrate that PbCRL achieves superior alignment with true safety requirements and outperforms state-of-the-art baselines in terms of both safety and reward. Our work explores a promising and effective route to constraint inference in safe RL, with great potential in a range of safety-critical applications.
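The SNR loss described above ties exploration to cost variance. One plausible reading, sketched below under stated assumptions, is an ensemble-disagreement bonus: where cost predictors disagree (high variance, low signal-to-noise ratio), the agent is rewarded for exploring. The function name and the exact bonus shape are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def snr_exploration_bonus(cost_samples: np.ndarray, eps: float = 1e-8) -> float:
    """Hypothetical SNR-style bonus: given several cost estimates for the
    same state (e.g., from an ensemble), compute signal-to-noise ratio
    |mean| / std and return a bonus that is large when SNR is low, i.e.,
    when the cost is uncertain and worth exploring."""
    mean = abs(float(cost_samples.mean()))
    std = float(cost_samples.std())
    snr = mean / (std + eps)
    return 1.0 / (1.0 + snr)  # low SNR -> bonus near 1; high SNR -> near 0
```

A state whose ensemble cost estimates scatter widely would receive a larger bonus than one with tightly clustered estimates, steering the policy toward regions where the inferred constraint is still uncertain.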