Safe-Support Q-Learning: Learning without Unsafe Exploration

arXiv cs.LG / 4/29/2026


Key Points

  • The paper proposes a stricter safe reinforcement learning requirement that forbids visiting unsafe states during training, not just penalizing them or constraining them indirectly.
  • It introduces a Q-learning-based safe RL framework that uses a behavior policy supported on a safe set, under the assumption that the trajectories this policy induces remain within that safe region.
  • The method employs a two-stage training strategy: it first trains the Q-function with a KL-regularized Bellman target that keeps the policy induced by the Q-values close to the behavior policy, and then derives that induced policy and extracts a parametric approximation of it (a rough sketch of the target follows this list).
  • The proposed parametric policy extraction aims to approximate an optimal policy while maintaining safety, and the framework is designed to be adaptable across different action spaces and behavior-policy types.
  • Experiments report stable learning, well-calibrated value estimates, and safer behavior with comparable or improved performance versus existing baselines.
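
As a rough illustration only (the summary does not give the paper's exact formulation), a KL-regularized Bellman target with behavior policy $\mu$ and temperature $\beta$ typically takes the form

$$\mathcal{T}^{\mathrm{KL}} Q(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s'}\!\left[\beta \log \sum_{a'} \mu(a' \mid s')\, \exp\!\left(\frac{Q(s',a')}{\beta}\right)\right],$$

with induced policy $\pi(a \mid s) \propto \mu(a \mid s)\, \exp\!\big(Q(s,a)/\beta\big)$. Because this policy inherits the support of $\mu$, it places zero probability on actions outside the safe set, which is how restricting the behavior policy's support translates into the stricter "no unsafe visitation" requirement.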

Abstract

Ensuring safety during reinforcement learning (RL) training is critical in real-world applications where unsafe exploration can lead to devastating outcomes. While most safe RL methods mitigate risk through constraints or penalization, they still allow exploration of unsafe states during training. In this work, we adopt a stricter safety requirement that eliminates unsafe state visitation during training. To achieve this goal, we propose a Q-learning-based safe RL framework that leverages a behavior policy supported on a safe set. Under the assumption that the induced trajectories remain within the safe set, this policy enables sufficient exploration within the safe region without requiring near-optimality. We adopt a two-stage framework in which the Q-function and policy are trained separately. Specifically, we introduce a KL-regularized Bellman target that constrains the Q-function to remain close to the behavior policy. We then derive the policy induced from the trained Q-values and propose a parametric policy extraction method to approximate the optimal policy. Our approach provides a unified framework that can be adapted to different action spaces and types of behavior policies. Experimental results demonstrate that the proposed method achieves stable learning and well-calibrated value estimates and yields safer behavior with comparable or better performance than existing baselines.
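
The following is a minimal numpy sketch of this two-stage recipe for a finite action space, assuming a standard KL-regularized backup; the temperature `beta`, the log-sum-exp target, and the cross-entropy extraction loss are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log-sum-exp over a finite array."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def kl_regularized_target(reward, q_next, mu_next, gamma=0.99, beta=0.1):
    """Stage 1 (assumed form): r + gamma * beta * log sum_a' mu(a'|s') exp(Q(s',a')/beta).

    The sum runs only over actions in the behavior policy's support, so the
    backup never queries Q-values for actions the safe policy would not take.
    """
    support = mu_next > 0
    v_next = beta * logsumexp(q_next[support] / beta + np.log(mu_next[support]))
    return reward + gamma * v_next

def induced_policy(q, mu, beta=0.1):
    """Stage 2, closed form: pi(a|s) proportional to mu(a|s) * exp(Q(s,a)/beta)."""
    log_p = np.full_like(q, -np.inf)
    support = mu > 0
    log_p[support] = q[support] / beta + np.log(mu[support])
    log_p -= logsumexp(log_p[support])
    return np.exp(log_p)  # zero mass outside mu's (safe) support

def extraction_grad(theta, features, q, mu, beta=0.1):
    """One gradient step of a parametric extraction: fit a softmax policy
    pi_theta(a|s) = softmax(features @ theta) to the induced policy by
    minimizing the cross-entropy (forward KL) at this state.
    """
    target = induced_policy(q, mu, beta)
    logits = features @ theta
    logits -= logits.max()
    pi_theta = np.exp(logits) / np.exp(logits).sum()
    # Gradient of -sum_a target(a) * log pi_theta(a) with respect to theta.
    return features.T @ (pi_theta - target)

if __name__ == "__main__":
    q = np.array([1.0, 0.5, -2.0])
    mu = np.array([0.5, 0.5, 0.0])   # third action lies outside the safe support
    print(induced_policy(q, mu))      # assigns zero probability to that action
```

Restricting both the backup and the induced policy to the support of `mu` is what keeps the value estimates tied to actions the safe behavior policy could actually take.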