Safe Continual Reinforcement Learning in Non-stationary Environments

arXiv cs.LG / April 22, 2026

Key Points

  • The paper addresses a gap in reinforcement learning by studying how to combine safety guarantees with continual adaptation in non-stationary environments where dynamics can change unexpectedly (the standard constrained objective behind such safety guarantees is sketched after this list).
  • It introduces three benchmark environments designed to test safety-critical continual adaptation and evaluates representative methods spanning safe RL, continual RL, and hybrid approaches.
  • The authors identify a fundamental trade-off: existing methods typically cannot both satisfy safety constraints and prevent catastrophic forgetting under non-stationary dynamics.
  • They analyze regularization-based strategies that partially relieve this tension and assess their strengths and limitations.
  • The study concludes with open challenges and future directions for building safe, resilient learning-based controllers for long-term autonomous operation.
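
The paper's formal problem statement is not reproduced in this summary, but the safety guarantees referenced above are conventionally formalized in safe RL as a constrained Markov decision process (CMDP), in which the agent maximizes return subject to a bound on expected cumulative cost:

$$\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, r(s_t, a_t)\right] \quad \text{subject to} \quad \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, c(s_t, a_t)\right] \le d,$$

where $r$ is the task reward, $c$ a per-step safety cost, $d$ the allowed cost budget, and $\gamma \in (0,1)$ the discount factor. Under non-stationarity, the transition dynamics generating $s_{t+1}$ drift over time, so a policy $\pi$ that once satisfied the constraint may silently stop doing so.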

Abstract

Reinforcement learning (RL) offers a compelling data-driven paradigm for synthesizing controllers for complex systems when accurate physical models are unavailable; however, most existing control-oriented RL methods assume stationarity and, therefore, struggle in real-world non-stationary deployments where system dynamics and operating conditions can change unexpectedly. Moreover, RL controllers acting in physical environments must satisfy safety constraints throughout their learning and execution phases, rendering transient violations during adaptation unacceptable. Although continual RL and safe RL have each addressed non-stationarity and safety, respectively, their intersection remains comparatively unexplored, motivating the study of safe continual RL algorithms that can adapt over the system's lifetime while preserving safety. In this work, we systematically investigate safe continual reinforcement learning by introducing three benchmark environments that capture safety-critical continual adaptation and by evaluating representative approaches from safe RL, continual RL, and their combinations. Our empirical results reveal a fundamental tension between maintaining safety constraints and preventing catastrophic forgetting under non-stationary dynamics, with existing methods generally failing to achieve both objectives simultaneously. To address this shortcoming, we examine regularization-based strategies that partially mitigate this trade-off and characterize their benefits and limitations. Finally, we outline key open challenges and research directions toward developing safe, resilient learning-based controllers capable of sustained autonomous operation in changing environments.
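
The abstract does not specify which regularization-based strategies the authors examine. Purely as an illustration, the sketch below combines two standard ingredients that fit the description: a Lagrangian relaxation of the CMDP safety constraint (as used in Lagrangian PPO/SAC variants) and an EWC-style quadratic penalty (Kirkpatrick et al., 2017) that anchors the policy to parameters learned before a dynamics shift. The class name, method names, and structure are hypothetical, not the paper's actual method.

```python
# Illustrative sketch only: Lagrangian safe-RL loss plus an EWC-style
# anti-forgetting penalty. This is NOT the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SafeContinualLoss(nn.Module):
    """Loss = -reward objective + lambda * (cost - budget) + EWC penalty."""

    def __init__(self, policy: nn.Module, cost_budget: float, ewc_weight: float):
        super().__init__()
        self.policy = policy
        self.cost_budget = cost_budget
        self.ewc_weight = ewc_weight
        # Learnable Lagrange multiplier, kept non-negative via softplus.
        self.log_lambda = nn.Parameter(torch.zeros(()))
        self.anchor = None  # (old params, Fisher diagonal) from previous phase

    def consolidate(self, fisher_diag: list[torch.Tensor]) -> None:
        """Snapshot current policy parameters before a dynamics change."""
        params = [p.detach().clone() for p in self.policy.parameters()]
        self.anchor = (params, [f.detach() for f in fisher_diag])

    def forward(self, reward_obj: torch.Tensor, cost_est: torch.Tensor) -> torch.Tensor:
        lam = F.softplus(self.log_lambda)
        # Primal step: maximize reward, penalize expected cost above budget
        # (lambda is detached so this term only updates the policy).
        loss = -reward_obj + lam.detach() * (cost_est - self.cost_budget)
        # Dual step: gradient descent on this term *increases* lambda whenever
        # the constraint is violated (cost is detached: only lambda updates).
        loss = loss - lam * (cost_est.detach() - self.cost_budget)
        # EWC-style penalty: pull parameters toward their pre-shift values,
        # weighted by how important each parameter was (Fisher diagonal).
        if self.anchor is not None:
            old_params, fisher = self.anchor
            ewc = sum(
                (f * (p - p_old).pow(2)).sum()
                for p, p_old, f in zip(self.policy.parameters(), old_params, fisher)
            )
            loss = loss + 0.5 * self.ewc_weight * ewc
        return loss
```

Because the policy is registered as a submodule, a single optimizer over `SafeContinualLoss.parameters()` performs simultaneous primal descent (policy) and dual ascent (multiplier); in practice the multiplier usually gets its own, larger learning rate. The Fisher diagonal passed to `consolidate` would be estimated from trajectories gathered just before the dynamics shift, as in standard EWC. The abstract's reported tension is visible even in this toy form: the EWC term resists exactly the parameter updates that re-establishing constraint satisfaction under new dynamics may require.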