DARLING: Detection Augmented Reinforcement Learning with Non-Stationary Guarantees

arXiv cs.LG · April 21, 2026


Key Points

  • The paper studies model-free reinforcement learning in piecewise-stationary episodic finite-horizon MDPs where both rewards and transitions may change multiple times without the agent knowing when.
  • It introduces DARLING, a modular “detection-augmented” wrapper that can be applied to both tabular and linear MDP settings without requiring prior information about change points (a sketch of the wrapper pattern follows this list).
  • Under explicit change-point separation and reachability assumptions, DARLING improves on the best previously known dynamic regret bounds in both settings.
  • The paper also proves the first minimax lower bounds for piecewise-stationary RL in tabular and linear MDPs, establishing that DARLING is nearly minimax optimal.
  • Experiments on standard benchmarks show that DARLING consistently outperforms state-of-the-art baselines across a range of non-stationary scenarios.
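To make the “detection-augmented wrapper” idea concrete, here is a minimal Python sketch of the general pattern: run any base episodic RL learner, monitor a statistic of its performance with a change-point detector, and restart the learner from scratch when an alarm fires. The detector below is a toy mean-shift test on episode returns, and the `make_base_learner` / `play_episode` interface is hypothetical; the paper's actual detection statistic and base algorithms for tabular and linear MDPs are more sophisticated.

```python
import numpy as np

class MeanShiftDetector:
    """Toy change-point detector on scalar episode returns.

    Flags a change when the mean over the most recent `window`
    episodes drifts from the mean of all earlier episodes by more
    than `threshold`. A stand-in for the paper's detection
    subroutine, not its actual test statistic.
    """

    def __init__(self, window=20, threshold=1.0):
        self.window = window
        self.threshold = threshold
        self.history = []

    def update(self, x):
        self.history.append(x)
        if len(self.history) < 2 * self.window:
            return False  # not enough data to compare two windows
        recent = np.mean(self.history[-self.window:])
        past = np.mean(self.history[:-self.window])
        return abs(recent - past) > self.threshold

    def reset(self):
        self.history = []


def run_detection_augmented(make_base_learner, detector, env, num_episodes):
    """Generic detection-augmented loop: restart the base learner on alarms.

    `make_base_learner` is a factory returning a fresh learner, assumed
    (hypothetically) to expose `play_episode(env)`, which runs one
    episode and returns its total reward.
    """
    learner = make_base_learner()
    returns = []
    for _ in range(num_episodes):
        ret = learner.play_episode(env)  # hypothetical base-learner interface
        returns.append(ret)
        if detector.update(ret):
            # Alarm: the environment likely changed, so discard all
            # pre-change experience by restarting the base learner.
            learner = make_base_learner()
            detector.reset()
    return returns
```

The design intuition is that restarting on a detected change discards pre-change data that would otherwise bias the learner's estimates, at the cost of relearning from scratch. The change-point separation and reachability assumptions in the paper are what allow a detector to fire quickly and reliably enough for this trade-off to pay off.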

Abstract

We study model-free reinforcement learning (RL) in non-stationary finite-horizon episodic Markov decision processes (MDPs) without prior knowledge of the non-stationarity. We focus on the piecewise-stationary (PS) setting, where both the reward and transition dynamics can change an arbitrary number of times. We propose Detection Augmented Reinforcement Learning (DARLING), a modular wrapper for PS-RL that applies to both tabular and linear MDPs, without knowledge of the changes. Under certain change-point separation and reachability conditions, DARLING improves the best available dynamic regret bounds in both settings and yields strong empirical performance. We further establish the first minimax lower bounds for PS-RL in tabular and linear MDPs, showing that DARLING is the first nearly optimal algorithm. Experiments on standard benchmarks demonstrate that DARLING consistently surpasses state-of-the-art methods across diverse non-stationary scenarios.
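For context, dynamic regret in episodic RL compares the learner against the per-episode optimal policy of the (possibly changed) MDP, rather than against a single fixed comparator. A standard definition, assumed here rather than quoted from the paper, is

$$
\text{D-Regret}(K) \;=\; \sum_{k=1}^{K} \left( V^{*}_{k,1}(s^{k}_{1}) \;-\; V^{\pi^{k}}_{k,1}(s^{k}_{1}) \right),
$$

where $K$ is the number of episodes, $V^{*}_{k,1}$ is the optimal value function of the MDP active in episode $k$, $\pi^{k}$ is the policy the learner executes in episode $k$, and $s^{k}_{1}$ is that episode's initial state. Because the comparator $V^{*}_{k,1}$ can shift at every unknown change point, bounds of this form are strictly harder to obtain than static regret bounds against one fixed optimal policy.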