AI Navigate

Stochastic Resetting Accelerates Policy Convergence in Reinforcement Learning

arXiv cs.LG / March 18, 2026


Key Points

  • The authors study stochastic resetting in reinforcement learning and show it accelerates policy convergence in both tabular and neural-network-based tasks.
  • In tabular grid environments, resetting speeds up convergence even when it does not reduce the search time of a purely diffusive agent, indicating a mechanism beyond classical first-passage optimization.
  • In continuous control with neural-network value approximation, random resetting improves deep reinforcement learning when exploration is difficult and rewards are sparse: truncating long, uninformative trajectories enhances value propagation while, unlike temporal discounting, preserving the optimal policy.
  • The work presents stochastic resetting as a simple, tunable optimization principle that translates a statistical mechanics concept into practical guidance for accelerating learning in RL.
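The tabular mechanism described above can be sketched in a few lines. The environment, hyperparameters, and reset rate below are illustrative assumptions, not the paper's exact setup: a Q-learning agent walks a 1-D corridor with a single sparse reward at the far end, and with probability `RESET_RATE` per step it is returned to the start state mid-episode.

```python
import random

N = 10                 # corridor states 0..N-1; reward only at the far end
GOAL = N - 1
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1
RESET_RATE = 0.05      # tunable resetting probability (illustrative value)

def greedy(q_row, rng):
    """Argmax with random tie-breaking."""
    m = max(q_row)
    return rng.choice([a for a, q in enumerate(q_row) if q == m])

def train(episodes=500, max_steps=400, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]       # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            a = rng.randrange(2) if rng.random() < EPS else greedy(Q[s], rng)
            s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
            r = 1.0 if s2 == GOAL else 0.0
            # Standard Q-learning update; resetting enters only through the
            # dynamics, which is how (per the paper's claim) the optimal
            # policy of the underlying task is preserved.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            if s2 == GOAL:
                break
            # Stochastic resetting: truncate long, uninformative excursions.
            s = 0 if rng.random() < RESET_RATE else s2
    return Q

Q = train()
print(round(Q[GOAL - 1][1], 3))   # positive once the goal has been reached
```

Sweeping `RESET_RATE` (including zero) and comparing episodes-to-convergence is the kind of experiment the tabular results summarize.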

Abstract

Stochastic resetting, where a dynamical process is intermittently returned to a fixed reference state, has emerged as a powerful mechanism for optimizing first-passage properties. Existing theory largely treats static, non-learning processes. Here we ask how stochastic resetting interacts with reinforcement learning, where the underlying dynamics adapt through experience. In tabular grid environments, we find that resetting accelerates policy convergence even when it does not reduce the search time of a purely diffusive agent, indicating a novel mechanism beyond classical first-passage optimization. In a continuous control task with neural-network-based value approximation, we show that random resetting improves deep reinforcement learning when exploration is difficult and rewards are sparse. Unlike temporal discounting, resetting preserves the optimal policy while accelerating convergence by truncating long, uninformative trajectories to enhance value propagation. Our results establish stochastic resetting as a simple, tunable mechanism for accelerating learning, translating a canonical phenomenon of statistical mechanics into an optimization principle for reinforcement learning.
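In a deep RL pipeline, the same idea can be packaged as an environment wrapper. The sketch below is a hypothetical interface modeled on the classic Gym step API (`obs, reward, done, info`); the class name, `reset_rate` value, and toy corridor are all illustrative assumptions, not the authors' implementation.

```python
import random

class StochasticResetWrapper:
    """With probability reset_rate per step, return the environment to its
    initial state without ending the episode, truncating long uninformative
    excursions while leaving the reward structure untouched."""
    def __init__(self, env, reset_rate=0.02, seed=None):
        self.env = env
        self.reset_rate = reset_rate        # tunable resetting probability
        self.rng = random.Random(seed)

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if not done and self.rng.random() < self.reset_rate:
            obs = self.env.reset()          # jump back to the start state
            info = dict(info, reset=True)
        return obs, reward, done, info

# Toy corridor environment for demonstration.
class Corridor:
    def __init__(self, n=50):
        self.n, self.s = n, 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, action):                 # action: +1 or -1
        self.s = min(max(self.s + action, 0), self.n - 1)
        done = self.s == self.n - 1
        return self.s, float(done), done, {}

env = StochasticResetWrapper(Corridor(), reset_rate=0.1, seed=0)
obs = env.reset()
resets = 0
for _ in range(200):
    obs, r, done, info = env.step(+1)
    resets += info.get("reset", False)
    if done:
        obs = env.reset()
print(resets)   # number of mid-episode resets observed
```

Because the wrapper only perturbs the dynamics, any off-policy learner can be dropped in unchanged, and `reset_rate` becomes the single knob the abstract describes as "simple" and "tunable".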