AI Navigate

Ergodicity in reinforcement learning

arXiv cs.LG / 3/12/2026


Key Points

  • The paper argues that non-ergodic reward processes render the standard RL objective (averaging rewards over many trajectories) uninformative for deployment on a single, long trajectory.
  • It relates non-ergodic reward processes in reinforcement learning to the more widely used notion of ergodic Markov chains, and provides an instructive example to illustrate the issue.
  • It surveys existing approaches that optimize long-term performance of individual trajectories under non-ergodic reward dynamics.
  • The work discusses implications for designing RL objectives and evaluation methods in real-world, long-running deployment contexts.

Abstract

In reinforcement learning, we typically aim to optimize the expected value of the sum of rewards an agent collects over a trajectory. However, if the process generating these rewards is non-ergodic, the expected value, i.e., the average over infinitely many trajectories with a given policy, is uninformative for the average over a single, but infinitely long trajectory. Thus, if we care about how the individual agent performs during deployment, the expected value is not a good optimization objective. In this paper, we discuss the impact of non-ergodic reward processes on reinforcement learning agents through an instructive example, relate the notion of ergodic reward processes to more widely used notions of ergodic Markov chains, and present existing solutions that optimize long-term performance of individual trajectories under non-ergodic reward dynamics.
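The gap between the two averages is easy to see in a simulation. The sketch below uses a standard illustrative non-ergodic process (not necessarily the paper's own example): a multiplicative reward where each step scales an agent's wealth by 1.5 or 0.6 with equal probability. The expected per-step factor is 1.05, so the ensemble average grows, yet the time-average growth rate is the geometric mean sqrt(1.5 × 0.6) ≈ 0.95, so almost every individual trajectory decays.

```python
import random

def run_trajectory(n_steps: int, seed: int) -> float:
    """One agent's final wealth after n_steps multiplicative rewards.

    Each step multiplies wealth by 1.5 (heads) or 0.6 (tails) with
    equal probability. E[factor] = 1.05 > 1, but the geometric mean
    sqrt(1.5 * 0.6) ~= 0.95 < 1 -- the hallmark of non-ergodicity.
    """
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(n_steps):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    return wealth

n_traj, n_steps = 100_000, 100
finals = [run_trajectory(n_steps, seed) for seed in range(n_traj)]

# Ensemble average: what the standard RL objective optimizes.
ensemble_avg = sum(finals) / n_traj
# Median trajectory: what a typical individual agent experiences.
typical = sorted(finals)[n_traj // 2]

print(f"ensemble average:  {ensemble_avg:.4g}")  # large (driven by rare winners)
print(f"median trajectory: {typical:.4g}")       # near zero
```

The ensemble average is propped up by a vanishing fraction of lucky trajectories, while the median agent is ruined; optimizing the expected value would therefore tell you little about deployment on a single long run, which is precisely the mismatch the paper addresses.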