Extending Differential Temporal Difference Methods for Episodic Problems

arXiv cs.AI / 5/7/2026


Key Points

  • Differential temporal difference (TD) methods for infinite-horizon RL use reward centering to keep returns bounded and remove state-independent value offsets (a minimal sketch of this baseline update follows this list).
  • The paper shows that reward centering can change the optimal policy in episodic settings, motivating a targeted generalization.
  • It proposes a generalized differential TD method for episodic problems and proves it preserves the ordering of policies even with termination.
  • The work establishes an equivalence to a form of linear TD, inheriting existing theoretical guarantees, and derives differential versions of several streaming RL algorithms.
  • Experiments across multiple base algorithms and environments indicate that reward centering can improve sample efficiency for episodic problems.
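
The continuing-setting baseline that the paper generalizes is differential TD(0) with reward centering: the TD error is computed against rewards minus a learned average-reward estimate, with no discounting, and the same TD error drives the average-reward update. Below is a minimal tabular sketch of that baseline (not the paper's episodic generalization); the Gym-style `env` interface, the uniform-random policy, and the step sizes `alpha` and `eta` are illustrative assumptions.

```python
import numpy as np

def differential_td0(env, num_steps, alpha=0.1, eta=0.1, seed=0):
    """Tabular differential TD(0) with reward centering, continuing setting.

    A minimal sketch under assumed interfaces: `env` follows a Gym-style
    API with discrete states (env.reset() -> state, env.step(a) -> next
    state and reward first), and env.observation_space.n /
    env.action_space.n give the table sizes.
    """
    rng = np.random.default_rng(seed)
    v = np.zeros(env.observation_space.n)  # differential value estimates
    r_bar = 0.0                            # average-reward (centering) estimate
    s = env.reset()
    for _ in range(num_steps):
        a = rng.integers(env.action_space.n)  # uniform-random policy, for illustration
        s_next, r, *_ = env.step(a)
        # Centered TD error: reward minus the average-reward estimate,
        # plus an undiscounted bootstrap (no gamma in the differential setting).
        delta = r - r_bar + v[s_next] - v[s]
        v[s] += alpha * delta
        r_bar += eta * alpha * delta          # center rewards via the same TD error
        s = s_next
    return v, r_bar
```

Updating the centering term from the TD error, rather than from a plain running mean of rewards, is the convention in the differential TD literature; it lets the average-reward estimate track the long-run reward rate of the policy being evaluated.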

Abstract

Differential temporal difference (TD) methods are value-based reinforcement learning algorithms that have been proposed for infinite-horizon problems. They rely on reward centering, in which each reward is centered by subtracting the average reward. This keeps the return bounded and removes a value function's state-independent offset. However, reward centering can alter the optimal policy in episodic problems, limiting its applicability. Motivated by recent work emphasizing the role of normalization in streaming deep reinforcement learning, we study reward centering in episodic problems and propose a generalization of differential TD. We prove that this generalization maintains the ordering of policies in the presence of termination, and thus extends differential TD to episodic problems. We show an equivalence with a form of linear TD, thereby inheriting the theoretical guarantees established for those algorithms. We then extend several streaming reinforcement learning algorithms to their differential counterparts. Across a range of base algorithms and environments, we empirically validate that reward centering can improve sample efficiency in episodic problems.
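
A sketch of the intuition behind the episodic failure mode (not the paper's formal argument): subtracting a constant c from every reward shifts an episode's return by an amount proportional to the episode length.

```latex
% Subtracting a constant c from each reward in an episode of length T:
G_c = \sum_{t=0}^{T-1} \bigl(R_{t+1} - c\bigr)
    = \sum_{t=0}^{T-1} R_{t+1} \;-\; cT
```

Because T varies with the policy, the -cT term penalizes long episodes more than short ones and can therefore re-rank policies; in the continuing average-reward setting the analogous shift is the same for every policy, leaving the ordering intact. This is the gap that the paper's generalization, which provably preserves policy ordering under termination, is designed to close.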