On the "Causality" Step in Policy Gradient Derivations: A Pedagogical Reconciliation of Full Return and Reward-to-Go

arXiv cs.AI / 4/7/2026


Key Points

  • The paper analyzes the commonly cited “causality” step in policy gradient derivations, clarifying precisely why rewards from earlier in the trajectory drop out when the full return is replaced by the reward-to-go.
  • It provides an explicit mathematical derivation using prefix trajectory distributions and the score-function identity, rather than relying on a heuristic appeal to causality (see the sketch after this list).
  • The authors show that using reward-to-go does not alter the resulting REINFORCE-style estimator; the two derivations differ only in how the objective is decomposed before arriving at the same estimator.
  • Conceptually, reward-to-go is derived directly from decomposing the learning objective over trajectory prefixes, with the standard causality argument emerging only as a corollary.
  • Overall, the contribution is pedagogical: it improves rigor and intuition in introductory derivations of policy gradients without changing the underlying algorithm.
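To make the step concrete, the following is a compact sketch of the identity involved, written in generic policy-gradient notation (my own, not necessarily the paper's): the full-return and reward-to-go forms of the REINFORCE gradient coincide because every past-reward term has zero expectation under the action distribution at time $t$.

```latex
% Full-return and reward-to-go forms of the REINFORCE gradient for
% J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[\sum_t r_t]:
\nabla_\theta J(\theta)
  = \mathbb{E}_{\tau \sim \pi_\theta}\Big[\sum_{t=0}^{T-1}
      \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=0}^{T-1} r_{t'}\Big]
  = \mathbb{E}_{\tau \sim \pi_\theta}\Big[\sum_{t=0}^{T-1}
      \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=t}^{T-1} r_{t'}\Big].
% Each past-reward term (t' < t) vanishes: condition on the prefix
% \tau_{0:t} = (s_0, a_0, \dots, s_t), which already determines r_{t'}:
\mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, r_{t'}\big]
  = \mathbb{E}_{\tau_{0:t}}\Big[ r_{t'}\,
      \mathbb{E}_{a_t \sim \pi_\theta(\cdot \mid s_t)}
        \big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\big] \Big] = 0,
% since the inner expectation is zero by the score-function identity:
% \mathbb{E}_{a \sim \pi_\theta}[\nabla_\theta \log \pi_\theta(a \mid s)]
%   = \sum_a \nabla_\theta \pi_\theta(a \mid s) = \nabla_\theta 1 = 0.
```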

Abstract

In introductory presentations of policy gradients, one often derives the REINFORCE estimator using the full trajectory return and then states, by “causality,” that the full return may be replaced by the reward-to-go. Although this statement is correct, it is frequently presented at a level of rigor that leaves it unclear why the past-reward terms disappear. This short paper isolates that step and gives a mathematically explicit derivation based on prefix trajectory distributions and the score-function identity. The resulting account does not change the estimator; its contribution is conceptual. Instead of presenting reward-to-go as a post hoc unbiased replacement for the full return, it shows that reward-to-go arises directly once the objective is decomposed over prefix trajectories. In this formulation, the usual causality argument is recovered as a corollary of the derivation rather than as an additional heuristic principle.
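As a sanity check of the claim that the estimator itself is unchanged, here is a minimal numerical sketch (my own construction, not code from the paper): on a toy repeated two-action problem with a state-independent softmax policy, the full-return and reward-to-go REINFORCE estimators should report approximately equal means, with the reward-to-go columns showing lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3                          # horizon
theta = np.array([0.3, -0.2])  # logits of a state-independent softmax policy over 2 actions

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def grad_log_pi(theta, a):
    # Gradient of log softmax(theta)[a] w.r.t. theta: one_hot(a) - pi
    g = -softmax(theta)
    g[a] += 1.0
    return g

def sample_trajectory():
    pi = softmax(theta)
    actions = rng.choice(2, size=T, p=pi)
    # Reward at step t depends on the action taken at step t (plus noise)
    rewards = actions.astype(float) + 0.1 * rng.standard_normal(T)
    return actions, rewards

def estimators():
    actions, rewards = sample_trajectory()
    g_full = np.zeros_like(theta)
    g_togo = np.zeros_like(theta)
    full_return = rewards.sum()
    for t in range(T):
        g = grad_log_pi(theta, actions[t])
        g_full += g * full_return          # weight score by full return
        g_togo += g * rewards[t:].sum()    # weight score by reward-to-go
    return g_full, g_togo

N = 100_000
full = np.empty((N, 2))
togo = np.empty((N, 2))
for i in range(N):
    full[i], togo[i] = estimators()

# Means match (same unbiased gradient); reward-to-go has smaller variance.
print("mean, full return: ", full.mean(axis=0))
print("mean, reward-to-go:", togo.mean(axis=0))
print("var,  full return: ", full.var(axis=0))
print("var,  reward-to-go:", togo.var(axis=0))
```

The state-independent setup is chosen deliberately: past rewards are independent of the current action, so weighting the score by them adds pure noise, which is exactly the variance the reward-to-go form removes.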