Intentional Updates for Streaming Reinforcement Learning

arXiv cs.LG · April 22, 2026


Key Points

  • Gradient-based learning with a fixed step size in parameter space can cause unpredictable function-level changes, which is especially destabilizing in streaming reinforcement learning (batch size = 1).
  • The paper introduces “intentional updates,” where the practitioner first specifies the desired effect on the function output (e.g., a fixed fractional reduction of TD error or a bounded change in the policy) and then computes the step size to achieve it approximately.
  • It adapts the idea behind Normalized Least Mean Squares (online linear regression) to deep RL by defining intended outcomes for Intentional TD and Intentional Policy Gradient, including limiting local KL divergence.
  • Practical algorithms are proposed that combine eligibility traces with diagonal scaling, and experiments show state-of-the-art streaming results that often match batch and replay-buffer methods.
  • Overall, the approach targets the core instability arising from un-averaged stochasticity in streaming updates by making per-step behavior more controlled and predictable.
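To make the "specify the outcome, then solve for the step size" idea concrete, here is a minimal sketch of the Normalized Least Mean Squares (NLMS) precedent the paper builds on, for a linear model. The function names and the `mu`/`eps` parameters are illustrative choices, not the paper's notation.

```python
# NLMS sketch: solve for the step size so that one update shrinks the
# current prediction error by a chosen fraction mu (0 < mu <= 1).

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def nlms_step(w, x, target, mu=0.5, eps=1e-8):
    """One online update of linear weights w on input x."""
    error = target - dot(w, x)             # pre-update prediction error
    step = mu * error / (eps + dot(x, x))  # step size solved from the goal
    w_new = [wi + step * xi for wi, xi in zip(w, x)]
    return w_new, error

w = [0.0, 0.0]
x = [1.0, 2.0]
w, e_before = nlms_step(w, x, target=1.0, mu=0.5)
e_after = 1.0 - dot(w, x)
# The post-update error is (1 - mu) times the pre-update error,
# regardless of the scale of x -- the controlled, predictable per-step
# behavior that the paper's key points emphasize.
```

Note how the normalization by `dot(x, x)` is what decouples the effect of the update from the magnitude of the input, which is exactly the property the streaming setting (batch size = 1) lacks with a fixed parameter-space step size.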

Abstract

In gradient-based learning, a step size chosen in parameter units does not produce a predictable per-step change in function output. This often leads to instability in the streaming setting (i.e., batch size = 1), where stochasticity is not averaged out and update magnitudes can momentarily become arbitrarily large or small. Instead, we propose intentional updates: first specify the intended outcome of an update, then solve for the step size that approximately achieves it. This strategy has precedent in online supervised linear regression via the Normalized Least Mean Squares (NLMS) algorithm, which selects a step size to yield a specified change in the function output proportional to the current error. We extend this principle to streaming deep reinforcement learning by defining appropriate intended outcomes: Intentional TD aims for a fixed fractional reduction of the TD error, and Intentional Policy Gradient aims for a bounded per-step change in the policy, limiting local KL divergence. We propose practical algorithms combining eligibility traces and diagonal scaling. Empirically, these methods yield state-of-the-art streaming performance, frequently performing on par with batch and replay-buffer approaches.
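As a hedged illustration of the Intentional TD target (a fixed fractional reduction of the TD error), the step size can be solved in closed form in the linear case. This is my own linear-approximation sketch, not the paper's algorithm: the actual deep-RL methods add eligibility traces and diagonal scaling, and the `kappa`/`eps` names are assumptions.

```python
# Linear-case sketch of "intentional TD": choose the step size alpha so
# that one semi-gradient update w' = w + alpha * delta * x reduces the
# TD error by a fixed fraction kappa.

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def intentional_td_step(w, x, x_next, r, gamma=0.99, kappa=0.5, eps=1e-8):
    delta = r + gamma * dot(w, x_next) - dot(w, x)  # TD error before update
    # After the update, the TD error becomes
    #   delta * (1 + alpha * (gamma * <x', x> - <x, x>)),
    # so setting it to (1 - kappa) * delta and solving for alpha gives:
    denom = dot(x, x) - gamma * dot(x_next, x)
    alpha = kappa / (denom + eps)                   # assumes denom > 0
    w_new = [wi + alpha * delta * xi for wi, xi in zip(w, x)]
    return w_new, delta

# One transition with orthogonal features, so the algebra is easy to check.
w = [0.0, 0.0]
w, delta = intentional_td_step(w, x=[1.0, 0.0], x_next=[0.0, 1.0], r=1.0)
delta_after = 1.0 + 0.99 * w[1] - w[0]
# delta_after is (1 - kappa) * delta: the intended outcome was specified
# at the function level, and the step size followed from it.
```

The same pattern underlies the policy-gradient variant described above, except the intended outcome there is a bound on the local KL divergence of the policy rather than a fractional error reduction.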