Delayed Homomorphic Reinforcement Learning for Environments with Delayed Feedback

arXiv cs.LG / 4/7/2026


Key Points

  • The paper studies reinforcement learning in environments with delayed feedback, showing that delays violate the Markov assumption and hinder both learning and control.
  • It argues that prior state-augmentation methods are limited because they either only reduce the burden on the critic or treat the actor and critic inconsistently, while also suffering from state-space explosion and high sample complexity.
  • The authors propose Delayed Homomorphic Reinforcement Learning (DHRL), based on MDP homomorphisms, to collapse belief-equivalent augmented states into an abstract MDP.
  • The framework is designed to preserve optimality while providing theoretical state-space compression bounds and sample-complexity analysis.
  • Experiments on MuJoCo continuous-control benchmarks indicate that the practical DHRL algorithm outperforms strong augmentation-based baselines, especially when delays are long.
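The state-space explosion mentioned above is easy to see concretely. The following is an illustrative sketch (not the paper's construction): under an observation delay of d steps, the canonical augmented state is the last observed state plus the buffer of the d actions taken since, so the augmented state space grows as |S| · |A|^d.

```python
from itertools import product

def augmented_states(states, actions, delay):
    """Enumerate canonical delay-augmented states (s_observed, (a_1, ..., a_d)).

    Toy illustration of why |S_aug| = |S| * |A|**delay grows
    exponentially in the delay length.
    """
    return [(s, buf) for s in states
            for buf in product(actions, repeat=delay)]

states = ["s0", "s1"]        # toy state space, |S| = 2
actions = ["left", "right"]  # toy action space, |A| = 2

for d in range(4):
    n = len(augmented_states(states, actions, d))
    assert n == len(states) * len(actions) ** d
    print(f"delay={d}: |S_aug| = {n}")  # 2, 4, 8, 16
```

With only two states and two actions, a delay of 10 already yields 2 · 2^10 = 2048 augmented states, which is the sample-complexity burden the paper targets.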

Abstract

Reinforcement learning in real-world systems is often accompanied by delayed feedback, which breaks the Markov assumption and impedes both learning and control. Canonical state-augmentation approaches cause state-space explosion, which introduces a severe sample-complexity burden. Despite recent progress, state-of-the-art augmentation-based baselines remain incomplete: they either predominantly reduce the burden on the critic or adopt non-unified treatments of the actor and critic. To provide a structured and sample-efficient solution, we propose delayed homomorphic reinforcement learning (DHRL), a framework grounded in MDP homomorphisms that collapses belief-equivalent augmented states and enables efficient policy learning on the resulting abstract MDP without loss of optimality. We provide theoretical analyses of state-space compression bounds and sample complexity, and introduce a practical algorithm. Experiments on continuous-control tasks from the MuJoCo benchmark confirm that our algorithm outperforms strong augmentation-based baselines, particularly under long delays.
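To make the "collapsing belief-equivalent augmented states" idea concrete, here is a hedged toy sketch of the underlying intuition (not the paper's actual homomorphism): if two augmented states predict the same belief over the current unobserved state, they can share one abstract state. The `predicted_state` model below, a deterministic walk on a line where the prediction depends only on the sum of buffered actions, is an invented example for illustration.

```python
from collections import defaultdict

def predicted_state(s, buffer):
    """Toy deterministic model: predicted current state after applying
    the buffered actions (+1/-1 moves on a line) to the last observed
    state s. Invented for illustration; not the paper's model."""
    return s + sum(buffer)

def abstract_map(augmented):
    """Group augmented states that induce the same predicted state.

    Each group plays the role of one abstract state: buffers with equal
    action sums collapse together, shrinking the effective state space.
    """
    groups = defaultdict(list)
    for s, buf in augmented:
        groups[predicted_state(s, buf)].append((s, buf))
    return dict(groups)

augmented = [(0, (1, -1)), (0, (-1, 1)), (0, (1, 1)), (1, (-1, -1))]
groups = abstract_map(augmented)
# (0, (1, -1)) and (0, (-1, 1)) both predict state 0, so 4 augmented
# states collapse into 3 abstract ones.
print(len(augmented), "->", len(groups))  # 4 -> 3
```

Under a homomorphism with this flavor, a policy learned over the abstract states transfers back to every augmented state in each group, which is the source of the compression and sample-efficiency gains the paper analyzes.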