
ARROW: Augmented Replay for RObust World models

arXiv cs.LG / 3/13/2026


Key Points

  • ARROW is a model-based continual reinforcement learning algorithm that extends DreamerV3 with a memory-efficient, distribution-matching replay buffer to mitigate catastrophic forgetting.
  • It uses two complementary buffers: a short-term buffer for recent experiences and a long-term buffer that preserves task diversity through intelligent sampling (see the sketch after this list).
  • Evaluation on Atari (tasks without shared structure) and Procgen CoinRun variants (tasks with shared structure) shows that ARROW forgets substantially less than baselines using equally sized replay buffers, while maintaining comparable forward transfer.
  • The approach draws inspiration from neuroscience, where the brain replays experiences to a predictive world model rather than directly to the policy.
  • The results highlight the potential of model-based RL with bio-inspired replay for continual learning and warrant further research.
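
The summary above doesn't detail the buffer mechanics, so the following is a minimal Python sketch of the two-buffer idea, assuming a FIFO deque for the short-term buffer and reservoir sampling as one plausible form of "intelligent sampling" for the long-term buffer. All names and capacities (DualReplayBuffer, short_capacity, recent_fraction) are illustrative, not taken from the paper.

```python
import random
from collections import deque

class DualReplayBuffer:
    """Hypothetical sketch of an ARROW-style replay buffer: a FIFO
    short-term buffer plus a long-term buffer that keeps a diverse
    sample of past experience via reservoir sampling."""

    def __init__(self, short_capacity=10_000, long_capacity=40_000):
        self.short = deque(maxlen=short_capacity)  # recent experience (FIFO)
        self.long = []                             # task-diverse reservoir
        self.long_capacity = long_capacity
        self.seen = 0                              # items offered to the reservoir

    def add(self, transition):
        self.short.append(transition)
        # Reservoir sampling keeps an approximately uniform sample over
        # everything seen so far, roughly matching the overall distribution.
        self.seen += 1
        if len(self.long) < self.long_capacity:
            self.long.append(transition)
        else:
            j = random.randrange(self.seen)
            if j < self.long_capacity:
                self.long[j] = transition

    def sample(self, batch_size, recent_fraction=0.5):
        """Mix recent and long-term experience in each world-model batch."""
        n_recent = min(int(batch_size * recent_fraction), len(self.short))
        batch = random.sample(list(self.short), n_recent)
        n_long = min(batch_size - len(batch), len(self.long))
        batch += random.sample(self.long, n_long)
        return batch
```

Because the reservoir stays uniform over everything the agent has seen, old tasks remain represented in training batches even after the FIFO buffer has long since overwritten them.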

Abstract

Continual reinforcement learning challenges agents to acquire new skills while retaining previously learned ones, with the goal of improving performance on both past and future tasks. Most existing approaches rely on model-free methods with replay buffers to mitigate catastrophic forgetting; however, these solutions often face significant scalability challenges due to large memory demands. Drawing inspiration from neuroscience, where the brain replays experiences to a predictive World Model rather than directly to the policy, we present ARROW (Augmented Replay for RObust World models), a model-based continual RL algorithm that extends DreamerV3 with a memory-efficient, distribution-matching replay buffer. Unlike standard fixed-size FIFO buffers, ARROW maintains two complementary buffers: a short-term buffer for recent experiences and a long-term buffer that preserves task diversity through intelligent sampling. We evaluate ARROW in two challenging continual RL settings: tasks without shared structure (Atari) and tasks with shared structure, where knowledge transfer is possible (Procgen CoinRun variants). Compared to model-free and model-based baselines with replay buffers of the same size, ARROW demonstrates substantially less forgetting on tasks without shared structure, while maintaining comparable forward transfer. Our findings highlight the potential of model-based RL and bio-inspired approaches for continual reinforcement learning, warranting further research.
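
For context on the reported quantities, forgetting and forward transfer have standard continual-learning definitions; the sketch below uses those common formulas, which may differ in detail from the paper's exact protocol. The array layout and the scratch baseline are assumptions made for illustration.

```python
import numpy as np

def forgetting(perf: np.ndarray) -> float:
    """perf[i, j] = score on task j after training sequentially on tasks 0..i.
    Average forgetting: best-ever score minus final score, averaged over
    every task except the last (a common definition, assumed here)."""
    best = perf.max(axis=0)   # best score ever reached on each task
    final = perf[-1]          # scores after training on the final task
    return float(np.mean(best[:-1] - final[:-1]))

def forward_transfer(perf: np.ndarray, scratch: np.ndarray) -> float:
    """scratch[j] = score of a freshly initialized agent on task j.
    perf[j - 1, j] is the zero-shot score on task j before training on it;
    positive values mean earlier tasks helped later ones."""
    T = perf.shape[0]
    return float(np.mean([perf[j - 1, j] - scratch[j] for j in range(1, T)]))
```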