D-SPEAR: Dual-Stream Prioritized Experience Adaptive Replay for Stable Reinforcement Learning in Robotic Manipulation

arXiv cs.RO / 3/31/2026


Key Points

  • The paper introduces D-SPEAR, a dual-stream prioritized experience replay method designed to stabilize off-policy reinforcement learning for contact-rich, long-horizon robotic manipulation tasks.
  • D-SPEAR decouples actor and critic sampling within a shared replay buffer, using prioritized replay to improve critic value learning while updating the actor primarily with low-error transitions to prevent policy oscillations and collapse (a minimal sketch of this split follows the list).
  • An adaptive “anchor” mechanism dynamically balances uniform vs. prioritized sampling based on the coefficient of variation of TD errors, aiming to maintain stable learning across training stages (a toy version also appears after the list).
  • The critic objective uses a Huber-based formulation to improve robustness when rewards have heterogeneous scales, reducing sensitivity to outliers.
  • Experiments on robosuite tasks such as Block-Lifting and Door-Opening show D-SPEAR outperforming SAC, TD3, and DDPG in both final performance and training stability, with ablations supporting the separate contributions of the actor and critic replay streams.
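
The split between the two streams is easiest to see in code. Below is a minimal sketch assuming a list-backed buffer and PER-style proportional priorities; the class name `DualStreamBuffer`, the exponent `alpha`, and the inverse-priority rule for the actor stream are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class DualStreamBuffer:
    """Shared replay buffer with separate sampling streams.

    Hypothetical sketch: the critic stream samples proportionally to
    |TD error|^alpha (PER-style), while the actor stream inverts the
    priorities so the policy is updated mostly on transitions the
    critic already fits well.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps          # keeps every priority strictly positive
        self.data = []          # transitions (s, a, r, s', done)
        self.td_errors = []     # |TD error| per stored transition
        self.pos = 0

    def add(self, transition, td_error=1.0):
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.td_errors.append(abs(td_error))
        else:                   # ring-buffer overwrite once full
            self.data[self.pos] = transition
            self.td_errors[self.pos] = abs(td_error)
        self.pos = (self.pos + 1) % self.capacity

    def _draw(self, probs, batch_size):
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        return idx, [self.data[i] for i in idx]

    def sample_critic(self, batch_size):
        # Prioritized stream: large TD errors are sampled more often.
        p = (np.asarray(self.td_errors) + self.eps) ** self.alpha
        return self._draw(p / p.sum(), batch_size)

    def sample_actor(self, batch_size):
        # Low-error stream: inverse priorities favor well-fit transitions.
        p = (np.asarray(self.td_errors) + self.eps) ** -self.alpha
        return self._draw(p / p.sum(), batch_size)

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, np.abs(np.asarray(td_errors))):
            self.td_errors[i] = float(e)
```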
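
The adaptive anchor can then be read as a mixing weight between the uniform and prioritized distributions. In the sketch below, the mapping from the coefficient of variation to that weight (`cv / (1 + cv)`) is a hypothetical squashing choice; the paper's exact schedule may differ.

```python
import numpy as np

def anchor_weight(td_errors, eps=1e-6):
    """Map the coefficient of variation (CV) of |TD errors| to a
    mixing weight in [0, 1): heterogeneous errors (high CV) lean on
    prioritized sampling, homogeneous errors (low CV) lean on uniform.
    The squashing function is an illustrative choice, not the paper's.
    """
    errs = np.abs(np.asarray(td_errors)) + eps
    cv = errs.std() / errs.mean()        # coefficient of variation
    return cv / (1.0 + cv)               # squash to [0, 1)

def mixed_probs(td_errors, alpha=0.6, eps=1e-6):
    """Blend uniform and prioritized sampling by the anchor weight."""
    errs = np.abs(np.asarray(td_errors)) + eps
    prio = errs ** alpha
    prio /= prio.sum()
    uniform = np.full_like(prio, 1.0 / len(prio))
    w = anchor_weight(td_errors, eps)
    return w * prio + (1.0 - w) * uniform   # still sums to 1
```

When TD errors are nearly uniform (CV ≈ 0) the mixture collapses to uniform sampling; as the error distribution spreads out, sampling shifts toward the prioritized stream.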

Abstract

Robotic manipulation remains challenging for reinforcement learning due to contact-rich dynamics, long horizons, and training instability. Although off-policy actor-critic algorithms such as SAC and TD3 perform well in simulation, they often suffer from policy oscillations and performance collapse in realistic settings, partly due to experience replay strategies that ignore the differing data requirements of the actor and the critic. We propose D-SPEAR: Dual-Stream Prioritized Experience Adaptive Replay, a replay framework that decouples actor and critic sampling while maintaining a shared replay buffer. The critic leverages prioritized replay for efficient value learning, whereas the actor is updated using low-error transitions to stabilize policy optimization. An adaptive anchor mechanism balances uniform and prioritized sampling based on the coefficient of variation of TD errors, and a Huber-based critic objective further improves robustness under heterogeneous reward scales. We evaluate D-SPEAR on challenging robotic manipulation tasks from the robosuite benchmark, including Block-Lifting and Door-Opening. Results demonstrate that D-SPEAR consistently outperforms strong off-policy baselines, including SAC, TD3, and DDPG, in both final performance and training stability, with ablation studies confirming the complementary roles of the actor-side and critic-side replay streams.
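
For reference, the standard Huber loss that such a critic objective builds on, applied to the TD residual δ = Q_θ(s, a) − y with threshold κ (the paper's exact κ and any per-stream weighting are not quoted here):

```latex
\ell_\kappa(\delta) =
\begin{cases}
  \tfrac{1}{2}\,\delta^2,                              & |\delta| \le \kappa, \\
  \kappa \left( |\delta| - \tfrac{1}{2}\kappa \right), & |\delta| > \kappa,
\end{cases}
\qquad
L_{\mathrm{critic}}(\theta) = \mathbb{E}\left[ \ell_\kappa\!\left( Q_\theta(s, a) - y \right) \right]
```

The loss is quadratic near zero and linear in the tails, so an outlier reward contributes a bounded gradient rather than one that grows with the squared error, which is what makes it attractive under heterogeneous reward scales.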