Match or Replay: Self Imitating Proximal Policy Optimization

arXiv cs.LG / 3/31/2026


Key Points

  • The paper introduces a self-imitating on-policy reinforcement learning algorithm (Match or Replay) aimed at improving exploration and sample efficiency, especially under sparse rewards.
  • It uses past high-reward state-action pairs to steer policy updates, prioritizing trajectories via optimal transport in dense-reward settings.
  • In sparse-reward environments, the method uniformly replays successful self-encountered trajectories to promote more structured exploration.
  • Experiments on MuJoCo (dense rewards), 3D Animal-AI Olympics (partially observable sparse rewards), and multi-goal PointMaze show faster convergence and higher success rates than existing self-imitating RL baselines.
  • The authors argue the approach is a robust exploration strategy for RL that could generalize to more complex tasks.
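The sparse-reward mechanism in the key points above (uniformly replaying successful self-encountered trajectories) can be sketched as a small buffer alongside the on-policy learner. This is an illustrative sketch only: the class name, buffer capacity, success test, and eviction policy are assumptions, not the paper's exact design.

```python
import random

class SuccessReplayBuffer:
    """Stores self-encountered successful trajectories and replays them
    uniformly. Illustrative sketch; capacity, the success criterion, and
    the eviction rule are assumptions, not the paper's exact design."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.trajectories = []

    def maybe_add(self, trajectory, total_return, success_threshold=0.0):
        # Keep only trajectories that actually solved the sparse-reward task.
        if total_return > success_threshold:
            self.trajectories.append(trajectory)
            if len(self.trajectories) > self.capacity:
                self.trajectories.pop(0)  # drop the oldest success

    def sample(self):
        # Uniform replay: every stored success is equally likely,
        # giving the policy a structured signal to imitate.
        return random.choice(self.trajectories) if self.trajectories else None


buf = SuccessReplayBuffer(capacity=2)
buf.maybe_add([("s0", "a0")], total_return=1.0)  # stored (success)
buf.maybe_add([("s1", "a1")], total_return=0.0)  # discarded (failure)
print(len(buf.trajectories))  # 1
```

In a full training loop, sampled trajectories would feed an auxiliary imitation term in the PPO update; that interaction is not shown here.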

Abstract

Reinforcement Learning (RL) agents often struggle with inefficient exploration, particularly in environments with sparse rewards. Traditional exploration strategies can lead to slow learning and suboptimal performance because agents fail to systematically build on previously successful experiences, thereby reducing sample efficiency. To tackle this issue, we propose a self-imitating on-policy algorithm that enhances exploration and sample efficiency by leveraging past high-reward state-action pairs to guide policy updates. Our method incorporates self-imitation by using optimal transport distance in dense-reward environments to prioritize state visitation distributions that match the most rewarding trajectory. In sparse-reward environments, we uniformly replay successful self-encountered trajectories to facilitate structured exploration. Experimental results demonstrate substantial improvements in learning efficiency across diverse environments: MuJoCo for dense rewards, and the partially observable 3D Animal-AI Olympics and multi-goal PointMaze for sparse rewards. Our approach achieves faster convergence and significantly higher success rates compared to state-of-the-art self-imitating RL baselines. These findings underscore the potential of self-imitation as a robust strategy for enhancing exploration in RL, with applicability to more complex tasks.
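The dense-reward variant described in the abstract prioritizes trajectories whose state-visitation distributions are close, in optimal transport distance, to the most rewarding trajectory. As a minimal stand-in, the sketch below computes the empirical 1-Wasserstein distance between two equal-size samples of a scalar state feature by sorting both and averaging pointwise gaps. This is a simplified 1D surrogate for illustration; the paper's distance operates on full state-visitation distributions, and the function name and weighting scheme here are assumptions.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1D samples:
    for sorted samples, the optimal coupling is the monotone matching,
    so the distance is the mean absolute gap between sorted values.
    Hypothetical 1D surrogate for the paper's optimal transport distance."""
    assert len(xs) == len(ys), "sketch assumes equal sample sizes"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)


# Score a candidate trajectory by how closely its visited states match
# the most rewarding trajectory seen so far (smaller distance = higher
# priority for the self-imitation update).
best_states = [0.0, 1.0, 2.0]
candidate = [0.5, 1.5, 2.5]
print(wasserstein_1d(best_states, candidate))  # 0.5
```

In practice one would use an optimal transport library (e.g. a Sinkhorn solver) over multidimensional states; the 1D sorted-matching shortcut is exact only for scalar distributions.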