Evolution Strategies for Deep RL pretraining

arXiv cs.LG / 4/2/2026

📰 News

Key Points

  • The paper compares evolution strategies (ES), a derivative-free optimization method, with deep reinforcement learning (DRL) on tasks of increasing difficulty including Flappy Bird, Breakout, and MuJoCo environments.
  • It finds ES do not consistently outperform DRL in training speed, despite being simpler to deploy and potentially less computationally costly.
  • When ES is used as a preliminary pretraining step for DRL, it yields improvements only in less complex settings (notably Flappy Bird), and provides minimal or no gains on harder tasks such as Breakout and the MuJoCo Walker.
  • Overall, the study suggests that ES are of limited use as a general-purpose pretraining accelerator for more demanding deep RL workloads, and that their effectiveness depends strongly on task complexity.
  • The results raise questions about the suitability of ES for scaling to the most challenging decision-making problems where DRL excels.
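To make the comparison concrete: evolution strategies optimize a policy's parameters directly, without gradients through the network, by sampling Gaussian perturbations and moving toward the better-scoring ones. The sketch below is a minimal, generic OpenAI-style ES update on a toy objective, not the paper's implementation; the function and parameter names (`evolution_strategy`, `sigma`, `lr`, `pop`) are illustrative assumptions.

```python
import numpy as np

def evolution_strategy(f, theta, sigma=0.1, lr=0.03, pop=50, iters=200, seed=0):
    """Minimal derivative-free ES sketch: estimate an ascent direction for the
    expected fitness from fitness-weighted Gaussian perturbations of theta."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))          # population of perturbations
        rewards = np.array([f(theta + sigma * e) for e in eps])
        # standardize rewards so the step size is insensitive to reward scale
        advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # fitness-weighted average of perturbations approximates the gradient
        theta = theta + lr / (pop * sigma) * eps.T @ advantage
    return theta

# Toy stand-in for an RL return: maximize -||x - 3||^2 (optimum at x = 3).
theta0 = np.zeros(5)
theta_star = evolution_strategy(lambda x: -np.sum((x - 3.0) ** 2), theta0)
```

In an RL setting, `f` would instead roll out the perturbed policy in the environment and return the episode reward; the pretraining variant studied here would hand the resulting `theta_star` to a DRL algorithm as its initialization.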

Abstract

Although Deep Reinforcement Learning has proven highly effective for complex decision-making problems, it demands significant computational resources and careful parameter adjustment in order to develop successful strategies. Evolution strategies offer a more straightforward, derivative-free approach that is less computationally costly and simpler to deploy. However, ES generally do not match the performance levels achieved by DRL, which calls into question their suitability for more demanding scenarios. This study examines the performance of ES and DRL across tasks of varying difficulty, including Flappy Bird, Breakout, and MuJoCo environments, and investigates whether ES could be used for initial training to enhance DRL algorithms. The results indicate that ES do not consistently train faster than DRL. When used as a preliminary training step, they only provide benefits in less complex environments (Flappy Bird) and show minimal or no improvement in training efficiency or stability across different parameter settings when applied to more sophisticated tasks (Breakout and MuJoCo Walker).