Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training

arXiv cs.LG / 4/3/2026


Key Points

  • The paper compares Evolution Strategies (ES) and Group Relative Policy Optimization (GRPO) for LLM post-training across four tasks in both single-task and sequential continual-learning setups.
  • While ES matches or exceeds GRPO in single-task accuracy and stays competitive in sequential settings when the iteration budget is controlled, the underlying parameter-space updates differ substantially.
  • ES performs much larger update steps and causes broader off-task KL drift, whereas GRPO produces smaller, more localized updates.
  • The authors find that ES and GRPO arrive at linearly connected solutions with no loss barrier, despite update directions that are nearly orthogonal, and they develop an analytical theory of ES that explains this geometry and the accompanying progress trade-off.
  • They highlight implications for forgetting and knowledge preservation, and release accompanying code for reproducing and extending the results.
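The two geometric findings above can be made concrete with a small toy sketch: cosine similarity between flattened update vectors measures how aligned two methods' parameter movements are, and evaluating loss along the straight line between two solutions tests for a barrier in the mode-connectivity sense. The vectors and loss surface below are synthetic stand-ins invented for illustration, not the paper's models or data.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def interpolation_losses(theta_a, theta_b, loss_fn, n_points=11):
    """Loss along the straight line between two solutions.

    A flat profile (no bump above the endpoint losses) is what
    'linearly connected with no loss barrier' means.
    """
    alphas = np.linspace(0.0, 1.0, n_points)
    return [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]

# Synthetic loss whose minimum is an entire plane (x = 0), so two
# far-apart minima are joined by a zero-barrier line segment.
loss_fn = lambda t: t[0] ** 2

theta_init = np.array([0.5, 0.0, 0.0])   # shared starting point
theta_es   = np.array([0.0, 5.0, 0.0])   # large update, big off-task movement
theta_grpo = np.array([0.0, 0.0, 0.3])   # small, localized update

# Update directions are nearly orthogonal (cosine close to zero)...
print(cosine(theta_es - theta_init, theta_grpo - theta_init))
# ...yet the linear path between the two solutions never leaves the
# zero-loss plane, i.e. there is no loss barrier.
print(max(interpolation_losses(theta_es, theta_grpo, loss_fn)))
```

The sketch mirrors the paper's qualitative picture: both endpoints solve the task (zero loss), the movements that reached them point in almost unrelated directions, and the segment between them is still barrier-free.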

Abstract

Evolution Strategies (ES) have emerged as a scalable gradient-free alternative to reinforcement-learning-based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group Relative Policy Optimization (GRPO) across four tasks in both single-task and sequential continual-learning settings. ES matches or exceeds GRPO in single-task accuracy and remains competitive sequentially when its iteration budget is controlled. Despite this similarity in task performance, the two methods produce markedly different model updates: ES makes much larger changes and induces broader off-task KL drift, whereas GRPO makes smaller, more localized updates. Strikingly, the ES and GRPO solutions are linearly connected with no loss barrier, even though their update directions are nearly orthogonal. We develop an analytical theory of ES that explains all these phenomena within a unified framework, showing how ES can accumulate large off-task movement on weakly informative directions while still making enough progress on the task to match gradient-based RL in downstream accuracy. These results show that gradient-free and gradient-based fine-tuning can reach similarly accurate yet geometrically distinct solutions, with important consequences for forgetting and knowledge preservation. The source code is publicly available: https://github.com/Bhoy1/ESvsGRPO.
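To ground what "gradient-free" means here, the following is a minimal, generic Evolution Strategies update with antithetic (mirrored) sampling: perturb the parameters with Gaussian noise, score each perturbation with a black-box reward, and step along the reward-weighted average of the noise directions. This is a textbook ES sketch on a toy quadratic reward, assuming standard hyperparameters; it is not the paper's exact recipe or scale (the function names and settings below are illustrative only).

```python
import numpy as np

def es_step(theta, reward_fn, rng, sigma=0.1, lr=0.05, pop_size=32):
    """One ES update: no gradients of reward_fn are ever computed.

    Antithetic sampling evaluates each noise direction at +sigma and
    -sigma, which reduces the variance of the update estimate.
    """
    eps = rng.standard_normal((pop_size, theta.size))
    rewards = np.array([
        reward_fn(theta + sigma * e) - reward_fn(theta - sigma * e)
        for e in eps
    ])
    # Normalize rewards so step size is insensitive to reward scale.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    update = (rewards[:, None] * eps).sum(axis=0) / (2 * pop_size * sigma)
    return theta + lr * update

# Toy objective: reward peaks at a fixed target vector.
target = np.array([1.0, -2.0, 0.5])
reward = lambda t: -np.sum((t - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(300):
    theta = es_step(theta, reward, rng)
# theta now sits close to target, found purely from reward evaluations.
```

Because the update is an average over random perturbations, ES explores many parameter directions, including weakly reward-informative ones, which is consistent with the paper's observation of large off-task movement alongside solid on-task progress.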