Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
arXiv cs.LG / 4/3/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper compares Evolution Strategies (ES) and Group Relative Policy Optimization (GRPO) for LLM post-training across four tasks in both single-task and sequential continual-learning setups.
- While ES matches or exceeds GRPO in single-task accuracy and stays competitive in sequential settings when the iteration budget is controlled, the underlying parameter-space updates differ substantially.
- ES takes much larger steps in parameter space and induces broader off-task KL drift, whereas GRPO makes smaller, more localized updates (minimal sketches of both update rules, and of the KL measurement, follow this list).
- Despite update directions that are nearly orthogonal, the two methods arrive at linearly connected solutions with no loss barrier between them, and the authors develop an analytical theory of ES to explain this geometry-progress tradeoff (see the geometry diagnostics sketched below).
- They highlight implications for forgetting and knowledge preservation, and release accompanying code for reproducing and extending the results.
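To make the comparison concrete, here is a minimal Evolution Strategies step in the style of Salimans et al. (2017), operating on a flattened parameter vector. The paper's exact ES variant is not specified in this summary, so the `es_step` name, population size, `sigma`, `lr`, and rank normalization below are all illustrative assumptions, not the authors' implementation.

```python
# Minimal antithetic-sampling ES step (Salimans et al., 2017 style).
# Hyperparameters and rank normalization are illustrative assumptions.
import numpy as np

def es_step(theta, reward_fn, pop_size=32, sigma=0.02, lr=0.01, rng=None):
    """One ES update on a flat parameter vector `theta`.

    reward_fn: maps a parameter vector to a scalar reward (e.g. task accuracy).
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((pop_size // 2, theta.size))
    eps = np.concatenate([eps, -eps])                  # antithetic pairs
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    # Rank-normalize rewards so the update is invariant to reward scale.
    ranks = rewards.argsort().argsort().astype(float)
    weights = ranks / (len(ranks) - 1) - 0.5
    grad_est = weights @ eps / (sigma * len(eps))      # score-function estimate
    return theta + lr * grad_est
```

Note that the step moves every parameter at once through the random perturbations, which is consistent with the summary's observation that ES produces large, broad parameter-space updates.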
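For contrast, a sketch of GRPO's group-relative advantage and PPO-style clipped surrogate, following the DeepSeekMath formulation. The KL-regularization term of full GRPO is omitted for brevity, and the tensor shapes and clip range are assumptions.

```python
# Sketch of the GRPO objective: standardize rewards within a group of
# completions, then apply a clipped importance-weighted surrogate.
import torch

def grpo_loss(logp_new, logp_old, rewards, clip_eps=0.2):
    """logp_new/logp_old: (G,) summed log-probs of G sampled completions
    under the current and behavior policies; rewards: (G,) scalar rewards."""
    # Group-relative advantage: standardize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logp_new - logp_old)             # importance ratio
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()       # maximize surrogate
```

Because the gradient flows only through log-probabilities of the sampled completions, GRPO's updates concentrate on the parameters most relevant to those outputs, matching the "smaller, more localized updates" observation.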
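The off-task KL drift in the third bullet can be measured by running the same off-task prompts through the base and post-trained models and averaging the per-token KL divergence. A hedged sketch, with the logit shapes assumed rather than taken from the paper:

```python
# Mean per-token KL(base || tuned) on off-task prompts, as a drift measure.
import torch
import torch.nn.functional as F

@torch.no_grad()
def off_task_kl(base_logits, tuned_logits):
    """base_logits/tuned_logits: (batch, seq, vocab) logits produced by the
    base and post-trained models on identical off-task inputs."""
    logp_base = F.log_softmax(base_logits, dim=-1)
    logp_tuned = F.log_softmax(tuned_logits, dim=-1)
    kl = (logp_base.exp() * (logp_base - logp_tuned)).sum(dim=-1)
    return kl.mean()
```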
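The fourth bullet's two geometric claims can be checked directly: the cosine similarity between the two update directions (near zero if nearly orthogonal) and the loss along the straight line between the ES and GRPO solutions (flat if there is no barrier). A sketch assuming parameters flattened into NumPy vectors; `loss_fn` is a hypothetical task-loss evaluator:

```python
# Diagnostics for update-direction orthogonality and linear mode connectivity.
import numpy as np

def update_geometry(theta0, theta_es, theta_grpo, loss_fn, n_points=11):
    d_es, d_grpo = theta_es - theta0, theta_grpo - theta0
    cos = d_es @ d_grpo / (np.linalg.norm(d_es) * np.linalg.norm(d_grpo))
    # Evaluate the loss along the straight line between the two solutions.
    alphas = np.linspace(0.0, 1.0, n_points)
    barrier_losses = [loss_fn((1 - a) * theta_es + a * theta_grpo)
                      for a in alphas]
    return cos, barrier_losses  # cos near 0 and a flat loss curve match the paper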