Multi-objective Evolutionary Merging Enables Efficient Reasoning Models
arXiv cs.CL / 4/9/2026
Key Points
- The paper addresses the Long-to-Short (L2S) reasoning problem: preserving high accuracy while generating fewer tokens, thereby reducing the inference-time cost of long chain-of-thought reasoning.
- It introduces Evo-L2S, which reformulates L2S model merging as a multi-objective optimization problem and uses evolutionary model merging to explicitly optimize the accuracy–output-length trade-off, producing a Pareto front of merged models (a sketch of this kind of search loop follows the list).
- To make evolutionary search feasible for large language models, the method uses entropy-based subset sampling to cut the cost of fitness estimation (see the second sketch below).
- Experiments on reasoning benchmarks across 1.5B, 7B, and 14B model sizes show that Evo-L2S can cut reasoning trace lengths by more than 50% while maintaining or improving accuracy versus the original reasoning models.
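
The summary above does not spell out the search procedure, but the general shape of multi-objective evolutionary merging is well established. Below is a minimal Python sketch under assumed details: a candidate is a vector of per-layer interpolation coefficients between a long-CoT reasoning model and a shorter-output model, `evaluate` returns (accuracy, mean output length) on a benchmark subset, and survivors are selected by simple Pareto non-domination. The names and the selection scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import random
from typing import Callable, List, Tuple

# A candidate is one per-layer merge coefficient in [0, 1] per layer.
Candidate = List[float]
Fitness = Tuple[float, float]  # (accuracy, mean output length)

def dominates(a: Fitness, b: Fitness) -> bool:
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one. We maximize accuracy and minimize length."""
    acc_a, len_a = a
    acc_b, len_b = b
    return (acc_a >= acc_b and len_a <= len_b) and (acc_a > acc_b or len_a < len_b)

def pareto_front(pop: List[Candidate], fits: List[Fitness]):
    """Keep every candidate whose fitness no other candidate dominates."""
    front = []
    for i, fi in enumerate(fits):
        if not any(dominates(fj, fi) for j, fj in enumerate(fits) if j != i):
            front.append((pop[i], fi))
    return front

def mutate(cand: Candidate, sigma: float = 0.1) -> Candidate:
    """Gaussian perturbation of merge coefficients, clipped to [0, 1]."""
    return [min(1.0, max(0.0, c + random.gauss(0.0, sigma))) for c in cand]

def evolve(evaluate: Callable[[Candidate], Fitness],
           n_layers: int, pop_size: int = 16, generations: int = 20):
    """Toy multi-objective evolutionary search over merge coefficients."""
    pop = [[random.random() for _ in range(n_layers)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [evaluate(c) for c in pop]
        parents = [c for c, _ in pareto_front(pop, fits)]
        # Refill the population by mutating survivors of the Pareto front.
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    fits = [evaluate(c) for c in pop]
    return pareto_front(pop, fits)
```

In such a scheme, each candidate would be materialized before evaluation by interpolating the two parents' weight tensors layer by layer, e.g. `theta_l = (1 - c_l) * theta_short_l + c_l * theta_long_l`, and the returned Pareto front gives the user a menu of accuracy/length trade-offs rather than a single merged model.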
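The summary also does not describe the entropy criterion used to shrink fitness estimation. One plausible reading, sketched below purely as an assumption, is to score each benchmark item once by the base model's predictive entropy and then evaluate every merged candidate only on the most informative (highest-entropy) items; the function names and top-k selection here are hypothetical, not the paper's stated procedure.

```python
import math
from typing import Dict, List

def sequence_entropy(token_dists: List[List[float]]) -> float:
    """Mean per-token Shannon entropy of the next-token distributions a
    model produced while answering one benchmark item."""
    total = 0.0
    for dist in token_dists:
        total += -sum(p * math.log(p) for p in dist if p > 0.0)
    return total / max(len(token_dists), 1)

def select_eval_subset(item_token_dists: Dict[str, List[List[float]]],
                       k: int) -> List[str]:
    """Pick the k benchmark items where the base model is least certain.
    High-entropy items should discriminate best between merged candidates,
    so evaluating only on them approximates full-benchmark fitness at a
    fraction of the cost."""
    scored = sorted(item_token_dists.items(),
                    key=lambda kv: sequence_entropy(kv[1]),
                    reverse=True)
    return [item_id for item_id, _ in scored[:k]]
```

Under this reading, the entropy scores are computed once with the base model and reused for every candidate in every generation, so the per-candidate evaluation cost drops from the full benchmark size to k items.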