Multi-objective Evolutionary Merging Enables Efficient Reasoning Models

arXiv cs.CL / 4/9/2026


Key Points

  • The paper tackles the Long-to-Short (L2S) reasoning problem: preserving high accuracy while generating far fewer tokens, thereby reducing the inference-time cost of long chain-of-thought reasoning.
  • It introduces Evo-L2S, which reformulates L2S model merging as a multi-objective optimization problem and uses evolutionary model merging to explicitly optimize the accuracy–output-length trade-off via a Pareto front of merged models.
  • To make evolutionary search feasible for large language models, the method uses an entropy-based subset sampling approach to cut the overhead of fitness estimation.
  • Experiments on reasoning benchmarks across 1.5B, 7B, and 14B model sizes show that Evo-L2S can cut reasoning trace lengths by more than 50% while maintaining or improving accuracy versus the original reasoning models.

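The contrast the second bullet draws between scalarized merging and multi-objective search can be sketched in a few lines: instead of collapsing accuracy and length into one fixed weighted score, candidates are compared by Pareto dominance, and the search keeps every non-dominated merge. Everything below is an illustrative assumption, not the paper's implementation: `toy_fitness`, the single merge coefficient `alpha` (real evolutionary merging typically searches per-layer or per-task-vector coefficients), and the Gaussian mutation scheme are all stand-ins.

```python
import random

def dominates(a, b):
    """(accuracy, length) pair a dominates b if a is at least as
    accurate, at most as long, and strictly better on one objective."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep only the non-dominated (accuracy, length) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def toy_fitness(alpha):
    """Stand-in objectives: more weight on the short model trims tokens
    but costs some accuracy. In the real method, these numbers would
    come from evaluating the merged checkpoint on benchmark problems."""
    accuracy = 0.92 - 0.15 * alpha
    length = 4000 - 2500 * alpha
    return (accuracy, length)

def evolve(fitness, generations=20, pop_size=16, seed=0):
    """Toy evolutionary search over a merge coefficient alpha in [0, 1],
    maintaining an archive of Pareto-optimal candidates."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    archive = {}  # alpha -> (accuracy, length)
    for _ in range(generations):
        for a in pop:
            archive[a] = fitness(a)
        # keep only alphas whose objectives are Pareto-optimal
        front = pareto_front(list(archive.values()))
        survivors = [a for a, obj in archive.items() if obj in front]
        archive = {a: archive[a] for a in survivors}
        # mutate survivors to form the next population
        pop = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.1)))
               for _ in range(pop_size)]
    return archive
```

The output is a set of merges rather than one compromise model, which is what lets the method report an accuracy/length trade-off curve instead of a single operating point.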
Abstract

Reasoning models have demonstrated remarkable capabilities in solving complex problems by leveraging long chains of thought. However, this more deliberate reasoning comes with substantial computational overhead at inference time. The Long-to-Short (L2S) reasoning problem seeks to maintain high accuracy using fewer tokens, but current training-free model merging approaches rely on scalarized, fixed-hyperparameter arithmetic methods that are highly brittle and force suboptimal compromises. To address this gap, we introduce Evo-L2S, a novel framework that formulates L2S reasoning as a multi-objective optimization challenge. By leveraging evolutionary model merging, Evo-L2S explicitly optimizes the trade-off between accuracy and output length to produce a robust Pareto front of merged models. To make this search computationally tractable for large language models, we propose an entropy-based subset sampling technique that drastically reduces the overhead of fitness estimation. Comprehensive experiments across 1.5B, 7B, and 14B parameter scales on six mathematical reasoning benchmarks demonstrate that Evo-L2S can reduce the length of generated reasoning traces by over 50% while preserving, or even improving, the problem-solving accuracy of the original reasoning models.
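One plausible reading of the entropy-based subset sampling step is: score each benchmark problem by the entropy of its empirical solve rate, then evaluate merge candidates only on the highest-entropy problems, since problems that are always or never solved cannot separate candidates. The solve-rate criterion and the fixed budget `k` below are assumptions for illustration, not the paper's exact procedure.

```python
import math

def binary_entropy(p):
    """Entropy of a problem's empirical solve rate; maximal at p = 0.5,
    zero for problems that are always or never solved."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def sample_eval_subset(solve_rates, k):
    """Pick the k most informative problems (highest entropy) as the
    cheap fitness-estimation set; ties are broken by original index."""
    ranked = sorted(range(len(solve_rates)),
                    key=lambda i: (-binary_entropy(solve_rates[i]), i))
    return sorted(ranked[:k])
```

Because fitness must be estimated for every candidate in every generation, shrinking the evaluation set from the full benchmark to a small informative subset is what makes the evolutionary search tractable at the 7B and 14B scales the paper reports.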