AI Navigate

SAGE: Multi-Agent Self-Evolution for LLM Reasoning

arXiv cs.AI / 3/17/2026

📰 News · Models & Research

Key Points

  • SAGE introduces a closed-loop multi-agent framework where four roles—Challenger, Planner, Solver, and Critic—co-evolve from a shared LLM backbone using only a small seed set.
  • The Challenger generates progressively harder tasks, the Planner converts tasks into structured multi-step plans, the Solver executes the plan, and the Critic scores and filters outcomes to prevent curriculum drift and maintain signal quality.
  • The method delivers consistent gains on math and code-generation benchmarks, with reported improvements of 8.9% on LiveCodeBench and 10.7% on OlympiadBench for the Qwen-2.5-7B model.
  • By relying on self-training with verifiable rewards and external verifiers, SAGE reduces dependence on large labeled datasets while improving long-horizon reasoning stability.
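The four-role loop above can be sketched in code. This is a minimal illustrative skeleton, not the paper's implementation: the role names follow SAGE, but every function body, signature, and the difficulty heuristic here are assumptions made for clarity.

```python
# Hypothetical sketch of SAGE's Challenger -> Planner -> Solver -> Critic loop.
# All function bodies are toy stand-ins for LLM calls sharing one backbone.
from dataclasses import dataclass

@dataclass
class Task:
    question: str
    difficulty: int

def challenger(task: Task) -> Task:
    # Generates a progressively harder task (stubbed as a difficulty bump).
    return Task(question=task.question + " (harder)", difficulty=task.difficulty + 1)

def planner(task: Task) -> list[str]:
    # Converts the task into a structured multi-step plan.
    return [f"step {i + 1} for: {task.question}" for i in range(task.difficulty)]

def solver(plan: list[str]) -> str:
    # Follows the plan to produce a candidate answer.
    return f"answer after {len(plan)} steps"

def critic(task: Task, plan: list[str], min_steps: int = 2) -> bool:
    # Scores and filters generated questions/plans to prevent curriculum drift.
    return len(plan) >= min_steps

def self_evolve(seed: Task, rounds: int = 3) -> list[str]:
    # Closed loop: harder task -> plan -> filter -> solve; accepted outputs
    # would feed back into self-training in the real framework.
    accepted, task = [], seed
    for _ in range(rounds):
        task = challenger(task)
        plan = planner(task)
        if critic(task, plan):
            accepted.append(solver(plan))
    return accepted
```

In this toy version the Critic simply rejects plans that are too short; the paper's Critic additionally scores question quality, which is omitted here.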

Abstract

Reinforcement learning with verifiable rewards improves reasoning in large language models (LLMs), but many methods still rely on large human-labeled datasets. While self-play reduces this dependency, it often lacks explicit planning and strong quality control, limiting stability in long-horizon multi-step reasoning. We present SAGE (Self-evolving Agents for Generalized reasoning Evolution), a closed-loop framework in which four agents (Challenger, Planner, Solver, and Critic) co-evolve from a shared LLM backbone using only a small seed set. The Challenger continuously generates increasingly difficult tasks; the Planner converts each task into a structured multi-step plan; and the Solver follows the plan to produce an answer, whose correctness is determined by external verifiers. The Critic scores and filters both generated questions and plans to prevent curriculum drift and maintain training signal quality, enabling stable self-training. Across mathematics and code-generation benchmarks, SAGE delivers consistent gains across model scales, improving the Qwen-2.5-7B model by 8.9% on LiveCodeBench and 10.7% on OlympiadBench.
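The "verifiable rewards" idea in the abstract is that correctness is decided by an external checker rather than by the model itself. A minimal sketch, assuming a toy arithmetic verifier (the paper's actual verifiers, e.g. for code execution, are not specified here):

```python
# Illustrative verifiable-reward check: an external verifier, not the model,
# grades the Solver's answer. The arithmetic verifier below is a toy example.
def verify(question: str, answer: str) -> bool:
    # Strip trailing "=" / "?" and evaluate the arithmetic expression,
    # with builtins disabled to keep eval restricted to plain arithmetic.
    expr = question.rstrip("=? ").strip()
    try:
        return float(answer) == float(eval(expr, {"__builtins__": {}}))
    except Exception:
        return False

def reward(question: str, answer: str) -> float:
    # Binary verifiable reward: only verified answers earn training signal.
    return 1.0 if verify(question, answer) else 0.0
```

Only samples with reward 1.0 would be kept for self-training, which is what lets the framework reduce its dependence on human-labeled data.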