Discovering Novel LLM Experts via Task-Capability Coevolution

arXiv cs.AI / 4/17/2026


Key Points

  • The paper proposes AC/DC, a framework that uses open-ended coevolution of LLMs and tasks to discover increasingly novel skills in a single continuous run.
  • AC/DC evolves both components: it updates LLM populations through model merging and expands task diversity by generating natural-language tasks with synthetic data.
  • Experiments report that the resulting LLM archives achieve broader capability coverage than larger models on downstream benchmarks while using less GPU memory, without any explicit benchmark optimization.
  • The authors claim AC/DC’s coverage improves over time and that it performs better in multi-agent best-of-N selection, supporting coevolution as a new paradigm for LLM development.
  • The work frames coevolution as a way to accelerate continual diversity improvements by leveraging existing base models as stepping stones toward more capable models.
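The coevolution loop described in these points can be sketched in miniature. The following is a hypothetical toy, not the paper's algorithm: "models" are plain weight vectors merged by averaging, "tasks" are target vectors a model solves when its weights are close enough, and `coverage` is the fraction of archived tasks solved by at least one archived model. The acceptance rules (keep a merged model only if it solves a task no existing model solves; keep a new task only if it sits on the difficulty frontier) are illustrative assumptions standing in for AC/DC's actual model-merging and synthetic-task-generation steps.

```python
import random

random.seed(0)


def merge(parent_a, parent_b):
    """Toy stand-in for model merging: average two weight vectors."""
    return [(a + b) / 2 for a, b in zip(parent_a, parent_b)]


def solves(model, task, tol=0.35):
    """A toy model 'solves' a task if its weights are near the task target."""
    return max(abs(m - t) for m, t in zip(model, task)) < tol


def coverage(models, tasks):
    """Fraction of tasks solved by at least one model in the archive."""
    if not tasks:
        return 0.0
    return sum(any(solves(m, t) for m in models) for t in tasks) / len(tasks)


def coevolve(generations=30, dim=3):
    models = [[random.random() for _ in range(dim)] for _ in range(4)]
    tasks = [[random.random() for _ in range(dim)] for _ in range(4)]
    for _ in range(generations):
        # Model step: merge two random parents; archive the child only if
        # it solves some task that no current model solves (novel skill).
        child = merge(*random.sample(models, 2))
        if any(solves(child, t) and not any(solves(m, t) for m in models)
               for t in tasks):
            models.append(child)
        # Task step: propose a task; keep it if it is solvable by some but
        # not all models, so the difficulty frontier keeps moving.
        new_task = [random.random() for _ in range(dim)]
        n_solvers = sum(solves(m, new_task) for m in models)
        if 0 < n_solvers < len(models):
            tasks.append(new_task)
    return models, tasks


models, tasks = coevolve()
print(f"archive: {len(models)} models, {len(tasks)} tasks, "
      f"coverage={coverage(models, tasks):.2f}")
```

In this toy the archive only ever grows, mirroring the paper's claim that coverage improves over time in a single continuous run.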

Abstract

Frontier model developers aim to train models continually to possess emergent, diverse capabilities. To extend capabilities, the current pre-training and post-training paradigm requires manually starting training runs with static datasets or reward functions every time. Addressing this limitation, our work pursues the insight that open-endedness (via the coevolution of models and tasks) can discover models with increasingly novel skills in a single run. We introduce a new model development framework that extends coevolution to large language model (LLM) discovery, open-ended *Assessment Coevolving with Diverse Capabilities* (AC/DC). AC/DC evolves both LLMs via model merging and natural language tasks via synthetic data generation. AC/DC discovers growing archives of LLMs that surpass the capabilities of larger LLMs while taking up less GPU memory. In particular, our LLM populations achieve a broader Coverage of expertise than other curated models or baselines on downstream benchmarks, without *any* explicit benchmark optimization. Furthermore, AC/DC improves Coverage over time, continually innovates on tasks and models, and improves performance in multi-agent best-of-N selection. Our findings highlight the potential of coevolution as a means of discovering broader sets of capabilities from base LLMs. Overall, AC/DC brings us one step closer to a profoundly new paradigm of LLM development, where continual improvements to the diversity of model capabilities can be accelerated by leveraging existing models as stepping stones to increasingly powerful models.
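The abstract's mention of multi-agent best-of-N selection has a simple shape: query every model in the archive with the same prompt and keep the highest-scoring answer, so a diverse archive wins whenever *any* member happens to be the right expert. The sketch below is an illustrative assumption, not the paper's implementation: each "model" is just a function returning a candidate string, and `score` is a caller-supplied judge.

```python
def best_of_n(models, prompt, score):
    """Query each archived model and keep the highest-scoring answer."""
    answers = [m(prompt) for m in models]
    return max(answers, key=score)


# Toy archive: each 'model' is a function producing a tagged candidate.
archive = [lambda p, k=k: f"{p}-answer-{k}" for k in range(3)]

# A toy judge that prefers the highest trailing index.
best = best_of_n(archive, "task", score=lambda a: int(a.split("-")[-1]))
print(best)  # -> task-answer-2
```

The design point is that selection quality depends on archive diversity rather than on any single model's average performance, which is why broader Coverage helps here.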