EvoSelect: Data-Efficient LLM Evolution for Targeted Task Adaptation

arXiv cs.CL, April 30, 2026


Key Points

  • The paper addresses how to adapt large language models to targeted tasks efficiently when high-quality human-labeled data is expensive and hard to scale.
  • It critiques the standard iterative generate–train loop: synthetic candidates can be noisy, redundant, or misaligned with the target task distribution, which dilutes learning signals and can degrade performance.
  • EvoSelect introduces an iterative generate–select–train framework that adds a selection step before each model update to filter candidate training data (see the loop sketch after this list).
  • The method selects candidates by jointly modeling task alignment, estimated via optimal transport over proxy gradient representations, and diversity, via a diversification mechanism that improves coverage and reduces redundancy.
  • Experiments across multiple benchmarks show that EvoSelect improves adaptation efficacy over prior data-selection approaches with both weak and strong data generators.
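
For readers who want the paradigm in code, here is a minimal, self-contained Python sketch of the generate–select–train loop. All names here (ToyGenerator, select_data, train_on, evolve) are hypothetical stand-ins for illustration, not the authors' API; the selection step is left as a placeholder and sketched concretely after the abstract below.

```python
# Minimal sketch of the generate-select-train loop the paper refines.
# Every name below is a hypothetical stand-in, not the paper's API.
from typing import List

class ToyGenerator:
    """Stand-in for an external data generator (e.g., a stronger LLM)."""
    def sample(self, n: int) -> List[str]:
        return [f"synthetic example {i}" for i in range(n)]

def select_data(candidates: List[str], budget: int) -> List[str]:
    """Placeholder for EvoSelect-style alignment + diversity selection
    (a concrete sketch follows the abstract below)."""
    return candidates[:budget]

def train_on(model: dict, batch: List[str]) -> dict:
    """Placeholder fine-tuning step; just records how much data it saw."""
    model["seen"] = model.get("seen", 0) + len(batch)
    return model

def evolve(model: dict, generator: ToyGenerator, iterations: int = 3,
           n_candidates: int = 100, budget: int = 10) -> dict:
    for _ in range(iterations):
        candidates = generator.sample(n_candidates)   # 1. generate
        chosen = select_data(candidates, budget)      # 2. select (the new step)
        model = train_on(model, chosen)               # 3. train
    return model

model = evolve({}, ToyGenerator())
```

The point of the refinement is step 2: without it, the loop degenerates into the plain generate–train scheme the paper critiques.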

Abstract

Adapting large language models (LLMs) to a targeted task efficiently and effectively remains a fundamental challenge. Such adaptation often requires iteratively improving the model toward the targeted task, yet collecting high-quality human-labeled data to support this process is costly and difficult to scale. As a result, synthetic data generation has emerged as a flexible and scalable alternative. One straightforward approach is an iterative generation-training loop, in which candidate data are synthesized by an external generator, the model is updated on these data, and the process is repeated over iterations. However, generated samples can be noisy, highly redundant, or even misaligned with the targeted task distribution. Training indiscriminately on such data can dilute useful learning signals and even degrade model performance. To address this, we introduce a refined paradigm, namely an iterative generation-selection-training loop, which incorporates a selection step prior to model updates. Building on this paradigm, we propose EvoSelect, a data-efficient framework to evolve LLMs effectively. Given candidate samples produced by the data generator, EvoSelect selects training data by jointly modeling targeted-task alignment and diversity. We estimate task relevance through optimal transport with proxy gradient representations, which quantifies how well candidate samples align with the targeted task distribution. To mitigate redundancy, we incorporate a diversification mechanism that promotes coverage of complementary training samples. By interleaving alignment and diversification, EvoSelect enables progressive LLM evolution toward targeted tasks. Extensive experiments on various benchmarks demonstrate that, with either weak or strong data generators, EvoSelect consistently improves adaptation efficacy over existing data selection methods.
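
To make the selection step concrete, the following numpy sketch shows one plausible reading of the abstract's two ingredients: alignment scored with entropic (Sinkhorn) optimal transport, and redundancy controlled by a greedy max-min rule. Plain feature vectors stand in for the paper's proxy gradient representations, and every name and hyperparameter here (sinkhorn_plan, select_candidates, reg, lam) is an illustrative assumption rather than the authors' implementation.

```python
# Hedged sketch of OT-based alignment plus greedy diversification.
# Generic feature vectors stand in for proxy gradient representations.
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iters=200):
    """Entropic-OT plan between uniform distributions over rows/columns."""
    n, m = cost.shape
    K = np.exp(-cost / reg)                   # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m     # uniform marginals
    v = np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]        # rows sum to ~1/n at convergence

def select_candidates(cand_feats, target_feats, budget, lam=0.5):
    """Pick `budget` candidates, trading target alignment against redundancy."""
    # Pairwise squared-Euclidean costs, normalized so Sinkhorn stays stable.
    diff = cand_feats[:, None, :] - target_feats[None, :, :]
    cost = (diff ** 2).sum(-1)
    cost = cost / cost.max()
    plan = sinkhorn_plan(cost)
    # Alignment: negative average cost at which each candidate's mass moves.
    align = -(plan * cost).sum(axis=1) * len(cand_feats)
    align = (align - align.mean()) / (align.std() + 1e-8)  # z-score
    chosen = []
    for _ in range(budget):
        if not chosen:
            gains = align.copy()
        else:
            picked = cand_feats[chosen]
            # Diversity: distance to the nearest already-selected candidate.
            d = ((cand_feats[:, None, :] - picked[None, :, :]) ** 2).sum(-1).min(1)
            d = (d - d.mean()) / (d.std() + 1e-8)
            gains = lam * align + (1 - lam) * d
            gains[chosen] = -np.inf           # never re-pick a sample
        chosen.append(int(np.argmax(gains)))
    return chosen

# Toy usage: 200 candidates, 50 target-task examples, 16-dim features.
rng = np.random.default_rng(0)
cand = rng.normal(size=(200, 16))
targ = rng.normal(loc=0.5, size=(50, 16))
print(select_candidates(cand, targ, budget=20))
```

The lam knob trades alignment against diversity, and z-scoring both terms keeps their scales roughly comparable; this greedy interleaving of the two criteria is one simple way to realize the alternation the abstract describes.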