
Towards Next-Generation LLM Training: From the Data-Centric Perspective

arXiv cs.CL · March 17, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • LLM performance is tightly tied to data quality and management, and current practices rely on ad hoc data preparation with no scalable, reusable workflows.
  • The paper proposes a robust, agent-based automatic data preparation system to automate workflow construction and scalable data management.
  • It argues for a unified data–model interaction training system where data is dynamically selected, mixed, and reweighted during training to enable more efficient, adaptive utilization.
  • It discusses remaining challenges and outlines promising directions for future research and system development.
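To make the first proposal concrete: an agent-based data preparation system would compose reusable cleaning operators into a workflow instead of relying on one-off scripts. The sketch below is purely illustrative and not the paper's actual system; the operator names (`dedup`, `length_filter`) and the `plan_workflow` planner stub are hypothetical, with the stub standing in for an LLM agent that would select and order operators from a registry.

```python
from typing import Callable, List

# A workflow is an ordered list of reusable operators over a document list.
Operator = Callable[[List[str]], List[str]]

def dedup(docs: List[str]) -> List[str]:
    """Drop exact-duplicate documents, preserving first-seen order."""
    seen, out = set(), []
    for d in docs:
        if d not in seen:
            seen.add(d)
            out.append(d)
    return out

def length_filter(docs: List[str], min_chars: int = 10) -> List[str]:
    """Drop documents shorter than min_chars characters."""
    return [d for d in docs if len(d) >= min_chars]

def plan_workflow(goal: str) -> List[Operator]:
    # Stand-in for an agent: given a natural-language goal, it would
    # choose and order operators from a registry. Here it is hard-coded.
    return [dedup, lambda d: length_filter(d, min_chars=10)]

def run_workflow(docs: List[str], workflow: List[Operator]) -> List[str]:
    for op in workflow:
        docs = op(docs)
    return docs

corpus = ["short", "a sufficiently long document",
          "a sufficiently long document", "tiny"]
clean = run_workflow(corpus, plan_workflow("pretraining corpus cleanup"))
# clean == ["a sufficiently long document"]
```

The point of the abstraction is that workflows built this way are inspectable and reusable across corpora, rather than being buried in ad hoc scripts.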

Abstract

Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks and domains, with data playing a central role in enabling these advances. Despite this success, the preparation and effective utilization of the massive datasets required for LLM training remain major bottlenecks. In current practice, LLM training data is often constructed using ad hoc scripts, and there is still a lack of mature, agent-based data preparation systems that can automatically construct robust and reusable data workflows, thereby freeing data scientists from repetitive and error-prone engineering efforts. Moreover, once collected, datasets are often consumed largely in their entirety during training, without systematic mechanisms for data selection, mixture optimization, or reweighting. To address these limitations, we advocate two complementary research directions. First, we propose building a robust, agent-based automatic data preparation system that supports automated workflow construction and scalable data management. Second, we argue for a unified data-model interaction training system in which data is dynamically selected, mixed, and reweighted throughout the training process, enabling more efficient, adaptive, and performance-aware data utilization. Finally, we discuss the remaining challenges and outline promising directions for future research and system development.
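The second proposal, dynamic selection, mixing, and reweighting during training, can be sketched as maintaining per-domain sampling weights that are updated from training signals. The toy loop below is an assumption-laden illustration, not the paper's method: it uses a simple multiplicative-weights update that up-weights domains with higher loss (in the spirit of mixture-optimization approaches), with fixed stand-in loss values instead of real training feedback.

```python
import math
import random

def reweight(weights, losses, lr=0.1):
    """Multiplicative-weights step: domains with higher loss get more mass."""
    scaled = {d: w * math.exp(lr * losses[d]) for d, w in weights.items()}
    total = sum(scaled.values())
    return {d: s / total for d, s in scaled.items()}  # renormalize to a distribution

def sample_domain(weights, rng):
    """Pick the domain to draw the next batch from, per current weights."""
    return rng.choices(list(weights), weights=list(weights.values()), k=1)[0]

rng = random.Random(0)
domains = ["web", "code", "papers"]
weights = {d: 1.0 / len(domains) for d in domains}
# Stand-in per-domain losses; a real system would measure these during training.
losses = {"web": 1.0, "code": 2.0, "papers": 0.5}

for step in range(5):
    domain = sample_domain(weights, rng)
    # ... train on one batch drawn from `domain`, update `losses[domain]` ...
    weights = reweight(weights, losses)
```

After a few updates the mixture shifts toward the hardest domain, which is the basic mechanism a data–model interaction system would drive with live training signals rather than fixed losses.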