Rethinking Data Mixing from the Perspective of Large Language Models

arXiv cs.CL / 4/10/2026


Key Points

  • The paper argues that data mixing (domain sampling and weighting) is critical to LLM training and that poor strategies can noticeably hurt generalization.
  • It addresses open questions about how to define a “domain,” whether humans and models perceive domains consistently, and how domain weighting affects generalization.
  • The authors provide a theoretical framework linking gradient dynamics to domain distributions to explain how domains influence training behavior.
  • Based on the analysis, they introduce DoGraph, which treats data scheduling as a graph-constrained reweighting/optimization problem.
  • Experiments on GPT-2 variants across multiple scales show DoGraph delivers consistently competitive performance compared with existing approaches.
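The paper does not spell out DoGraph's algorithm in this summary, but the idea of graph-constrained domain reweighting can be sketched in a minimal, hypothetical form: treat domains as nodes of a similarity graph, keep sampling weights on the probability simplex, and let a graph-Laplacian penalty keep connected (similar) domains at similar weights. The adjacency matrix, loss signal, and update rule below are all illustrative assumptions, not the paper's actual method.

```python
import math

def reweight(domain_losses, adj, lam=0.5, lr=0.1, steps=200):
    """Illustrative sketch (not DoGraph itself): exponentiated-gradient
    updates on the simplex that upweight high-loss domains, with a
    graph-Laplacian term that smooths weights across similar domains."""
    n = len(domain_losses)
    w = [1.0 / n] * n                     # start from uniform mixing
    for _ in range(steps):
        # Laplacian action: (L w)_i = deg(i) * w_i - sum_j adj[i][j] * w_j
        Lw = [sum(adj[i]) * w[i] - sum(a * wj for a, wj in zip(adj[i], w))
              for i in range(n)]
        # ascend on per-domain loss, descend on the smoothness penalty
        g = [domain_losses[i] - lam * Lw[i] for i in range(n)]
        w = [wi * math.exp(lr * gi) for wi, gi in zip(w, g)]
        s = sum(w)
        w = [wi / s for wi in w]          # renormalize onto the simplex
    return w
```

With three chained domains and losses [1.0, 2.0, 3.0], the resulting weights stay on the simplex and shift toward the hardest domain while the graph term prevents a hard one-hot collapse; any practical scheduler would re-estimate the losses as training progresses.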

Abstract

A data mixing strategy is essential for large language model (LLM) training. Empirical evidence shows that an inappropriate strategy can significantly reduce generalization. Although recent methods have improved empirical performance, several fundamental questions remain open: what constitutes a domain, whether human and model perceptions of domains are aligned, and how domain weighting influences generalization. We address these questions by establishing formal connections between gradient dynamics and domain distributions, offering a theoretical framework that clarifies the role of domains in training dynamics. Building on this analysis, we introduce DoGraph, a reweighting framework that formulates data scheduling as a graph-constrained optimization problem. Extensive experiments on GPT-2 models of varying scales demonstrate that DoGraph consistently achieves competitive performance.