Synthetic Eggs in Many Baskets: The Impact of Synthetic Data Diversity on LLM Fine-Tuning

arXiv cs.CL / April 29, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies how the diversity of synthetic-data sources affects the behavior of LLMs during fine-tuning, emphasizing three dimensions: distribution collapse, adversarial robustness, and self-preference bias.
  • Fine-tuning on synthetic data drawn from multiple, diverse sources helps mitigate distribution collapse, keeping the model’s output distribution broader and the generated text more diverse.
  • The research finds that fine-tuning on either human or synthetic data can strip safety safeguards, but synthetic fine-tuning tends to yield higher-quality outputs, making the resulting models both more usable and more dangerous.
  • Fine-tuning is also shown to reduce self-preference bias, with human data providing the strongest reduction, followed by multi-source synthetic data.
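
The paper does not specify how output diversity is measured, but a common proxy for the kind of text diversity discussed in the first bullet is distinct-n (the fraction of unique n-grams across a set of generations). A minimal, hypothetical sketch:

```python
# Illustrative only: distinct-n as a proxy for output diversity.
# The paper's actual diversity metric is not specified here.

def distinct_n(texts, n):
    """Fraction of unique n-grams across a set of generated texts."""
    ngrams = []
    for text in texts:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# A collapsed model repeats phrasing; a diverse one does not.
collapsed = ["the answer is yes", "the answer is yes", "the answer is yes"]
diverse = ["certainly, that holds", "no, it fails here", "it depends on context"]

print(distinct_n(collapsed, 2))  # low: repeated bigrams
print(distinct_n(diverse, 2))    # high: mostly unique bigrams
```

Distribution collapse would show up as distinct-n falling across successive rounds of fine-tuning on a single synthetic source.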

Abstract

As synthetic data becomes widely used in language model development, understanding its impact on model behavior is crucial. This paper investigates the impact of synthetic-data source diversity on fine-tuned large language models. We focus on three key dimensions: distribution collapse, adversarial robustness, and self-preference bias. Our findings reveal that fine-tuning models on synthetic data from diverse sources can mitigate distribution collapse, preserving the breadth of the output distribution and the diversity of the output text. Furthermore, while both human and synthetic fine-tuning data can remove safeguards, we observe a tendency for higher output quality in the latter case, making outputs potentially both more usable and more dangerous. Finally, we also find evidence that fine-tuning reduces self-preference bias, with human data being the most effective, followed by multi-source synthetic data.