Efficient Embedding-based Synthetic Data Generation for Complex Reasoning Tasks

arXiv cs.AI / 3/25/2026


Key Points

  • The paper studies how synthetic data generated with LLM-based SDG can miss quality and diversity targets, analyzing the generated data in embedding space.
  • It finds a strong correlation between the local density of examples within an embedding neighborhood and prediction accuracy on examples drawn from that region.
  • Building on this insight, the authors propose a targeted embedding-based sampling pipeline that increases data diversity and better covers the distribution of complex reasoning tasks.
  • The approach is reported to consistently improve performance across multiple benchmarks while controlling the diversity and representativeness of the generated examples.

Abstract

Synthetic Data Generation (SDG), leveraging Large Language Models (LLMs), has recently been recognized and broadly adopted as an effective approach to improving the performance of smaller but more resource- and compute-efficient LLMs through fine-tuning. A key challenge in SDG is ensuring the quality and diversity of the generated data. In this paper, we analyze the diversity and distribution of generated data in the embedding space, and demonstrate a strong correlation between the density of examples within a specific neighborhood and the accuracy of predictions on examples drawn from that region. Building on this insight, we present a targeted pipeline for embedding-based sampling that enhances data diversity and consistently improves performance across several benchmarks.
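The paper's exact sampling procedure is not reproduced here, but the core idea it describes (estimate how densely populated each embedding neighborhood is, then steer further generation toward the sparse regions) can be sketched as follows. The k-nearest-neighbor density proxy, the function names, and the seed-selection strategy are illustrative assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def knn_density(embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Estimate local density per example as the inverse of the mean
    distance to its k nearest neighbors (larger = denser region).
    Brute-force pairwise distances; fine for small corpora."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)          # exclude self-distance
    knn_dists = np.sort(dists, axis=1)[:, :k]
    return 1.0 / (knn_dists.mean(axis=1) + 1e-12)

def select_sparse_seeds(embeddings: np.ndarray, budget: int, k: int = 5) -> np.ndarray:
    """Return indices of the `budget` examples lying in the sparsest
    neighborhoods, i.e. candidate seeds for generating more data there."""
    density = knn_density(embeddings, k=k)
    return np.argsort(density)[:budget]      # lowest-density first
```

A generation loop would then prompt the LLM with the selected seed examples to produce new samples in under-covered regions, re-embed, and repeat until coverage (or a data budget) is reached. The reported density-accuracy correlation is the motivation: models tend to predict less accurately on examples drawn from sparse neighborhoods, so filling those regions is where extra synthetic data should help most.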
