Synthetic Data for any Differentiable Target

arXiv cs.CL / 4/10/2026


Key Points

  • The paper introduces a reinforcement learning primitive called Dataset Policy Gradient (DPG) to optimize synthetic data generators so they produce datasets tailored to a chosen, differentiable target metric.
  • DPG uses higher-order gradients to compute data attribution and turns those attribution scores into policy-gradient rewards, closely approximating the otherwise intractable gradient for the generator.
  • When the generated data is used for supervised fine-tuning (SFT), the target language model is shown to improve on the selected differentiable metric, demonstrating controllable behavior via synthetic training.
  • The authors demonstrate concrete target-shaping outcomes, including forcing the model’s LM head weights to embed specific patterns (e.g., a QR code and the pattern “67”) and to have lower ℓ² norm.
  • They further show the generator can induce behaviors like rephrasing in a new language or producing a specific UUID even when those objectives are not present in the generator’s prompts, highlighting flexibility in controllable objectives.
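The core loop described above — score each synthetic example by how much one SFT step on it improves a differentiable target metric, then use those attribution scores as policy-gradient rewards for the generator — can be sketched in a toy setting. Everything below is illustrative and not the paper's code: the "target model" is a weight vector, the "generator" is a softmax policy over a fixed candidate pool, the metric is m(w) = -||w||², and attribution is first-order (metric gradient dotted with the SFT update).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative names, not the paper's implementation):
# target "model" is a weight vector w; one SFT step = gradient descent on
# squared error for an example (x, y); target metric m(w) = -||w||^2.
D, N = 4, 6
candidates = [(rng.normal(size=D), rng.normal()) for _ in range(N)]
lr_sft = 0.1

def attribution_reward(w, x, y):
    """First-order data attribution: change in m(w) caused by one SFT step on (x, y)."""
    g = (w @ x - y) * x          # grad of 0.5*(w.x - y)^2 w.r.t. w
    delta_w = -lr_sft * g        # the SFT update
    return (-2.0 * w) @ delta_w  # grad of m(w) = -||w||^2, dotted with the step

# Monte-Carlo estimate of each candidate's expected reward over random target models.
rewards = np.array([
    np.mean([attribution_reward(rng.normal(size=D), x, y) for _ in range(500)])
    for x, y in candidates
])

# Policy-gradient ascent on a softmax "generator" over the candidate pool.
logits = np.zeros(N)
for _ in range(100):
    p = np.exp(logits - logits.max()); p /= p.sum()
    logits += 1.0 * p * (rewards - p @ rewards)  # exact softmax policy gradient

best = int(np.argmax(logits))  # candidate whose SFT step most improves the metric
```

In this toy, the policy concentrates on the example whose SFT gradient most shrinks the target's weight norm; the real method replaces the linear model with a language model, the candidate pool with a generative policy over token sequences, and the first-order attribution with higher-order gradients.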

Abstract

What are the limits of controlling language models via synthetic training data? We develop a reinforcement learning (RL) primitive, the Dataset Policy Gradient (DPG), which can precisely optimize synthetic data generators to produce a dataset of targeted examples. When used for supervised fine-tuning (SFT) of a target model, these examples cause the target model to do well on a differentiable metric of our choice. Our approach achieves this by taking exact data attribution via higher-order gradients and using those scores as policy gradient rewards. We prove that this procedure closely approximates the true, intractable gradient for the synthetic data generator. To illustrate the potential of DPG, we show that, using only SFT on generated examples, we can cause the target model's LM head weights to (1) embed a QR code, (2) embed the pattern “67”, and (3) have lower ℓ² norm. We additionally show that we can cause the generator to (4) rephrase inputs in a new language and (5) produce a specific UUID, even though neither of these objectives is conveyed in the generator's input prompts. These findings suggest that DPG is a powerful and flexible technique for shaping model properties using only synthetic training examples.
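The weight-shaping targets in the abstract (embedding a QR code or a digit pattern, lowering ℓ² norm) all reduce to choosing a differentiable metric over the LM head weights. A minimal sketch of such a metric, with an invented stand-in pattern `P` (the paper's actual QR-code target and optimization path are not shown here; direct gradient ascent on the weights stands in for the full SFT-mediated pipeline):

```python
import numpy as np

# Hypothetical pattern target, in the spirit of the paper's QR-code metric:
# reward weights W for matching a fixed binary pattern P. P is a made-up
# checkerboard, not the paper's pattern.
P = (np.arange(64).reshape(8, 8) % 2).astype(float)

def pattern_metric(W):
    # Differentiable, maximized (at 0) exactly when W == P.
    return -np.sum((W - P) ** 2)

def pattern_metric_grad(W):
    # This gradient is what DPG would backpropagate through the SFT step;
    # here we just ascend it directly for illustration.
    return -2.0 * (W - P)

W = np.zeros((8, 8))
for _ in range(100):
    W += 0.1 * pattern_metric_grad(W)
```

After enough steps `W` converges to `P`, i.e., the pattern is literally written into the weights; the lower-ℓ² target is the same construction with `P = 0`.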