Synthetic Sandbox for Training Machine Learning Engineering Agents

arXiv cs.CL / 4/7/2026

Key Points

  • The paper argues that verifying machine learning engineering (MLE) agents is far more expensive than software engineering (SWE) agents because MLE verification requires running full ML pipelines (preprocessing, training, evaluation) at each rollout step.
  • It identifies the size of the sandbox data as the main bottleneck and proposes SandMLE, a multi-agent framework that creates diverse but micro-scale synthetic MLE environments from a small set of seed tasks.
  • By constraining each synthetic task to only 50–200 training samples while retaining real-world structural complexity, SandMLE makes trajectory-wise on-policy reinforcement learning feasible in the MLE domain (see the sketch after this list).
  • Experiments show SandMLE cuts execution time by more than 13× and outperforms supervised fine-tuning (SFT) baselines on MLE-bench-lite across Qwen3-8B, 14B, and 30B-A3B, with relative medal-rate gains of 20.3%–66.9%.
  • The resulting policy also generalizes to unseen agentic scaffolds, improving HumanRank by up to 32.4% on MLE-Dojo.
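
To make the micro-scale idea concrete, here is a minimal Python sketch of a single sandbox verification step under the paper's stated constraint of 50–200 training samples per task. It is purely illustrative: `make_micro_task` and `verify_rollout` are hypothetical names, not SandMLE's actual API, and the toy classification task stands in for the structurally richer synthetic environments the paper describes.

```python
# Illustrative only: a micro-scale MLE verification step in the spirit of
# SandMLE's 50-200-sample sandbox tasks. All names here are hypothetical.
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def make_micro_task(n_samples=150, n_features=20, seed=0):
    """Generate one micro-scale synthetic task (50-200 samples total)."""
    X, y = make_classification(n_samples=n_samples, n_features=n_features,
                               n_informative=8, random_state=seed)
    return train_test_split(X, y, test_size=0.3, random_state=seed)


def verify_rollout(X_tr, X_te, y_tr, y_te):
    """Run the full pipeline (fit + evaluate) and return a scalar reward.

    On a micro-scale dataset this completes in milliseconds, which is what
    makes running it at every rollout step affordable for on-policy RL.
    """
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))


start = time.perf_counter()
reward = verify_rollout(*make_micro_task())
print(f"reward={reward:.3f} in {time.perf_counter() - start:.3f}s")
```

The point of the sketch is the cost profile, not the model: with only ~150 samples, the full preprocess-train-evaluate loop that makes MLE verification expensive at real-world scale becomes cheap enough to serve as a per-trajectory reward signal.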

Abstract

As large language model agents advance beyond software engineering (SWE) tasks toward machine learning engineering (MLE), verifying agent behavior becomes orders of magnitude more expensive: while SWE tasks can be verified via fast-executing unit tests, MLE verification requires running full ML pipelines -- data preprocessing, model training, and metric evaluation -- on large datasets at each rollout step, rendering trajectory-wise on-policy reinforcement learning (RL) prohibitively slow. Existing approaches retreat to supervised fine-tuning (SFT) or offline proxy rewards, sacrificing the exploration and generalization benefits of on-policy RL. We observe that sandbox data size is the primary source of this bottleneck. Based on this insight, we introduce SandMLE, a multi-agent framework that generates diverse, verifiable synthetic MLE environments from a small number of seed tasks, preserving the structural and technical complexity of real-world problems while constraining datasets to micro-scale (each task is paired with only 50–200 training samples). Through extensive experiments, we show that SandMLE reduces execution time by more than 13×, enabling large-scale, on-policy trajectory-wise RL for the first time in the MLE domain. On MLE-bench-lite, SandMLE yields significant gains over SFT baselines across Qwen3-8B, 14B, and 30B-A3B, with relative medal rate improvements ranging from 20.3% to 66.9%. Furthermore, the trained policy generalizes across unseen agentic scaffolds, achieving up to 32.4% better HumanRank score on MLE-Dojo.
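
For readers unfamiliar with how "relative" improvement figures like these are computed, the snippet below spells out the standard formula; the input values are hypothetical placeholders, not numbers from the paper.

```python
# Hypothetical illustration of relative improvement; not the paper's data.
def relative_improvement(treated: float, baseline: float) -> float:
    """Relative gain = (treated - baseline) / baseline."""
    return (treated - baseline) / baseline

# e.g., a hypothetical SFT medal rate of 0.10 lifted to 0.12 by on-policy RL
print(f"{relative_improvement(0.12, 0.10):.1%}")  # -> 20.0%
```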