DIVE: Scaling Diversity in Agentic Task Synthesis for Generalizable Tool Use

arXiv cs.AI · March 13, 2026

Key Points

  • The paper argues that insufficient diversity in synthesized agentic tasks causes brittleness in generalization for post-training tool-using LLMs.
  • DIVE inverts the synthesis process by executing diverse, real-world tools first and deriving tasks only from the resulting traces, providing grounding by construction.
  • It scales diversity along two axes—tool-pool coverage and per-task toolset variety—and uses an evidence-collection loop to derive richer multi-step tool-use patterns across 373 tools in five domains.
  • Empirically, training Qwen3-8B on DIVE data improves the average score across 9 out-of-domain benchmarks by 22 points and beats the strongest 8B baseline by 68 points; in controlled comparisons, scaling diversity outperforms scaling quantity even with 4x less data.
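The inverted, evidence-first pipeline described above can be sketched roughly as follows. This is a minimal illustration assuming a generic tool interface; all names here (`Tool`, `sample_toolset`, `collect_evidence`, `derive_task`) are hypothetical and not the paper's actual API.

```python
import random
from dataclasses import dataclass

@dataclass
class Tool:
    """Stand-in for a real, executable tool from the pool (axis 1: pool coverage)."""
    name: str

    def execute(self, args: dict) -> dict:
        # A real implementation would call the tool; here we record a trace entry.
        return {"tool": self.name, "args": args, "result": f"output of {self.name}"}

def sample_toolset(pool: list, k: int) -> list:
    """Axis 2: vary the toolset assigned to each task."""
    return random.sample(pool, k)

def collect_evidence(toolset: list, steps: int) -> list:
    """Evidence collection: execute tools FIRST and record the multi-step trace."""
    trace = []
    for i in range(steps):
        tool = random.choice(toolset)
        trace.append(tool.execute({"step": i}))
    return trace

def derive_task(trace: list) -> dict:
    """Task derivation: reverse-derive a task strictly entailed by the trace,
    so the task is executable and verifiable by construction."""
    return {
        "tools_used": sorted({entry["tool"] for entry in trace}),
        "num_steps": len(trace),
        "verifiable_answer": trace[-1]["result"],
    }

random.seed(0)  # for a reproducible sketch
pool = [Tool(f"tool_{i}") for i in range(10)]
task = derive_task(collect_evidence(sample_toolset(pool, k=3), steps=4))
```

Because the task is derived from an already-executed trace rather than written up front, grounding and verifiability come for free, which is the property that lets diversity be scaled aggressively along both axes.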

Abstract

Recent work synthesizes agentic tasks for post-training tool-using LLMs, yet robust generalization under shifts in tasks and toolsets remains an open challenge. We trace this brittleness to insufficient diversity in synthesized tasks. Scaling diversity is difficult because training requires tasks to remain executable and verifiable, while generalization demands coverage of diverse tool types, toolset combinations, and heterogeneous tool-use patterns. We propose DIVE, an evidence-driven recipe that inverts synthesis order, executing diverse, real-world tools first and reverse-deriving tasks strictly entailed by the resulting traces, thereby providing grounding by construction. DIVE scales structural diversity along two controllable axes, tool-pool coverage and per-task toolset variety, and an Evidence Collection–Task Derivation loop further induces rich multi-step tool-use patterns across 373 tools in five domains. Training Qwen3-8B on DIVE data (48k SFT + 3.2k RL) improves by +22 average points across 9 OOD benchmarks and outperforms the strongest 8B baseline by +68. Remarkably, controlled scaling analysis reveals that diversity scaling consistently outperforms quantity scaling for OOD generalization, even with 4x less data.