AI Navigate

Scaling Generalist Data-Analytic Agents

arXiv cs.CL / 3/16/2026


Key Points

  • DataMind is proposed as a scalable data synthesis and agent-training recipe to build generalist data-analytic agents, addressing the limitations of open-source models on diverse data formats and long-horizon reasoning.
  • The approach includes a fine-grained task taxonomy with recursive easy-to-hard composition, a knowledge-augmented trajectory sampling strategy with model- and rule-based filtering, a memory-efficient multi-turn rollout framework, and a training objective that mixes supervised fine-tuning and reinforcement learning.
  • Trained on DataMind-12K, DataMind-14B achieves state-of-the-art performance across multiple data-analysis benchmarks, outperforming proprietary baselines such as DeepSeek-V3.1 and GPT-5, while DataMind-7B is the top-performing open-source model.
  • The authors plan to release DataMind-12K along with the DataMind-7B and DataMind-14B models to support future research and evaluation.
  • They also offer empirical insights from exploratory trials to guide agentic training for researchers and practitioners.
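The recursive easy-to-hard composition mentioned above can be pictured as building harder multi-step queries by chaining easier sub-tasks. The paper does not publish its exact mechanism; the sketch below is a hypothetical illustration, with made-up atomic task names and a simple "extend by one step" composition rule.

```python
import random

# Hypothetical atomic data-analysis sub-tasks (illustrative, not from the paper).
ATOMIC_TASKS = ["filter rows", "group and aggregate", "join tables", "plot a trend"]

def compose_task(depth: int, rng: random.Random) -> list[str]:
    """Return a composed task as a list of sub-steps; depth controls difficulty.

    A depth-1 task is a single atomic step; a depth-k task recursively
    extends a depth-(k-1) task with one more atomic step.
    """
    if depth <= 1:
        return [rng.choice(ATOMIC_TASKS)]
    return compose_task(depth - 1, rng) + compose_task(1, rng)

rng = random.Random(0)
print(compose_task(3, rng))  # a 3-step composed task
```

In this toy version, difficulty grows linearly with recursion depth; the actual recipe presumably composes tasks along its fine-grained taxonomy rather than a flat list.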

Abstract

Data-analytic agents are emerging as a key catalyst for automated scientific discovery and for the vision of Innovating AI. Current approaches, however, rely heavily on prompt engineering over proprietary models, while open-source models struggle with the diverse-format, large-scale data files and long-horizon, multi-step reasoning that real-world analytics demands. This paper introduces DataMind, a scalable data synthesis and agent-training recipe designed to build generalist data-analytic agents. DataMind tackles three key challenges in building open-source data-analytic agents: insufficient data resources, improper training strategies, and unstable code-based multi-turn rollout. Concretely, DataMind applies 1) a fine-grained task taxonomy and a recursive easy-to-hard task composition mechanism to increase the diversity and difficulty of synthesized queries; 2) a knowledge-augmented trajectory sampling strategy followed by model-based and rule-based filtering; 3) a dynamically adjustable training objective combining both SFT and RL losses; and 4) a memory-frugal and stable code-based multi-turn rollout framework. Built on DataMind, we curate DataMind-12K, a high-quality trajectory set spanning diverse domains, task categories, and data file formats for data-analytic tasks. Trained on DataMind-12K, our DataMind-14B achieves state-of-the-art performance with an average score of 71.16% on multiple data analysis benchmarks, outperforming the strongest proprietary baselines DeepSeek-V3.1 and GPT-5. Our DataMind-7B also performs best among all open-source models with a score of 68.10%. We also incorporate empirical insights gained from our exploratory trials into the analysis experiments, aiming to provide actionable guidance about agentic training for the community. We will release DataMind-12K, DataMind-7B, and DataMind-14B for the community's future research.
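The "dynamically adjustable training objective combining both SFT and RL losses" is described only at this level of detail. As a minimal sketch, one plausible form is an annealed convex combination that shifts weight from imitation (SFT) to reinforcement learning over training; the schedule and loss forms below are assumptions, not the paper's actual objective.

```python
def combined_loss(sft_loss: float, rl_loss: float, step: int, total_steps: int) -> float:
    """Blend SFT and RL losses with a weight that anneals over training.

    alpha starts at 1.0 (pure SFT) and decays linearly to 0.0 (pure RL).
    This linear schedule is a hypothetical choice for illustration.
    """
    alpha = max(0.0, 1.0 - step / total_steps)
    return alpha * sft_loss + (1.0 - alpha) * rl_loss

# Early in training the objective is dominated by the SFT term,
# late in training by the RL term.
print(combined_loss(2.0, 1.0, step=0, total_steps=100))    # pure SFT weight
print(combined_loss(2.0, 1.0, step=100, total_steps=100))  # pure RL weight
```

A "dynamically adjustable" objective could equally well gate the weight on a validation signal rather than the step count; the fixed linear anneal is simply the easiest concrete instance to write down.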