DataClaw: A Process-Oriented Agent Benchmark for Exploratory Real-World Data Analysis

arXiv cs.AI / 5/5/2026

Key Points

  • The paper introduces DataClaw, a new process-oriented benchmark designed to evaluate autonomous agents on exploratory real-world data analysis in underexplored, noisy environments.
  • DataClaw includes about 2.06 million records across enterprise, industry, and policy domains, preserving native data noise to better reflect real conditions.
  • The benchmark provides 492 cross-domain tasks based on think-tank consulting scenarios, with intermediate milestone annotations that enable evaluation of an agent’s reasoning process rather than only final answer accuracy.
  • Experiments with eight advanced LLMs indicate that current agents are not yet reliable in this setting: seven models score below 50% overall accuracy, while process-level analysis reveals partial progress hidden behind wrong answers and distinct exploration strategies across models.
  • Overall, DataClaw is positioned as a diagnostic testbed with fewer data constraints to probe the capability limits of autonomous data-analysis agents.
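The paper does not specify DataClaw's exact scoring formula, but the idea behind milestone annotations can be sketched as follows. This is a hypothetical illustration (the function name and the milestone representation are assumptions, not the benchmark's API): an agent can earn partial credit for intermediate milestones even when its final answer is wrong.

```python
# Hypothetical sketch of process-level evaluation with intermediate milestones.
# DataClaw's actual metric is not described in this summary; this only
# illustrates why process scoring can surface progress hidden behind a
# wrong final answer.

def process_score(milestones_hit: list[bool], final_correct: bool) -> dict:
    """Return the final-answer score alongside a milestone completion rate."""
    progress = sum(milestones_hit) / len(milestones_hit) if milestones_hit else 0.0
    return {"final": 1.0 if final_correct else 0.0, "progress": progress}

# An agent that completes 3 of 4 annotated milestones but misses the answer
# scores 0 on answer accuracy yet 0.75 on process progress:
print(process_score([True, True, True, False], final_correct=False))
# → {'final': 0.0, 'progress': 0.75}
```

Under such a scheme, two agents with identical (low) final accuracy can still be distinguished by where along the analysis pipeline their reasoning breaks down, which is the diagnostic use the authors emphasize.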

Abstract

Evaluating autonomous data analysis agents requires testing their ability to perform exploratory analysis in underexplored data environments. However, many existing benchmarks emphasize final answer accuracy in prior-guided data settings and provide limited support for reasoning process evaluation. We introduce DataClaw, a process-oriented benchmark for exploratory real-world data analysis. DataClaw contains approximately 2.06 million real-world records across enterprise, industry, and policy domains, with native data noise preserved. It further includes 492 cross-domain tasks derived from think-tank consulting scenarios, each annotated with intermediate milestones for process-level evaluation. These annotations allow DataClaw to measure how far an agent progresses and where its reasoning breaks down. Experiments with eight advanced LLMs show that current agents remain far from reliable in this setting, with seven models achieving below 50% overall accuracy. Process analysis further reveals partial progress hidden behind wrong answers and distinct exploration strategies across models. Overall, DataClaw provides a less data-constrained diagnostic testbed for probing the capability boundaries of autonomous data-analysis agents.