Rewarding the Scientific Process: Process-Level Reward Modeling for Agentic Data Analysis

arXiv cs.CL / 4/28/2026


Key Points

  • The paper finds that existing general-domain Process Reward Models (PRMs) fail to reliably supervise agentic data analysis: they often miss “silent errors” and wrongly penalize necessary exploration steps.
  • To address this, it introduces DataPRM, a new environment-aware generative process reward model that actively probes intermediate execution states and detects silent failures.
  • DataPRM uses a reflection-aware ternary reward strategy to separate correctable grounding errors from irrecoverable mistakes, improving alignment with real execution quality (a minimal sketch of this idea appears after this list).
  • The authors build a large training set (8K+ high-quality instances) with diversity-driven trajectory generation and knowledge-augmented step-level annotation, and show performance gains for downstream policy LLMs.
  • Integrating DataPRM into reinforcement learning yields substantial gains over outcome-reward baselines (e.g., 78.73% on DABench and 64.84% on TableBench), indicating that process-level reward supervision is effective for data analysis agents.
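
As a rough illustration of the reflection-aware ternary reward, here is a minimal sketch; the verdict labels, reward values, and function names below are assumptions for exposition, not the paper's actual implementation:

```python
from enum import Enum

class StepVerdict(Enum):
    CORRECT = "correct"                  # step advances the analysis as intended
    GROUNDING_ERROR = "grounding_error"  # correctable, e.g. a wrong column name the agent can fix on reflection
    IRRECOVERABLE = "irrecoverable"      # mistake that corrupts downstream results and cannot be recovered

# Hypothetical reward values: correctable mistakes sit between full credit
# and full penalty, so necessary trial-and-error exploration is not punished
# as harshly as a genuinely fatal step.
TERNARY_REWARD = {
    StepVerdict.CORRECT: 1.0,
    StepVerdict.GROUNDING_ERROR: 0.0,
    StepVerdict.IRRECOVERABLE: -1.0,
}

def step_reward(verdict: StepVerdict) -> float:
    """Map a per-step verdict to a scalar process reward."""
    return TERNARY_REWARD[verdict]
```

The key design point is the middle tier: a binary correct/incorrect scheme would, per the paper's diagnosis, mistake recoverable exploration for failure.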

Abstract

Process Reward Models (PRMs) have achieved remarkable success in augmenting the reasoning capabilities of Large Language Models (LLMs) within static domains such as mathematics. However, their potential in dynamic data analysis tasks remains underexplored. In this work, we first present an empirical study revealing that general-domain PRMs struggle to supervise data analysis agents. Specifically, they fail to detect silent errors (logical flaws that yield incorrect results without triggering interpreter exceptions) and erroneously penalize exploratory actions, mistaking necessary trial-and-error exploration for grounding failures. To bridge this gap, we introduce DataPRM, a novel environment-aware generative process reward model that (1) can serve as an active verifier, autonomously interacting with the environment to probe intermediate execution states and uncover silent errors, and (2) employs a reflection-aware ternary reward strategy that distinguishes between correctable grounding errors and irrecoverable mistakes. We design a scalable pipeline to construct over 8K high-quality training instances for DataPRM via diversity-driven trajectory generation and knowledge-augmented step-level annotation. Experimental results demonstrate that DataPRM improves downstream policy LLMs by 7.21% on ScienceAgentBench and 11.28% on DABStep using Best-of-N inference. Notably, with only 4B parameters, DataPRM outperforms strong baselines and exhibits robust generalizability across diverse Test-Time Scaling strategies. Furthermore, integrating DataPRM into Reinforcement Learning yields substantial gains over outcome-reward baselines, achieving 78.73% on DABench and 64.84% on TableBench, validating the effectiveness of process reward supervision. Code is available at https://github.com/zjunlp/DataMind.
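
To make the active-verifier idea concrete, the sketch below shows the kind of probes such a verifier might execute in the agent's own interpreter after a step completes; the specific checks, the `probe_silent_errors` name, and the pandas-centric framing are illustrative assumptions, not DataPRM's actual probing logic:

```python
import pandas as pd

def probe_silent_errors(df: pd.DataFrame) -> list[str]:
    """Hypothetical probes over an intermediate DataFrame: none of these
    conditions raises an interpreter exception, yet each usually signals
    a silently wrong result."""
    findings = []
    if df.empty:
        findings.append("empty result: over-strict filter or bad join key?")
    all_nan = [c for c in df.columns if df[c].isna().all()]
    if all_nan:
        findings.append(f"columns entirely NaN {all_nan}: misaligned merge?")
    dupes = df.columns[df.columns.duplicated()].tolist()
    if dupes:
        findings.append(f"duplicated columns {dupes}: repeated merge?")
    return findings

# Usage: after the agent executes a step that produced `df`, the verifier
# runs the probes and folds any findings into its process-level judgment.
```

This captures why environment interaction matters: such defects are invisible to a reward model that only reads the agent's text trace, since the step "succeeds" from the interpreter's point of view.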
