SciVisAgentBench: A Benchmark for Evaluating Scientific Data Analysis and Visualization Agents

arXiv cs.AI / 4/1/2026


Key Points

  • The paper introduces SciVisAgentBench, a new benchmark aimed at evaluating agentic scientific data analysis and visualization (SciVis) systems in realistic multi-step settings.
  • The benchmark uses a structured taxonomy across four dimensions—application domain, data type, complexity level, and visualization operation—and currently includes 108 expert-crafted cases.
  • It proposes a multimodal, outcome-centric evaluation pipeline that combines LLM-based judging with deterministic components such as image metrics, code checkers, rule-based verifiers, and case-specific evaluators.
  • A validity study with 12 SciVis experts measures agreement between human and LLM judgments to support more reliable assessment.
  • The authors evaluate representative SciVis agents and general-purpose coding agents to establish baseline performance and identify capability gaps, and position the benchmark as a living, extensible resource for tracking ongoing progress.
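The hybrid scoring idea in the third key point — an LLM judge combined with deterministic components — can be sketched as a weighted aggregate. This is a minimal illustration, not the benchmark's actual scoring code: the class and function names, the components, and the weights are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of an outcome-centric hybrid scorer: deterministic
# checks (an image metric and a rule-based verifier) complement an
# LLM-based judge. All names and weights here are illustrative.

@dataclass
class CaseResult:
    llm_judge_score: float  # in [0, 1], from a multimodal LLM judge
    image_metric: float     # e.g. a normalized image-similarity score
    rules_passed: int       # rule-based verifier checks passed
    rules_total: int        # total rule-based checks for this case

def hybrid_score(r: CaseResult, w_llm: float = 0.5,
                 w_img: float = 0.25, w_rules: float = 0.25) -> float:
    """Weighted aggregate of LLM-judged and deterministic components."""
    rule_frac = r.rules_passed / r.rules_total if r.rules_total else 1.0
    return (w_llm * r.llm_judge_score
            + w_img * r.image_metric
            + w_rules * rule_frac)

# Example: a strong judge score, but one of four rule checks failed.
result = CaseResult(llm_judge_score=0.9, image_metric=0.8,
                    rules_passed=3, rules_total=4)
print(round(hybrid_score(result), 4))  # 0.5*0.9 + 0.25*0.8 + 0.25*0.75 = 0.8375
```

In practice a case-specific evaluator could also act as a hard gate (score 0 on failure) rather than a weighted term; the paper's actual aggregation scheme is not specified in this summary.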

Abstract

Recent advances in large language models (LLMs) have enabled agentic systems that translate natural language intent into executable scientific visualization (SciVis) tasks. Despite rapid progress, the community lacks a principled and reproducible benchmark for evaluating these emerging SciVis agents in realistic, multi-step analysis settings. We present SciVisAgentBench, a comprehensive and extensible benchmark for evaluating scientific data analysis and visualization agents. Our benchmark is grounded in a structured taxonomy spanning four dimensions: application domain, data type, complexity level, and visualization operation. It currently comprises 108 expert-crafted cases covering diverse SciVis scenarios. To enable reliable assessment, we introduce a multimodal outcome-centric evaluation pipeline that combines LLM-based judging with deterministic evaluators, including image-based metrics, code checkers, rule-based verifiers, and case-specific evaluators. We also conduct a validity study with 12 SciVis experts to examine the agreement between human and LLM judges. Using this framework, we evaluate representative SciVis agents and general-purpose coding agents to establish initial baselines and reveal capability gaps. SciVisAgentBench is designed as a living benchmark to support systematic comparison, diagnose failure modes, and drive progress in agentic SciVis. The benchmark is available at https://scivisagentbench.github.io/.
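The validity study mentioned above compares human and LLM judges on the same cases. One standard way to quantify such agreement is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only — the labels and data are made up, and the paper does not state which agreement statistic it uses.

```python
# Cohen's kappa for two raters (e.g. a human expert and an LLM judge)
# labeling the same items. Assumes the raters do not agree purely by
# chance on every label (p_e < 1), so the denominator is nonzero.

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    labels = set(a) | set(b)
    # Expected chance agreement from each rater's marginal frequencies.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy pass/fail verdicts from a human expert and an LLM judge.
human = ["pass", "pass", "fail", "pass", "fail", "pass"]
llm   = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(human, llm), 3))  # (5/6 - 1/2) / (1 - 1/2) = 0.667
```

A kappa near 1 indicates the LLM judge tracks human judgment closely; values near 0 mean agreement is no better than chance, which is the kind of gap a validity study is designed to surface.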