AutoResearchBench: Benchmarking AI Agents on Complex Scientific Literature Discovery

arXiv cs.AI / 4/29/2026


Key Points

  • AutoResearchBench is introduced as a new benchmark specifically for evaluating AI agents’ ability to discover relevant scientific literature autonomously.
  • The benchmark includes two task types—Deep Research (progressively locating a target paper) and Wide Research (collecting a comprehensive set of papers that meet given conditions).
  • The authors argue AutoResearchBench is distinct from prior agentic web-browsing benchmarks because it is research- and concept-focused, requires fine-grained use of detailed information, and is open-ended with an unknown number of valid papers.
  • Results show even the strongest LLM-based agents perform poorly on these tasks, reaching only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research, with many other strong baselines falling below 5%, highlighting the benchmark's difficulty.
  • The dataset, evaluation pipeline, and code are publicly released to support future research on autonomous scientific discovery.

Abstract

Autonomous scientific research has advanced significantly thanks to the development of AI agents. One key step in this process is finding the right scientific literature, whether to explore existing knowledge for a research problem or to acquire evidence for verifying assumptions and supporting claims. To assess AI agents' capability in driving this process, we present AutoResearchBench, a dedicated benchmark for autonomous scientific literature discovery. AutoResearchBench consists of two complementary task types: (1) Deep Research, which requires tracking down a specific target paper through a progressive, multi-step probing process, and (2) Wide Research, which requires comprehensively collecting a set of papers satisfying given conditions. Compared to previous benchmarks on agentic web browsing, AutoResearchBench is distinguished along three dimensions: it is research-oriented, calling for in-depth comprehension of scientific concepts; literature-focused, demanding fine-grained utilization of detailed information; and open-ended, involving an unknown number of qualified papers and thus requiring deliberate reasoning and search throughout. These properties make AutoResearchBench uniquely suited for evaluating autonomous research capabilities, and extraordinarily challenging. Even the most powerful LLMs, despite having largely conquered general agentic web-browsing benchmarks such as BrowseComp, achieve only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research, while many other strong baselines fall below 5%. We publicly release the dataset, evaluation pipeline, and code at https://github.com/CherYou/AutoResearchBench to facilitate future research in this direction.
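
The two reported metrics are straightforward once predictions and gold answers are expressed as paper identifiers: Deep Research is scored by exact-match accuracy on the target paper, and Wide Research by intersection-over-union (IoU) between the returned set and the gold set. The paper summary above does not specify the released pipeline's normalization or matching rules, so the sketch below is only a hypothetical illustration of both metrics, assuming papers are identified by canonical arXiv IDs.

```python
from typing import List, Set

def deep_research_accuracy(predictions: List[str], targets: List[str]) -> float:
    """Fraction of Deep Research tasks where the returned paper ID matches the target exactly."""
    if not targets:
        return 0.0
    correct = sum(pred == target for pred, target in zip(predictions, targets))
    return correct / len(targets)

def wide_research_iou(predicted: Set[str], gold: Set[str]) -> float:
    """Intersection-over-union between the retrieved paper set and the gold set for one task."""
    union = predicted | gold
    if not union:
        return 1.0  # nothing to retrieve and nothing returned: treat as a perfect match
    return len(predicted & gold) / len(union)

# Hypothetical example: the agent returns 3 papers, 2 of which appear in a 5-paper gold set.
pred = {"2301.00001", "2302.00002", "2303.00003"}
gold = {"2301.00001", "2302.00002", "2304.00004", "2305.00005", "2306.00006"}
print(f"Wide Research IoU = {wide_research_iou(pred, gold):.2f}")  # 2 / 6 = 0.33
```

Under this formulation an agent is penalized equally for missing qualified papers and for returning spurious ones, which is what makes the open-ended Wide Research setting difficult: stopping the search early lowers recall, while padding the answer set lowers precision, and both reduce IoU.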