
DEER: A Benchmark for Evaluating Deep Research Agents on Expert Report Generation

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • DEER is a newly proposed benchmark designed to evaluate expert-level deep research reports generated by large language models, addressing the multifaceted challenges of report quality assessment.
  • The benchmark incorporates an expert-developed taxonomy with 7 dimensions and 25 subdimensions, operationalized through 101 fine-grained rubric items, and provides Expert Evaluation Guidance to assist LLM-based judging (a minimal scoring sketch follows this list).
  • DEER includes a claim verification architecture that checks both cited and uncited claims and quantifies the quality of evidence used in reports.
  • Experimental results reveal that current deep research systems can produce structurally sound reports with external citations, but they still fall short of fully satisfying expert-level user requests and achieving logical completeness.
  • This benchmark not only enables performance comparisons but also offers interpretable diagnostics to reveal system strengths and limitations, guiding future improvements in deep research agents.
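
To make the rubric structure concrete, the sketch below shows one way scores from 101 rubric items could be rolled up into the 7 dimension-level scores. The paper's actual aggregation and judge protocol are not reproduced here; this is a minimal illustration assuming the LLM judge returns a per-item score in [0, 1] and dimension scores are unweighted means, and the names `RubricItem` and `aggregate_scores` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    dimension: str     # one of the 7 top-level dimensions
    subdimension: str  # one of the 25 subdimensions
    description: str   # what the LLM judge is asked to check
    score: float       # judge verdict for this item, assumed to be in [0, 1]

def aggregate_scores(items: list[RubricItem]) -> dict[str, float]:
    """Average rubric-item scores per dimension (simple unweighted mean)."""
    per_dim: dict[str, list[float]] = {}
    for item in items:
        per_dim.setdefault(item.dimension, []).append(item.score)
    return {dim: sum(scores) / len(scores) for dim, scores in per_dim.items()}
```

A weighted scheme (e.g., per-subdimension weights set by the expert taxonomy) would drop in at the averaging step; the unweighted mean is used here only to keep the example short.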

Computer Science > Computation and Language

arXiv:2512.17776 (cs)
[Submitted on 19 Dec 2025 (v1), last revised 10 Mar 2026 (this version, v4)]

Title: DEER: A Benchmark for Evaluating Deep Research Agents on Expert Report Generation

Abstract: Recent advances in large language models have enabled deep research systems that generate expert-level reports through multi-step reasoning and evidence-based synthesis. However, evaluating such reports remains challenging: report quality is multifaceted, making it difficult to determine what to assess and by what criteria; LLM-based judges may miss errors that require domain expertise to identify; and because deep research relies on retrieved evidence, report-wide claim verification is also necessary. To address these issues, we propose DEER, a benchmark for evaluating expert-level deep research reports. DEER systematizes evaluation criteria with an expert-developed taxonomy (7 dimensions, 25 subdimensions) operationalized as 101 fine-grained rubric items. We also provide task-specific Expert Evaluation Guidance to support LLM-based judging. Alongside rubric-based assessment, we propose a claim verification architecture that verifies both cited and uncited claims and quantifies evidence quality. Experiments show that while current deep research systems can produce structurally plausible reports that cite external evidence, there is room for improvement in fulfilling expert-level user requests and achieving logical completeness. Beyond simple performance comparisons, DEER makes system strengths and limitations interpretable and provides diagnostic signals for improvement.
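
The report-wide claim verification described in the abstract can be pictured as a loop over extracted claims, where cited claims are checked against their own citations and uncited claims against freshly retrieved evidence. The sketch below is only an illustration under those assumptions; the paper's actual pipeline, prompts, and metrics are not reproduced here, and `Claim`, `retrieve`, `judge`, and `support_rate` are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    cited_sources: list[str] = field(default_factory=list)  # sources cited for this claim, if any

def verify_report(claims: list[Claim], retrieve, judge) -> dict[str, float]:
    """Check every claim in a report and return simple evidence-quality metrics.

    retrieve(text) -> list[str]: fetches candidate evidence for uncited claims.
    judge(text, evidence) -> str: returns "SUPPORTED" or "UNSUPPORTED".
    Both are hypothetical callables standing in for the benchmark's components.
    """
    supported = 0
    for claim in claims:
        # Cited claims are checked against their own citations;
        # uncited claims fall back to fresh retrieval.
        evidence = claim.cited_sources or retrieve(claim.text)
        if judge(claim.text, evidence) == "SUPPORTED":
            supported += 1
    n = max(len(claims), 1)
    return {"support_rate": supported / n, "num_claims": float(len(claims))}
```

A fuller version would also grade how strong each piece of supporting evidence is (the abstract's "quantifies evidence quality"), for example by having the judge return a graded label rather than a binary verdict.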
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2512.17776 [cs.CL]
  (or arXiv:2512.17776v4 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2512.17776

Submission history

From: Janghoon Han
[v1] Fri, 19 Dec 2025 16:46:20 UTC (1,691 KB)
[v2] Fri, 16 Jan 2026 15:01:24 UTC (1,817 KB)
[v3] Tue, 3 Feb 2026 08:21:32 UTC (2,192 KB)
[v4] Tue, 10 Mar 2026 08:29:27 UTC (2,192 KB)