AI Navigate

One-Eval: An Agentic System for Automated and Traceable LLM Evaluation

arXiv cs.CL / 3/11/2026

Tools & Practical Usage

Key Points

  • One-Eval is an automated, agentic evaluation system designed to handle large language model (LLM) evaluation workflows by converting natural language evaluation requests into executable and traceable processes.
  • The system integrates three main components: NL2Bench for structuring evaluation intents and personalized benchmark planning, BenchResolve for automatic dataset acquisition and schema normalization, and Metrics & Reporting for task-aware metric selection and comprehensive reporting (a rough sketch of this pipeline follows the list).
  • One-Eval supports human-in-the-loop checkpoints for review, editing, and rollback, preserving evidence trails to facilitate debugging and auditability.
  • Experiments demonstrate that One-Eval can perform diverse end-to-end evaluations with minimal manual user effort, improving efficiency and reproducibility in industrial LLM deployment.
  • The framework is open-source and publicly available, promoting adoption and further development within the community.
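
Taken together, the three components describe a linear pipeline from a natural-language request to a report: NL2Bench structures the intent and plans benchmarks, BenchResolve acquires and normalizes the data, and Metrics & Reporting runs the evaluation and summarizes it. The sketch below is only an illustration of that flow; every name in it (EvalRequest, plan_benchmarks, resolve_benchmark, run_and_report) is hypothetical and not taken from the One-Eval paper or codebase.

```python
# Illustrative sketch only: EvalRequest, plan_benchmarks, resolve_benchmark and
# run_and_report are hypothetical names, not One-Eval's actual API.
from dataclasses import dataclass, field


@dataclass
class EvalRequest:
    """Structured form of a natural-language evaluation request (NL2Bench input)."""
    raw_text: str
    tasks: list[str] = field(default_factory=list)
    constraints: dict = field(default_factory=dict)


def plan_benchmarks(request: EvalRequest) -> list[str]:
    """NL2Bench stage: turn the structured intent into a personalized benchmark plan."""
    # A real planner would consult an LLM and a benchmark registry; this is a stub.
    return ["gsm8k"] if "math" in request.raw_text.lower() else ["mmlu"]


def resolve_benchmark(name: str) -> list[dict]:
    """BenchResolve stage: acquire the dataset and normalize it to a common schema."""
    # Placeholder records; a real system would download the data and map its fields.
    return [{"id": f"{name}-0", "input": "...", "reference": "..."}]


def run_and_report(benchmarks: list[str]) -> dict:
    """Metrics & Reporting stage: run the model, pick task-aware metrics, report."""
    report = {}
    for name in benchmarks:
        samples = resolve_benchmark(name)
        # Model inference and metric computation would happen here.
        report[name] = {"num_samples": len(samples), "metric": "accuracy", "score": None}
    return report


if __name__ == "__main__":
    request = EvalRequest(raw_text="Evaluate my model's math reasoning")
    print(run_and_report(plan_benchmarks(request)))
```

The paper's human-in-the-loop checkpoints would sit between these stages, letting a reviewer edit the benchmark plan or roll back a step before execution continues.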

arXiv:2603.09821 (cs)
[Submitted on 10 Mar 2026]

Title: One-Eval: An Agentic System for Automated and Traceable LLM Evaluation

Authors: Chengyu Shen and 10 other authors
Abstract: Reliable evaluation is essential for developing and deploying large language models, yet in practice it often requires substantial manual effort: practitioners must identify appropriate benchmarks, reproduce heterogeneous evaluation codebases, configure dataset schema mappings, and interpret aggregated metrics. To address these challenges, we present One-Eval, an agentic evaluation system that converts natural-language evaluation requests into executable, traceable, and customizable evaluation workflows. One-Eval integrates (i) NL2Bench for intent structuring and personalized benchmark planning, (ii) BenchResolve for benchmark resolution, automatic dataset acquisition, and schema normalization to ensure executability, and (iii) Metrics & Reporting for task-aware metric selection and decision-oriented reporting beyond scalar scores. The system further incorporates human-in-the-loop checkpoints for review, editing, and rollback, while preserving sample evidence trails for debugging and auditability. Experiments show that One-Eval can execute end-to-end evaluations from diverse natural-language requests with minimal user effort, supporting more efficient and reproducible evaluation in industrial settings. Our framework is publicly available at this https URL.
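
The schema-mapping burden the abstract mentions is what BenchResolve's normalization step addresses: heterogeneous dataset records are rewritten into one canonical format so downstream metrics can run unchanged. The snippet below is a minimal sketch of that idea under assumed field names; CANONICAL_FIELDS, FIELD_MAPS, and normalize_record are illustrative, not the schema the paper actually uses.

```python
# Hypothetical canonical schema and per-dataset field mappings; the paper does
# not specify its actual format, so these names are assumptions for illustration.
CANONICAL_FIELDS = ("id", "input", "choices", "reference")

FIELD_MAPS = {
    "gsm8k": {"question": "input", "answer": "reference"},
    "mmlu": {"question": "input", "choices": "choices", "answer": "reference"},
}


def normalize_record(dataset: str, record: dict) -> dict:
    """Rewrite one raw dataset record into the canonical evaluation schema."""
    normalized = {f: None for f in CANONICAL_FIELDS}
    for src, canon in FIELD_MAPS[dataset].items():
        normalized[canon] = record.get(src)
    normalized["id"] = record.get("id", normalized["id"])
    # Keeping the raw record alongside the normalized one preserves a per-sample
    # evidence trail, which is what makes later debugging and audits possible.
    normalized["_raw"] = record
    return normalized


print(normalize_record("gsm8k", {"id": 0, "question": "What is 2 + 2?", "answer": "4"}))
```
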
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.09821 [cs.CL]
  (or arXiv:2603.09821v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09821

Submission history

From: Chengyu Shen
[v1] Tue, 10 Mar 2026 15:45:51 UTC (1,910 KB)