AI Navigate

CyberThreat-Eval: Can Large Language Models Automate Real-World Threat Research?

arXiv cs.CL / March 11, 2026

Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • CyberThreat-Eval is a new benchmark dataset derived from the real-world three-stage workflow of Cyber Threat Intelligence (CTI) analysts, covering the triage, deep search, and threat-intelligence drafting phases (see the sketch after this list).
  • Unlike existing benchmarks, CyberThreat-Eval uses analyst-centric metrics such as factual accuracy, content quality, and operational costs instead of relying on multiple-choice questions or lexical similarity.
  • Evaluation on this benchmark shows that current large language models (LLMs) lack the nuanced expertise that threat research requires and often cannot reliably distinguish correct from incorrect information.
  • To overcome these limitations, the proposed workflow integrates human expert feedback and external ground-truth databases, enabling iterative model improvement through human-in-the-loop refinement.
  • The CyberThreat-Eval dataset and code are publicly available on GitHub and HuggingFace, facilitating further research into automating practical CTI tasks using LLMs.
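To make the three-stage workflow concrete, here is a minimal Python sketch of a triage → deep search → TI-drafting pipeline of the kind the benchmark mirrors. Every name in it (OsintItem, triage, deep_search, draft_report, and the llm/search callables) is a hypothetical illustration, not the paper's actual code or API.

```python
from dataclasses import dataclass

# Hypothetical skeleton of a triage -> deep search -> TI drafting pipeline.
# None of these names come from the paper; they only illustrate the stages.

@dataclass
class OsintItem:
    source: str
    text: str

def triage(items: list[OsintItem], llm) -> list[OsintItem]:
    """Stage 1: filter a raw OSINT stream down to threat-relevant items."""
    kept = []
    for item in items:
        verdict = llm(f"Is this report security-relevant? Answer yes or no.\n{item.text}")
        if verdict.strip().lower().startswith("yes"):
            kept.append(item)
    return kept

def deep_search(item: OsintItem, llm, search) -> list[str]:
    """Stage 2: gather corroborating context (IOCs, actor names, CVEs)."""
    queries = llm(f"List search queries to investigate this lead:\n{item.text}")
    return [doc for q in queries.splitlines() if q for doc in search(q)]

def draft_report(item: OsintItem, context: list[str], llm) -> str:
    """Stage 3: draft a CTI report grounded in the gathered evidence."""
    prompt = ("Draft a threat-intelligence report.\n"
              f"Lead: {item.text}\nEvidence:\n" + "\n".join(context))
    return llm(prompt)
```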


arXiv:2603.09452 (cs)
[Submitted on 10 Mar 2026]

Title: CyberThreat-Eval: Can Large Language Models Automate Real-World Threat Research?

Abstract: Analyzing Open Source Intelligence (OSINT) from large volumes of data is critical for drafting and publishing comprehensive Cyber Threat Intelligence (CTI) reports. This process usually follows a three-stage workflow: triage, deep search, and threat-intelligence (TI) drafting. While Large Language Models (LLMs) offer a promising route toward automation, existing benchmarks have limitations. They often consist of tasks that do not reflect real-world analyst workflows; human analysts, for example, rarely receive tasks as multiple-choice questions. They also tend to rely on model-centric metrics that emphasize lexical overlap rather than the actionable, detailed insights security analysts need, and they typically fail to cover the complete three-stage workflow. To address these issues, we introduce CyberThreat-Eval, a benchmark collected from the daily CTI workflow of a world-leading company. This expert-annotated benchmark assesses LLMs on practical tasks across all three stages, using analyst-centric metrics that measure factual accuracy, content quality, and operational cost. Our evaluation with this benchmark reveals important limitations of current LLMs: for example, they often lack the nuanced expertise required to handle complex details and struggle to distinguish correct from incorrect information. To address these challenges, the CTI workflow incorporates both external ground-truth databases and human expert knowledge; TRA allows human experts to iteratively provide feedback for continuous improvement. The code is available on GitHub and Hugging Face.
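To make "analyst-centric metrics" concrete: one plausible reading of factual accuracy is precision and recall of the indicators a drafted report contains, measured against expert-annotated ground truth. The sketch below is an illustration under that assumption, not the paper's scoring code; the ioc_prf name and the exact-string matching rule are invented here.

```python
def ioc_prf(predicted: set[str], gold: set[str]) -> dict[str, float]:
    """Score extracted indicators (IOCs) against expert-annotated ground
    truth. Exact string matching is assumed; the paper's actual
    normalization and scoring rules may differ."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: a draft that recovered two of three annotated indicators.
print(ioc_prf({"evil.example.com", "198.51.100.7"},
              {"evil.example.com", "198.51.100.7", "CVE-2026-0001"}))
```

A metric like this rewards drafts that are both complete (recall) and free of fabricated indicators (precision), which matches the paper's emphasis on factual accuracy over lexical overlap.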
Subjects: Cryptography and Security (cs.CR); Computation and Language (cs.CL)
Cite as: arXiv:2603.09452 [cs.CR]
  (or arXiv:2603.09452v1 [cs.CR] for this version)
  https://doi.org/10.48550/arXiv.2603.09452
Journal reference: Transactions on Machine Learning Research (2025), ISSN 2835-8856

Submission history

From: Xiangsen Chen [view email]
[v1] Tue, 10 Mar 2026 10:04:12 UTC (1,560 KB)