ArgBench: Benchmarking LLMs on Computational Argumentation Tasks

arXiv cs.CL / 4/21/2026

Key Points

  • The paper introduces ArgBench, the first standardized benchmark for evaluating LLM-based approaches to computational argumentation, consolidating 33 datasets from prior work into a unified format.
  • Using ArgBench, the authors assess five LLM families on 46 computational argumentation tasks spanning argument mining, perspective assessment, argument quality evaluation, argument reasoning, and argument generation.
  • The study performs a systematic analysis of what drives performance, including the impact of few-shot prompting examples, reasoning steps, model size, and training-related skills.
  • Overall, ArgBench is positioned as a reusable evaluation resource to measure how well LLMs develop and generalize argumentation capabilities for practical and safety-oriented applications.

Abstract

Argumentation skills form an essential toolkit for large language models (LLMs). These skills are crucial in various use cases, including self-reflection, collaborative debating to surface diverse answers, and countering hate speech. In this paper, we create the first benchmark for a standardized evaluation of LLM-based approaches to computational argumentation, encompassing 33 datasets from previous work in a unified format. Using the benchmark, we evaluate the generalizability of five LLM families across 46 computational argumentation tasks that cover mining arguments, assessing perspectives, assessing argument quality, reasoning about arguments, and generating arguments. We then conduct an extensive, systematic analysis of how few-shot examples, reasoning steps, model size, and training skills contribute to LLM performance on these tasks.
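
The evaluation protocol is prompt-based, so a few-shot run over one benchmark task boils down to assembling an instruction, k in-context examples, and a test instance, then scoring the model's answer. The sketch below illustrates this for a stance-classification-style task; the `Example` dataclass, the prompt template, the label set, and the `llm` callable are all hypothetical placeholders, not ArgBench's actual data format or prompts.

```python
# Minimal sketch of a few-shot evaluation loop for one argumentation task
# (stance classification). All names, labels, and the llm() stub are
# illustrative assumptions, not the benchmark's actual interface.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    text: str   # argument or claim to classify
    label: str  # gold label, e.g. "support" or "attack"


def build_prompt(instruction: str, shots: List[Example], query: str) -> str:
    """Assemble an instruction, k in-context examples, and the test instance."""
    lines = [instruction, ""]
    for ex in shots:
        lines.append(f"Argument: {ex.text}\nStance: {ex.label}\n")
    lines.append(f"Argument: {query}\nStance:")
    return "\n".join(lines)


def evaluate(llm: Callable[[str], str],
             instruction: str,
             shots: List[Example],
             test_set: List[Example]) -> float:
    """Accuracy of the model's predicted label against the gold labels."""
    correct = 0
    for ex in test_set:
        prediction = llm(build_prompt(instruction, shots, ex.text)).strip().lower()
        correct += int(prediction.startswith(ex.label.lower()))
    return correct / len(test_set) if test_set else 0.0


if __name__ == "__main__":
    # Stub model for illustration; replace with a real LLM call.
    dummy_llm = lambda prompt: "support"
    shots = [Example("Solar power cuts emissions.", "support"),
             Example("Wind farms harm local bird populations.", "attack")]
    tests = [Example("Renewables lower long-term energy costs.", "support")]
    print(evaluate(dummy_llm,
                   "Classify the stance of each argument toward renewable energy.",
                   shots, tests))
```

Varying the number of shots, or adding an intermediate reasoning field to each in-context example, would mirror the few-shot and reasoning-step ablations the paper reports; the exact templates would need to follow the benchmark's released task definitions.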