AI Navigate

Argumentation for Explainable and Globally Contestable Decision Support with LLMs

arXiv cs.AI / 3/17/2026


Key Points

  • ArgEval shifts from instance-specific reasoning to structured evaluation of general decision options, building an option ontology and a general argumentation framework (AF) for each option.
  • The approach yields explainable recommendations for specific cases while allowing global contestability through modification of the shared AFs, addressing the opacity and unpredictability of LLMs in high-stakes domains.
  • The framework maps task-specific decision spaces into AFs that can be instantiated for case-level guidance and updated to reflect new evidence or preferences, enabling iterative improvement.
  • Evaluation on glioblastoma treatment shows alignment with clinical practice and improved explainability, suggesting broader applicability in decision support.
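To make the two-level structure concrete, here is a minimal sketch of an option ontology in which each decision option carries its own general AF, and a case is handled by instantiating that AF against case facts. All option and argument names (`surgery`, `resectable`, etc.) are invented for illustration and are not taken from the paper; the actual glioblastoma ontology is far richer.

```python
# Hypothetical option ontology: one general AF per decision option,
# shared across all cases. Arguments and attacks are placeholders.
option_ontology = {
    "surgery":      {"args": {"resectable", "high_surgical_risk"},
                     "attacks": {("high_surgical_risk", "resectable")}},
    "radiotherapy": {"args": {"standard_of_care", "prior_radiation"},
                     "attacks": {("prior_radiation", "standard_of_care")}},
}

def instantiate(option, case_facts):
    """Build a case-level AF by keeping only arguments whose premise
    holds for this case (membership in case_facts stands in for a
    real precondition check)."""
    af = option_ontology[option]
    active = {a for a in af["args"] if case_facts.get(a, False)}
    attacks = {(x, y) for (x, y) in af["attacks"]
               if x in active and y in active}
    return active, attacks

# A case where the tumour is resectable and surgical risk is low:
# the risk-based attacker drops out of the instantiated AF.
case = {"resectable": True, "high_surgical_risk": False}
print(instantiate("surgery", case))  # ({'resectable'}, set())
```

Because the AFs live in the ontology rather than in any single case, editing one AF (global contestation) changes the guidance produced for every future case instantiated from it.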

Abstract

Large language models (LLMs) exhibit strong general capabilities, but their deployment in high-stakes domains is hindered by their opacity and unpredictability. Recent work has taken meaningful steps towards addressing these issues by augmenting LLMs with post-hoc reasoning based on computational argumentation, providing faithful explanations and enabling users to contest incorrect decisions. However, this paradigm is limited to pre-defined binary choices and only supports local contestation for specific instances, leaving the underlying decision logic unchanged and prone to repeated mistakes. In this paper, we introduce ArgEval, a framework that shifts from instance-specific reasoning to structured evaluation of general decision options. Rather than mining arguments solely for individual cases, ArgEval systematically maps task-specific decision spaces, builds corresponding option ontologies, and constructs general argumentation frameworks (AFs) for each option. These frameworks can then be instantiated to provide explainable recommendations for specific cases while still supporting global contestability through modification of the shared AFs. We investigate the effectiveness of ArgEval on treatment recommendation for glioblastoma, an aggressive brain tumour, and show that it can produce explainable guidance aligned with clinical practice.
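The abstract's notion of recommendations and contestation can be illustrated with standard Dung-style AF semantics, which the paper builds on. The sketch below computes the grounded extension (the least fixed point of the characteristic function) and shows how removing an attack from the shared AF flips which arguments are accepted. The argument names are hypothetical placeholders, not content from the paper.

```python
# Minimal Dung-style AF with grounded semantics. Arguments and attacks
# are illustrative placeholders, not taken from ArgEval.

def grounded_extension(args, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is
    attacked by some member of S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in s)
                      for b in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# Shared AF for one hypothetical option: the recommendation is attacked
# by a contraindication, which case-specific evidence in turn defeats.
args = {"recommend", "contraindication", "case_evidence"}
attacks = {("contraindication", "recommend"),
           ("case_evidence", "contraindication")}

print(sorted(grounded_extension(args, attacks)))
# ['case_evidence', 'recommend'] -- the recommendation is accepted

# Global contestation: deleting the evidence attack from the shared AF
# changes the outcome for every case later instantiated from it.
attacks.discard(("case_evidence", "contraindication"))
print(sorted(grounded_extension(args, attacks)))
# ['case_evidence', 'contraindication'] -- the recommendation is rejected
```

The key contrast with purely local contestation is that the edit is made once, to the shared framework, rather than per instance, so the same mistake cannot recur on the next case.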