AI Navigate

Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us

arXiv cs.AI / 3/18/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes Argumentative Human-AI Decision-Making by combining computational argumentation with large language models to enable interactive, contestable reasoning rather than opaque justification.
  • It identifies three core components: argumentation framework mining, argumentation framework synthesis, and argumentative reasoning to support dialectical human-AI decision processes.
  • The authors argue that this approach fosters transparency, trust, and human-aware AI in high-stakes domains by allowing decisions to be contested and revised through dialogue.
  • This paradigm envisions AI agents that reason with humans, instead of making decisions for them, potentially altering workflows for engineers, designers, product managers, and other roles.

Abstract

Computational argumentation offers formal frameworks for transparent, verifiable reasoning but has traditionally been limited by its reliance on domain-specific information and extensive feature engineering. In contrast, LLMs excel at processing unstructured text, yet their opaque nature makes their reasoning difficult to evaluate and trust. We argue that the convergence of these fields will lay the foundation for a new paradigm: Argumentative Human-AI Decision-Making. We analyze how the synergy of argumentation framework mining, argumentation framework synthesis, and argumentative reasoning enables agents that do not just justify decisions, but engage in dialectical processes where decisions are contestable and revisable -- reasoning with humans rather than for them. This convergence of computational argumentation and LLMs is essential for human-aware, trustworthy AI in high-stakes domains.
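The abstract's core objects are argumentation frameworks: arguments connected by an attack relation, from which acceptable sets of arguments are computed and can be contested. A minimal sketch of this idea in the style of Dung's abstract argumentation, computing the grounded (most skeptical) extension as the least fixed point of the defense function; the framework and names here are illustrative, not taken from the paper:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension: the least fixed point of
    F(S) = {a | every attacker of a is itself attacked by some member of S}."""
    # Map each argument to the set of arguments that attack it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # An argument is defended by S if all its attackers are counter-attacked by S.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Toy example: c attacks b, b attacks a.  c is unattacked, so it is accepted;
# c defeats b, which in turn reinstates a.  Grounded extension: {a, c}.
af_arguments = {"a", "b", "c"}
af_attacks = {("c", "b"), ("b", "a")}
print(sorted(grounded_extension(af_arguments, af_attacks)))  # ['a', 'c']
```

In a dialectical setting of the kind the paper envisions, a human could contest this outcome by adding a new argument that attacks `c`; rerunning the computation then yields a revised extension, which is the sense in which decisions are contestable and revisable.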