Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us
arXiv cs.AI / 3/18/2026
Key Points
- The paper proposes Argumentative Human-AI Decision-Making by combining computational argumentation with large language models to enable interactive, contestable reasoning rather than opaque justification.
- It identifies three core components: argumentation framework mining, argumentation framework synthesis, and argumentative reasoning to support dialectical human-AI decision processes.
- The authors argue this approach fosters transparency, trust, and human-aware AI in high-stakes domains by allowing decisions to be contested and revised through dialogue.
- This paradigm envisions AI agents that reason with humans, instead of making decisions for them, potentially altering workflows for engineers, designers, product managers, and other roles.
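The argumentative reasoning component builds on abstract argumentation in the style of Dung (1995), where arguments and the attacks between them form a directed graph and acceptability is computed over that graph. A minimal sketch of the grounded semantics, using a hypothetical loan-decision scenario (the arguments and attacks below are illustrative, not taken from the paper):

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the least fixpoint of the characteristic function
    F(S) = {a | every attacker of a is attacked by some member of S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(candidate, s):
        # candidate is defended if each of its attackers is counter-attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[candidate])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Hypothetical dialogue: a = "approve the loan", b = "income too low",
# c = "the income figure is outdated". c attacks b, and b attacks a.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))  # → ['a', 'c']
```

Because `c` defeats the objection `b`, the original argument `a` is reinstated; this is the kind of revision-through-dialogue the key points describe, where a human contesting one step can change the accepted outcome.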