CoAX: Cognitive-Oriented Attribution eXplanation User Model of Human Understanding of AI Explanations
arXiv cs.AI / 5/1/2026
Key Points
- The paper investigates why Explainable AI (XAI) explanations often fail to improve users' actual understanding and decision-making despite steady advances in explanation methods.
- It focuses on cognitively grounded reasoning over structured (tabular) data, comparing how users reason under different explanation conditions (none, feature importance, and feature attribution) in a forward-simulation decision task.
- Researchers collected human reasoning strategies via a formative user study and human decisions via a summative user study to ground the evaluation.
- Using cognitive modeling, the authors implement the processes underlying each strategy and find that the resulting cognitive models match human decisions more closely than machine-learning baselines used as proxies (see the first sketch after this list).
- They show how the fitted cognitive model can generate testable hypotheses and reduce reliance on expensive human-subject experiments (see the second sketch below), supporting future improvements to XAI usability and interpretability.
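To make the comparison concrete, here is a minimal, hypothetical sketch of a forward-simulation evaluation: a hand-coded "attribution heuristic" stands in for one of the paper's cognitive strategies, and a logistic-regression model serves as the machine-learning baseline proxy. The data, the `attribution_heuristic` strategy, and the simulated human decisions are all illustrative assumptions, not the paper's actual task, models, or results.

```python
# Hedged sketch (not the paper's implementation) of a forward-simulation
# evaluation: which predictor better matches what humans decide?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy tabular task: each trial pairs feature values with per-feature attributions.
n_trials, n_features = 200, 4
features = rng.normal(size=(n_trials, n_features))
attributions = features * rng.normal(1.0, 0.2, size=n_features)  # stand-in for SHAP-like scores

def attribution_heuristic(attr, threshold=0.0):
    """Hypothetical cognitive strategy: sum the shown attributions and predict
    the positive class when the total evidence exceeds a threshold."""
    return (attr.sum(axis=1) > threshold).astype(int)

# Placeholder human forward-simulation decisions: a noisy threshold on evidence.
human = (attributions.sum(axis=1) + rng.normal(0, 0.5, size=n_trials) > 0).astype(int)

# Cognitive-strategy predictions vs. an ML baseline proxy fit on raw features.
cog_pred = attribution_heuristic(attributions)
ml_pred = LogisticRegression().fit(features, human).predict(features)

print(f"cognitive strategy agreement with human decisions: {(cog_pred == human).mean():.2f}")
print(f"ML baseline agreement with human decisions:        {(ml_pred == human).mean():.2f}")
```

The structure, not the toy numbers, is the point: both predictors are scored by per-trial agreement with human choices, which is the kind of comparison the key points describe.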
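A fitted cognitive model can also be run forward to generate hypotheses before any new human study. The sketch below assumes a hypothetical `top_k_strategy` parameterized by how many attributions a user attends to, and sweeps that parameter to produce a testable prediction; the parameterization is an assumption for illustration, not taken from the paper.

```python
# Hedged sketch of hypothesis generation with a fitted cognitive model
# (hypothetical parameterization): sweep how many top attributions a user
# attends to, and predict how forward-simulation accuracy changes.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_features = 500, 6
attributions = rng.normal(size=(n_trials, n_features))
model_output = (attributions.sum(axis=1) > 0).astype(int)  # what the AI actually did

def top_k_strategy(attr, k):
    """Hypothetical strategy: attend only to the k largest-magnitude attributions."""
    idx = np.argsort(-np.abs(attr), axis=1)[:, :k]
    attended = np.take_along_axis(attr, idx, axis=1)
    return (attended.sum(axis=1) > 0).astype(int)

# Generated hypothesis: simulation accuracy should rise with attended features,
# a prediction one could then test in a cheaper, targeted human study.
for k in range(1, n_features + 1):
    acc = (top_k_strategy(attributions, k) == model_output).mean()
    print(f"attend to top {k} attribution(s): predicted simulation accuracy {acc:.2f}")
```

Running the sweep in simulation first narrows which conditions are worth the cost of a human-subject experiment, which is the usability benefit the final key point highlights.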