CoAX: Cognitive-Oriented Attribution eXplanation User Model of Human Understanding of AI Explanations

arXiv cs.AI / 5/1/2026

Key Points

  • The paper investigates why Explainable AI (XAI) explanations often fail to improve real user understanding and decision-making despite prior advances.
  • It focuses on cognitive-oriented reasoning over structured (tabular) data, comparing reasoning strategies under different XAI conditions (no explanation, feature importance, and feature attribution) in a forward-simulation decision task, i.e., anticipating the AI's decisions (an illustrative sketch follows this list).
  • The researchers elicited human reasoning strategies in a formative user study and collected human decisions in a summative user study to ground the evaluation.
  • Using cognitive modeling, the authors implement the underlying processes for each strategy and find that their cognitive models better match human decisions than machine-learning baseline proxies.
  • They show how the fitted cognitive model can generate testable hypotheses and reduce reliance on expensive human-subject experiments, supporting future improvements to XAI usability and interpretability.
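
As a concrete illustration of the kind of reasoning strategy the paper models (a sketch of ours, not code from the paper), consider a hypothetical "top-attribution" heuristic for forward simulation on tabular data: the user predicts the AI's binary decision from the sign of the single largest-magnitude feature attribution. All names and values below are invented for illustration.

```python
# Illustrative sketch only: a toy "top-attribution" reasoning strategy for a
# forward-simulation task, where a user anticipates an AI's binary decision
# from a feature-attribution explanation. Not the paper's implementation.

def top_attribution_strategy(attributions: dict[str, float]) -> int:
    """Predict the AI's decision (1 or 0) from the strongest attribution."""
    top_feature = max(attributions, key=lambda f: abs(attributions[f]))
    return 1 if attributions[top_feature] > 0 else 0

# Hypothetical attributions for one tabular record.
example = {"income": 0.42, "age": -0.10, "debt": -0.35}
print(top_attribution_strategy(example))  # -> 1 (driven by "income")
```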

Abstract

Explainable AI (XAI) aims to improve user understanding and decisions when using AI models. However, despite innovations in XAI, recent user evaluations reveal that this goal remains elusive. Understanding human cognition can help explain why users struggle to use AI explanations effectively. Focusing on reasoning over structured (tabular) data, we examined various reasoning strategies for different XAI methods (none, feature importance, feature attribution) in the decision task of anticipating AI decisions (i.e., forward simulation). We (i) elicited reasoning strategies from a formative user study, and (ii) collected decisions from a summative user study. Using cognitive modeling, we implemented the processes underlying each reasoning strategy and evaluated their alignment with human decision-making. We found that our models fit human decisions better than baseline machine-learning proxies, providing insights into which reasoning strategies are (in)effective. We then demonstrate how the fitted model can be used to form hypotheses and investigate research questions that are costly to study with real human participants. This work contributes to debugging human understanding of XAI, informing the future development of more usable and interpretable AI explanations.
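
To make the evaluation idea concrete, here is a minimal sketch (our illustration under assumed data, not the paper's code) of scoring how well a candidate strategy's predicted decisions align with observed human decisions, compared against a machine-learning proxy; the per-trial decisions below are invented.

```python
# Minimal sketch of alignment scoring between predicted and human decisions.
# All data here is hypothetical; the paper's actual fitting procedure may
# use likelihoods or other criteria rather than raw agreement.

from collections.abc import Sequence

def agreement(predicted: Sequence[int], human: Sequence[int]) -> float:
    """Fraction of trials where a strategy reproduces the human's decision."""
    assert len(predicted) == len(human)
    return sum(p == h for p, h in zip(predicted, human)) / len(human)

# Invented per-trial decisions from humans, a cognitive strategy, and an
# ML baseline proxy trained to imitate the humans.
human_decisions     = [1, 0, 1, 1, 0, 1]
cognitive_predicted = [1, 0, 1, 0, 0, 1]
ml_proxy_predicted  = [1, 1, 0, 0, 0, 1]

print(agreement(cognitive_predicted, human_decisions))  # ~0.83
print(agreement(ml_proxy_predicted, human_decisions))   # 0.50
```

A higher agreement score for a cognitive strategy than for the ML proxy would mirror the paper's finding that the cognitive models fit human decisions better.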