Using Learning Theories to Evolve Human-Centered XAI: Future Perspectives and Challenges

arXiv cs.AI / 4/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that as AI systems become larger and more complex, explaining them effectively becomes harder, raising fundamental questions about why we explain AI and what exactly should be explained.
  • It proposes integrating theories of learning into the XAI lifecycle, aiming to make AI explanations more supportive of how humans learn.
  • The authors advocate for a learner-centered approach to XAI to improve human agency and to better manage or mitigate risks associated with explanations.
  • The work highlights both opportunities and challenges in adopting this learner-centered framework for assessing, designing, and evaluating explanation methods.

Abstract

As Artificial Intelligence (AI) systems continue to grow in size and complexity, so does the difficulty of the quest for AI transparency. In a world of large models and complex AI systems, why do we explain AI and what should we explain? While explanations serve multiple functions, in the face of complexity humans have used and continue to use explanations to foster learning. In this position paper, we discuss how learning theories can be infused into the XAI lifecycle, as well as the key opportunities and challenges when adopting a learner-centered approach to assess, design, and evaluate AI explanations. Building on past work, we argue that a learner-centered approach to Explainable AI (XAI) can enhance human agency and ease XAI risk mitigation, helping evolve the practice of human-centered XAI.