Ceci n'est pas une explication: Evaluating Explanation Failures as Explainability Pitfalls in Language Learning Systems

arXiv cs.AI / 30 April 2026


Key Points

  • AI language-learning systems can provide personalized feedback, but it may fail in ways learners and even teachers cannot easily detect, leading to reinforced misconceptions and worse outcomes over time.
  • The paper introduces L2-Bench, a benchmark concept for evaluating feedback quality along six dimensions: diagnostic accuracy, awareness of appropriacy, causes of error, prioritization, guidance for improvement, and support for self-regulation.
  • The authors identify “explainability pitfalls”: AI-generated explanations that look helpful on the surface yet are fundamentally flawed, raising the risk of harms to student attainment, human–AI interaction, and socioaffective well-being.
  • Language-learning-specific context can amplify these risks, and the paper calls for more attention to open questions when designing evaluation frameworks for AI explanations in this domain.

Abstract

AI-powered language learning tools increasingly provide instant, personalised feedback to millions of learners worldwide. However, this feedback can fail in ways that are difficult for learners, and even teachers, to detect, potentially reinforcing misconceptions and eroding learning outcomes over extended use. We present a portion of L2-Bench, a benchmark for evaluating AI systems in language education that includes (but is not limited to) six critical dimensions of effective feedback: diagnostic accuracy, awareness of appropriacy, causes of error, prioritisation, guidance for improvement, and supporting self-regulation. We analyse how AI systems can fail with respect to these dimensions. These failures, we argue, give rise to "explainability pitfalls": AI-generated explanations that appear helpful on the surface but are fundamentally flawed, increasing the risk of attainment, human-AI interaction, and socioaffective harms. We discuss how the specific context of language learning amplifies these risks and outline open questions that we believe merit more attention when designing evaluation frameworks for this domain. Our analysis aims to expand the community's understanding of both the typology of explainability pitfalls and the contextual dynamics in which they may occur, in order to encourage AI developers to design safer, more trustworthy, and more effective AI explanations.
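
Only a portion of L2-Bench is presented, and no concrete scoring schema appears here. Purely as an illustration of what rating feedback along the six dimensions could look like, the sketch below encodes one rating per feedback message; every name, the 0–1 scale, and the unweighted mean are assumptions for the sake of the example, not details taken from the paper.

```python
from dataclasses import dataclass, fields


@dataclass
class FeedbackRating:
    """Hypothetical rating of one AI feedback message along the six dimensions."""
    diagnostic_accuracy: float       # was the learner's actual error identified?
    appropriacy_awareness: float     # is register/contextual appropriacy respected?
    causes_of_error: float           # is the likely cause of the error explained?
    prioritisation: float            # are the most important issues surfaced first?
    improvement_guidance: float      # are concrete next steps offered?
    self_regulation_support: float   # does it help the learner self-monitor?

    def overall(self) -> float:
        # Unweighted mean over all dimensions (an assumption for
        # illustration, not a metric from the paper).
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)


# Example: feedback that diagnoses the error well but offers the learner
# little support for self-regulation.
rating = FeedbackRating(0.9, 0.6, 0.4, 0.7, 0.5, 0.3)
print(f"overall: {rating.overall():.2f}")  # -> overall: 0.57
```

A per-dimension record like this makes the paper's central concern easy to see: a response can score highly on diagnostic accuracy while still being an explainability pitfall, because an aggregate or surface impression hides low scores on causes of error or self-regulation support.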