Explainable AI for Blind and Low-Vision Users: Navigating Trust, Modality, and Interpretability in the Agentic Era

arXiv cs.AI / 4/2/2026


Key Points

  • The paper argues that Explainable AI is still largely designed around visual interfaces, leaving blind and low-vision (BLV) users without accessible explanations for trustworthy assistive AI.
  • It highlights that the shift toward agentic systems (multi-step, long-horizon decision-making) makes undetected errors harder to correct, increasing the need for better interpretability and accountability.
  • Through user interviews and analysis of contemporary research, the authors identify a modality gap between predominantly visual explanations and BLV users' needs: these users value conversational explanations, yet they often experience "self-blame" when the AI fails.
  • The work proposes a research agenda for accessible XAI in agentic settings, emphasizing multimodal interfaces, blame-aware explanation design, and participatory development that involves BLV users directly.

Abstract

Explainable Artificial Intelligence (XAI) is critical for ensuring trust and accountability, yet its development remains predominantly visual. For blind and low-vision (BLV) users, the lack of accessible explanations creates a fundamental barrier to the independent use of AI-driven assistive technologies. This problem intensifies as AI systems shift from single-query tools into autonomous agents that take multi-step actions and make consequential decisions across extended task horizons, where a single undetected error can propagate irreversibly before any feedback is available. This paper investigates the unique XAI requirements of the BLV community through a comprehensive analysis of user interviews and contemporary research. By examining usage patterns across environmental perception and decision support, we identify a significant modality gap. Empirical evidence suggests that while BLV users highly value conversational explanations, they frequently experience "self-blame" for AI failures. The paper concludes with a research agenda for accessible Explainable AI in agentic systems, advocating for multimodal interfaces, blame-aware explanation design, and participatory development.