Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

arXiv cs.LG / 4/14/2026


Key Points

  • The paper reviews explainable human activity recognition (XAI-HAR) approaches aimed at making deep learning-based HAR models more transparent and trustworthy for real-world deployment.
  • It proposes a unified framework that distinguishes conceptual dimensions of explainability from the specific algorithmic explanation mechanisms used in different HAR settings.
  • The authors present a mechanism-centric taxonomy spanning wearable, ambient, physiological, and multimodal sensing scenarios, accounting for HAR’s temporal, multimodal, and semantic complexities.
  • The review summarizes interpretability goals, explanation targets, limitations, and how current evaluation practices measure XAI-HAR reliability.
  • It identifies key challenges for achieving dependable, deployable XAI-HAR and outlines research directions toward more human-centered activity recognition systems.

Abstract

Human activity recognition (HAR) has become a key component of intelligent systems for healthcare monitoring, assistive living, smart environments, and human-computer interaction. Although deep learning has substantially improved HAR performance on multivariate sensor data, the resulting models often remain opaque, limiting trust, reliability, and real-world deployment. Explainable artificial intelligence (XAI) has therefore emerged as a critical direction for making HAR systems more transparent and human-centered. This paper presents a comprehensive review of explainable HAR methods across wearable, ambient, physiological, and multimodal sensing settings. We introduce a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms, reducing ambiguities in prior surveys. Building on this distinction, we present a mechanism-centric taxonomy of XAI-HAR methods covering major explanation paradigms. The review examines how these methods address the temporal, multimodal, and semantic complexities of HAR, and summarizes their interpretability objectives, explanation targets, and limitations. In addition, we discuss current evaluation practices, highlight key challenges in achieving reliable and deployable XAI-HAR, and outline directions toward trustworthy activity recognition systems that better support human understanding and decision-making.