A Bayesian Framework for Uncertainty-Aware Explanations in Power Quality Disturbance Classification

arXiv cs.LG / 4/16/2026


Key Points

  • The paper addresses a limitation of existing explainable AI (XAI) for power quality disturbance (PQD) classification: current methods produce deterministic explanations that ignore uncertainty.
  • It proposes a Bayesian framework that generates a distribution over relevance attributions for each instance, yielding uncertainty-aware, instance-specific explanations.
  • The framework lets domain experts select explanations at chosen confidence percentiles, tailoring interpretability to specific disturbance types.
  • Experiments on both synthetic and real-world PQD datasets show improved transparency and reliability of PQD classifiers when using uncertainty-aware explanations.

Abstract

Advanced deep learning methods have shown remarkable success in power quality disturbance (PQD) classification. To enhance model transparency, explainable AI (XAI) techniques have been developed to provide instance-specific interpretations of classifier decisions. However, conventional XAI methods yield deterministic explanations, overlooking uncertainty and limiting reliability in safety-critical applications. This paper proposes a Bayesian explanation framework that models explanation uncertainty by generating a relevance attribution distribution for each instance. This method allows experts to select explanations based on confidence percentiles, thereby tailoring interpretability according to specific disturbance types. Extensive experiments on synthetic and real-world power quality datasets demonstrate that the proposed framework improves the transparency and reliability of PQD classifiers through uncertainty-aware explanations.
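The summary does not specify how the relevance-attribution distribution is obtained, so the following is only a minimal sketch of the general idea, assuming Monte Carlo dropout over input-gradient saliency maps for a hypothetical 1-D CNN classifier (`PQDClassifier`, `mc_attribution_distribution`, and all parameters are illustrative, not the paper's actual method). It shows how sampling many attributions per instance yields percentile-based, uncertainty-aware explanations.

```python
# Hedged sketch: one common way to get a *distribution* of relevance attributions
# is Monte Carlo dropout combined with input-gradient saliency, summarized by
# confidence percentiles. The paper's exact Bayesian formulation may differ.
import torch
import torch.nn as nn


class PQDClassifier(nn.Module):
    """Toy 1-D CNN for power-quality-disturbance waveforms (placeholder model)."""

    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Dropout(p=0.2),                      # dropout kept stochastic at explanation time
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_attribution_distribution(model, x, target, n_samples=50):
    """Draw input-gradient saliency maps under MC dropout; returns (n_samples, signal_len)."""
    model.train()  # keep dropout active; in practice, freeze batch-norm statistics
    maps = []
    for _ in range(n_samples):
        x_in = x.clone().requires_grad_(True)
        score = model(x_in)[0, target]   # logit of the class being explained
        score.backward()
        maps.append(x_in.grad.detach().abs().squeeze())
    return torch.stack(maps)


if __name__ == "__main__":
    torch.manual_seed(0)
    signal = torch.randn(1, 1, 256)              # one synthetic PQD waveform
    model = PQDClassifier()
    pred = model(signal).argmax(dim=1).item()
    attr = mc_attribution_distribution(model, signal, pred)
    # An expert can pick an explanation at a chosen confidence percentile,
    # e.g. the 90th percentile highlights samples that are consistently relevant
    # across the sampled attributions, while the median gives a typical map.
    p50 = attr.quantile(0.50, dim=0)
    p90 = attr.quantile(0.90, dim=0)
    print(p50.shape, p90.shape)
```

Under this (assumed) setup, the spread between percentile maps for a given waveform indicates how stable the explanation is, which is what lets reliability vary by disturbance type as the key points describe.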