Empirical Characterization of Rationale Stability Under Controlled Perturbations for Explainable Pattern Recognition

arXiv cs.AI / 4/7/2026


Key Points

  • The paper proposes a new metric to quantify rationale (explanation) stability across inputs that share the same label and under label-preserving perturbations, addressing a gap in instance-centric XAI evaluation.
  • It implements the metric using SHAP-based feature importance computed from a pre-trained BERT model on SST-2, with robustness checks across RoBERTa, DistilBERT, and IMDB.
  • The metric measures cosine similarity of SHAP value vectors to detect inconsistent explanation behaviors, such as over-reliance on particular features or drifting reasoning for similar predictions.
  • Experiments test whether the metric can identify misaligned predictions and inconsistencies in explanations, benchmarking against standard fidelity metrics.
  • The work includes a publicly available codebase and is positioned as a framework for more robust verification of rationale stability in trustworthy pattern recognition systems.

Abstract

Reliable pattern recognition systems should behave consistently across similar inputs, and their explanations should remain stable. However, most Explainable AI evaluations remain instance-centric and do not explicitly quantify whether attribution patterns are consistent across samples that share the same class or represent small variations of the same input. In this work, we propose a novel metric for assessing the consistency of model explanations, both across inputs that share the same label and under label-preserving perturbations. We implement this metric using a pre-trained BERT model on the SST-2 sentiment analysis dataset, with additional robustness tests on RoBERTa, DistilBERT, and IMDB, applying SHAP to compute feature importance for various test samples. The metric quantifies the cosine similarity of SHAP value vectors for inputs with the same label, aiming to detect inconsistent behaviors such as biased reliance on certain features or failure to maintain consistent reasoning for similar predictions. Through a series of experiments, we evaluate the ability of this metric to identify misaligned predictions and inconsistencies in model explanations, comparing it against standard fidelity metrics to assess whether it can effectively identify when a model's behavior deviates from its intended objectives. The proposed framework deepens understanding of model behavior by enabling more robust verification of rationale stability, which is critical for building trustworthy AI systems. By quantifying whether models rely on consistent attribution patterns for similar inputs, the approach supports more reliable evaluation of model behavior in practical pattern recognition pipelines. Our code is publicly available at https://github.com/anmspro/ESS-XAI-Stability.
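The core computation described in the abstract, mean pairwise cosine similarity of attribution vectors within each class, can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes the SHAP attribution vectors have already been computed and aligned to a shared feature space (one row per sample), and the function name `rationale_stability` is hypothetical.

```python
import numpy as np

def rationale_stability(attributions, labels):
    """Mean pairwise cosine similarity of attribution vectors within each label.

    attributions: (n_samples, n_features) array of per-feature importance
                  scores (e.g. SHAP values aligned to a shared vocabulary).
    labels: (n_samples,) array of class labels.
    Returns a dict mapping each label to its mean within-class cosine
    similarity (NaN for classes with fewer than two samples).
    """
    attributions = np.asarray(attributions, dtype=float)
    labels = np.asarray(labels)
    scores = {}
    for label in np.unique(labels):
        vecs = attributions[labels == label]
        n = len(vecs)
        if n < 2:
            scores[label] = float("nan")
            continue
        # Normalize rows to unit length (guard against zero vectors).
        norms = np.linalg.norm(vecs, axis=1, keepdims=True)
        unit = vecs / np.where(norms == 0, 1.0, norms)
        # Pairwise cosine similarities via a single matrix product.
        sim = unit @ unit.T
        # Average over off-diagonal entries only (exclude self-similarity).
        scores[label] = float((sim.sum() - np.trace(sim)) / (n * (n - 1)))
    return scores
```

A high score for a class suggests the model attends to similar features across its members; a low score flags the kind of drifting rationale the metric is designed to surface. The same function could be applied to perturbed copies of a single input to probe stability under label-preserving perturbations.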