AI Navigate

Hypothesis Class Determines Explanation: Why Accurate Models Disagree on Feature Attribution

arXiv cs.LG · March 18, 2026


Key Points

  • The paper challenges the assumption that prediction-equivalent models produce equivalent explanations, showing substantial attribution differences.
  • A large-scale empirical study across 24 datasets and multiple model classes finds that models with identical predictive behavior can have very different feature attributions.
  • The attribution disagreement is highly structured: agreement is strong within the same hypothesis class but substantially reduced across classes (e.g., tree-based vs. linear), consistently near or below the lottery threshold.
  • The authors term this phenomenon the Explanation Lottery, show theoretically that the resulting Agreement Gap persists even when the data-generating process contains interaction structure, and propose the Explanation Reliability Score R(x) to diagnose explanation stability without retraining.
  • The results imply that model selection is not explanation-neutral: the chosen hypothesis class can determine which features are attributed responsibility for a decision, with consequences for auditing and regulatory evaluation.

Abstract

The assumption that prediction-equivalent models produce equivalent explanations underlies many practices in explainable AI, including model selection, auditing, and regulatory evaluation. In this work, we show that this assumption does not hold. Through a large-scale empirical study across 24 datasets and multiple model classes, we find that models with identical predictive behavior can produce substantially different feature attributions. This disagreement is highly structured: models within the same hypothesis class exhibit strong agreement, while cross-class pairs (e.g., tree-based vs. linear) trained on identical data splits show substantially reduced agreement, consistently near or below the lottery threshold. We identify hypothesis class as the structural driver of this phenomenon, which we term the Explanation Lottery. We theoretically show that the resulting Agreement Gap persists under interaction structure in the data-generating process. This structural finding motivates a post-hoc diagnostic, the Explanation Reliability Score R(x), which predicts when explanations are stable across architectures without additional training. Our results demonstrate that model selection is not explanation-neutral: the hypothesis class chosen for deployment can determine which features are attributed responsibility for a decision.
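The core comparison described above can be illustrated in miniature: train two accurate models from different hypothesis classes on the same split, attribute features with a common method, and measure how well the attributions agree. The sketch below is not the paper's protocol; it uses scikit-learn models, a hand-rolled permutation-importance loop, and a Spearman-style rank correlation purely as stand-ins for whatever attribution method and agreement metric the authors employ.

```python
# Illustrative sketch (assumed setup, not the paper's experiment):
# compare feature attributions of a tree ensemble and a linear model
# trained on the same data split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic task with a handful of informative features.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lin = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def perm_importance(model, X, y, rng):
    """Accuracy drop when each feature column is shuffled (simple
    permutation importance, one repeat)."""
    base = model.score(X, y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])          # column slice is a view; shuffled in place
        drops.append(base - model.score(Xp, y))
    return np.array(drops)

rng = np.random.default_rng(0)
imp_tree = perm_importance(tree, X_te, y_te, rng)
imp_lin = perm_importance(lin, X_te, y_te, rng)

def ranks(a):
    return np.argsort(np.argsort(a))

# Rank correlation of the two attribution vectors: 1.0 would mean the
# two hypothesis classes tell the same story about the features.
rho = np.corrcoef(ranks(imp_tree), ranks(imp_lin))[0, 1]
print(f"tree acc={tree.score(X_te, y_te):.3f}, "
      f"linear acc={lin.score(X_te, y_te):.3f}")
print(f"attribution rank agreement = {rho:.2f}")
```

Even when both models score similarly on the test set, the rank agreement between their attribution vectors can be well below 1, which is the kind of cross-class disagreement the paper quantifies at scale.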