Hypothesis Class Determines Explanation: Why Accurate Models Disagree on Feature Attribution
arXiv cs.LG / 3/18/2026
Key Points
- The paper challenges the assumption that prediction-equivalent models produce equivalent explanations, showing substantial attribution differences.
- A large-scale empirical study across 24 datasets and multiple model classes finds that models with identical predictive behavior can have very different feature attributions.
- The attribution disagreement is structured: attributions agree strongly within a hypothesis class but weakly across classes (e.g., tree-based vs. linear models), with cross-class agreement often near the lottery threshold (see the first sketch after this list).
- The authors formalize this as the Explanation Lottery, prove an Agreement Gap that persists even when the data-generating process contains feature interactions, and propose an Explanation Reliability Score R(x) that diagnoses per-instance explanation stability without retraining (see the second sketch below).
- The results imply that model selection is not explanation-neutral: the chosen hypothesis class can determine which features are credited for a decision, with direct consequences for auditing and regulatory evaluation.
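To make the within- versus across-class comparison concrete, here is a minimal sketch of one way to measure attribution agreement. The attribution method (permutation importance), the agreement metric (Spearman rank correlation), and the synthetic data are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: compare feature attributions within and across hypothesis
# classes. Permutation importance and Spearman correlation are assumed here;
# the paper's actual attribution method and metric may differ.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def importances(model):
    """Fit a model and return its mean permutation-importance vector."""
    model.fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return result.importances_mean

tree_a = importances(RandomForestClassifier(random_state=0))
tree_b = importances(RandomForestClassifier(random_state=1))  # same class, new seed
linear = importances(LogisticRegression(max_iter=1000))       # different class

# The paper's reported pattern: high agreement within a class, low across classes.
print("within-class (forest vs. forest):", spearmanr(tree_a, tree_b).correlation)
print("across-class (forest vs. linear):", spearmanr(tree_a, linear).correlation)
```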
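A per-instance reliability score in the spirit of R(x) could then be computed from the attributions that already-trained models assign to a single input, consistent with the no-retraining claim. The aggregation below (mean pairwise cosine similarity) is an assumption; the summary does not give the paper's definition.

```python
# Hypothetical sketch of a per-instance Explanation Reliability Score. The
# paper defines R(x) precisely; mean pairwise cosine similarity is only an
# assumed stand-in that captures the same idea of cross-model stability.
from itertools import combinations
import numpy as np

def reliability_score(attributions: np.ndarray) -> float:
    """attributions: (n_models, n_features) local attribution vectors at one x.

    Returns mean pairwise cosine similarity in [-1, 1]. Values near 1 mean
    the explanation is stable across models; low values flag an instance
    whose explanation depends on the chosen hypothesis class.
    """
    norms = np.linalg.norm(attributions, axis=1, keepdims=True)
    unit = attributions / np.clip(norms, 1e-12, None)
    pairs = combinations(range(len(unit)), 2)
    return float(np.mean([unit[i] @ unit[j] for i, j in pairs]))

# Example: two agreeing models and one dissenter at the same instance x.
attrs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.1, 0.8]])
print(reliability_score(attrs))  # well below 1.0, flagging instability
```

On real models, the rows would come from local attributions (e.g., SHAP or gradient-based) of models that are already trained, so unstable instances can be flagged without any retraining.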
Related Articles
I Was Wrong About AI Coding Assistants. Here's What Changed My Mind (and What I Built About It).
Dev.to
Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
A supervisor or "manager" AI agent is the wrong way to control AI
Reddit r/artificial
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA