ShapShift: Explaining Model Prediction Shifts with Subgroup Conditional Shapley Values
arXiv stat.ML / 4/14/2026
Key Points
- The paper introduces “ShapShift,” a Shapley-value-based method to explain how changes in input distributions shift a model’s average predictions.
- It attributes prediction shifts to changes in the conditional probabilities of interpretable data subgroups defined by the decision-tree structure, starting with single decision trees, where explanations are exact at split nodes (see the single-tree sketch after this list).
- The method is extended to tree ensembles by selecting the most explanatory tree and modeling the remaining residual effects.
- A model-agnostic variant uses surrogate trees trained with a new objective, allowing the approach to be applied to non-tree models such as neural networks (see the surrogate sketch below).
- Although exact computation can be costly, the authors describe approximation techniques and report that the method yields simple, faithful, near-complete explanations useful for monitoring models in changing environments.
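To make the single-tree case concrete, here is a minimal sketch of the additive decomposition the subgroup attribution builds on, assuming synthetic data and scikit-learn's `DecisionTreeRegressor` (neither is from the paper): the tree's leaves define the interpretable subgroups, and the shift in the model's average prediction between a reference distribution P and a shifted distribution Q decomposes exactly as E_Q[f] − E_P[f] = Σ_g (q_g − p_g) v_g, where p_g and q_g are the subgroup masses under each distribution and v_g is the leaf prediction. The paper's Shapley attribution over split nodes is more refined; this only illustrates the quantity being attributed.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical reference sample (P) and covariate-shifted sample (Q).
X_ref = rng.normal(0.0, 1.0, size=(5000, 3))
y_ref = X_ref[:, 0] + 0.5 * X_ref[:, 1] ** 2 + rng.normal(0.0, 0.1, 5000)
X_new = rng.normal(0.6, 1.0, size=(5000, 3))  # mean shift in every feature

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_ref, y_ref)

def leaf_masses(model, X):
    """Empirical probability of each leaf subgroup under a sample."""
    ids, counts = np.unique(model.apply(X), return_counts=True)
    return dict(zip(ids, counts / len(X)))

def leaf_value(model, g):
    """Mean prediction v_g stored at leaf node g."""
    return float(model.tree_.value[g][0][0])

p = leaf_masses(tree, X_ref)  # subgroup masses under P
q = leaf_masses(tree, X_new)  # subgroup masses under Q

# Per-subgroup contributions (q_g - p_g) * v_g; for a single tree they
# sum exactly to E_Q[f] - E_P[f].
contrib = {g: (q.get(g, 0.0) - p.get(g, 0.0)) * leaf_value(tree, g)
           for g in set(p) | set(q)}

print(f"total shift: {sum(contrib.values()):+.4f}")
for g, c in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"leaf {g:3d}: mass {p.get(g, 0.0):.3f} -> {q.get(g, 0.0):.3f}, "
          f"contribution {c:+.4f}")
```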
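For the model-agnostic variant, the paper trains surrogate trees with a dedicated objective. As a rough stand-in, continuing the snippet above (reusing `X_ref`, `X_new`, and `leaf_masses`), an ordinary regression tree fit to a black-box model's outputs illustrates the idea; the `MLPRegressor` here is just a placeholder black box, not the paper's setup.

```python
from sklearn.neural_network import MLPRegressor

# Placeholder black box (assumption; any non-tree model would do).
blackbox = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0).fit(X_ref, y_ref)

# Ordinary least-squares tree fit to the black box's predictions; the
# paper's surrogates use a dedicated training objective instead.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(
    X_ref, blackbox.predict(X_ref))

p_s = leaf_masses(surrogate, X_ref)
q_s = leaf_masses(surrogate, X_new)
shift = sum((q_s.get(g, 0.0) - p_s.get(g, 0.0)) * leaf_value(surrogate, g)
            for g in set(p_s) | set(q_s))
print(f"surrogate estimate of the black box's prediction shift: {shift:+.4f}")
```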