A comparative analysis of machine learning models in SHAP analysis
arXiv cs.LG / 4/9/2026
Tags: Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research
Key Points
- The paper argues that black-box machine learning models are increasingly common but are often hard to interpret, motivating the use of SHAP (SHapley Additive exPlanations) for feature-level explanation of predictions.
- It notes that how SHAP values should be interpreted depends on the specific underlying model, so there is no single, universal SHAP analysis procedure.
- The authors provide a comparative investigation of SHAP analysis across different machine learning models and datasets to characterize the nuances in how SHAP outputs should be interpreted.
- The work includes a new generalization of the waterfall plot for multi-class classification problems to better visualize per-class/per-sample contribution breakdowns.
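To make the SHAP machinery behind these points concrete: SHAP values are the Shapley values of a cooperative game over features, and they satisfy the efficiency property that per-feature attributions sum to the prediction minus a baseline prediction. The sketch below (not the paper's code) computes exact Shapley values for a tiny model by enumerating feature coalitions, imputing "absent" features from a baseline point; that imputation scheme is one common convention and an assumption here, as is the toy model `f`.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x), where features absent
    from a coalition are replaced by the corresponding baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy model with an interaction term (hypothetical, for illustration):
# attributions reflect the model's functional form, not just the data.
f = lambda z: 2 * z[0] + z[0] * z[1]
x, base = [1.0, 3.0], [0.0, 0.0]
phi = shapley_values(f, x, base)

# Efficiency: attributions sum exactly to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

Because the interaction term `z[0] * z[1]` is split between the two features, refitting a different model to the same data would generally shift these attributions, which is the model-dependence the paper's comparative analysis examines.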



