MinShap: A Modified Shapley Value Approach for Feature Selection

arXiv stat.ML / 4/17/2026


Key Points

  • The paper introduces MinShap, a modified Shapley value framework tailored for feature selection when relationships are unknown and non-linear and features may be highly dependent.
  • Unlike standard Shapley-based attribution, which mixes direct and indirect effects, MinShap uses the minimum marginal contribution across feature permutations to better isolate each feature's direct usefulness for prediction.
  • The authors provide a theoretical justification based on a faithfulness assumption in DAGs and offer a guarantee related to MinShap’s Type I error.
  • Experiments (numerical simulations and real-data studies) indicate MinShap can outperform established feature selection methods such as LOCO, GCM, and Lasso in both accuracy and stability.
  • The work also proposes additional MinShap-related algorithms using a multiple-testing/p-value viewpoint to improve performance in low-sample regimes, along with further theoretical guarantees.
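
The contrast described in the bullets above can be written out as follows. This is a sketch based only on the summary's description; the notation ($v$ for the value function, $S_i^{\pi}$ for the set of features preceding $i$ in permutation $\pi$, $\Pi_d$ for all permutations of $d$ features) is assumed here and the paper's exact definition may differ:

```latex
% Standard Shapley value: average marginal contribution over permutations
\phi_i = \frac{1}{d!} \sum_{\pi \in \Pi_d}
  \Bigl[ v\bigl(S_i^{\pi} \cup \{i\}\bigr) - v\bigl(S_i^{\pi}\bigr) \Bigr]

% MinShap (as described above): minimum marginal contribution
\mathrm{MinShap}_i = \min_{\pi \in \Pi_d}
  \Bigl[ v\bigl(S_i^{\pi} \cup \{i\}\bigr) - v\bigl(S_i^{\pi}\bigr) \Bigr]
```

Intuitively, a feature whose predictive contribution is fully explained by other features will have a zero marginal contribution in at least one ordering, so its MinShap score is driven to zero even when its average (Shapley) attribution remains positive.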

Abstract

Feature selection is a classical problem in statistics and machine learning, and it remains extremely challenging, especially in the context of unknown non-linear relationships with dependent features. On the other hand, Shapley values are a classic solution concept from cooperative game theory that is widely used for feature attribution in general non-linear models with highly dependent features. However, Shapley values are not naturally suited for feature selection, since they tend to capture both direct effects from each feature to the response and indirect effects through other features. In this paper, we combine the advantages of Shapley values and adapt them to feature selection by proposing *MinShap*, a modification of the Shapley value framework, along with a suite of other related algorithms. In particular, instead of taking the average marginal contribution over permutations of features, MinShap considers the minimum marginal contribution across permutations. We provide a theoretical foundation motivated by the faithfulness assumption in DAGs (directed acyclic graphical models), a guarantee on the Type I error of MinShap, and show through numerical simulations and real-data experiments that MinShap tends to outperform state-of-the-art feature selection algorithms such as LOCO, GCM, and Lasso in terms of both accuracy and stability. We also introduce a suite of algorithms related to MinShap from a multiple-testing/p-value perspective that improves performance in lower-sample settings, and we provide supporting theoretical guarantees.
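
To make the min-versus-average distinction concrete, here is a minimal permutation-based sketch. It is not the paper's implementation: the value function `v` below is a hypothetical toy (features 1 and 2 are redundant copies of the same signal; feature 3 carries independent signal), chosen only to show how the minimum marginal contribution zeroes out a redundant feature while the Shapley average does not:

```python
import itertools

def marginal_contributions(value, features, i):
    """Marginal contribution of feature i in every permutation:
    v(S ∪ {i}) - v(S), where S is the set of features preceding i."""
    contribs = []
    for perm in itertools.permutations(features):
        pos = perm.index(i)
        prefix = frozenset(perm[:pos])
        contribs.append(value(prefix | {i}) - value(prefix))
    return contribs

def shapley(value, features, i):
    """Standard Shapley value: average marginal contribution."""
    c = marginal_contributions(value, features, i)
    return sum(c) / len(c)

def minshap(value, features, i):
    """MinShap-style score: minimum marginal contribution."""
    return min(marginal_contributions(value, features, i))

# Toy value function (hypothetical, not from the paper):
# features 1 and 2 are interchangeable; feature 3 adds independent value.
v = lambda S: (1.0 if (1 in S or 2 in S) else 0.0) + (0.5 if 3 in S else 0.0)

print(shapley(v, [1, 2, 3], 1))  # redundant feature still gets positive average credit
print(minshap(v, [1, 2, 3], 1))  # minimum contribution is 0: orderings with 2 first expose redundancy
print(minshap(v, [1, 2, 3], 3))  # independent feature keeps a positive minimum
```

Exhaustive enumeration of all $d!$ permutations is only feasible for tiny $d$; any practical version would presumably sample permutations, but the min/average contrast is the point of the sketch.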