Causally-Guided Diffusion for Stable Feature Selection

arXiv cs.LG / 2026-03-24


Key Points

  • The paper argues that conventional feature selection methods often overfit to one data distribution and may pick spurious features that break under distribution shifts.
  • It introduces CGDFS, which frames stable feature selection as approximate posterior inference over feature subsets with objectives that jointly favor low prediction error and low cross-environment variance.
  • CGDFS uses a diffusion model as a learned prior over continuous selection masks, capturing structural dependencies among features while enabling scalable search over a large subset space.
  • The method applies guided, annealed Langevin sampling that combines the diffusion prior with a stability-aware likelihood inspired by causal invariance, avoiding hard discrete optimization.
  • Experiments on real datasets with distribution shifts show CGDFS selects more stable and transferable features and improves out-of-distribution performance versus several sparsity-, tree-, and stability-based baselines.
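Read literally, the bullets above suggest a posterior over masks of roughly the following shape. This is a sketch, not the paper's stated formula: the temperature β, the trade-off weight λ, and the per-environment risk notation R_e are our assumptions.

```latex
% m: continuous selection mask over features
% R_e(m): prediction risk when using masked features in environment e
% p_theta(m): the learned diffusion prior over masks
p(m \mid \mathcal{D}) \;\propto\; p_\theta(m)\,
  \exp\!\Big(-\beta \Big[\tfrac{1}{|E|}\textstyle\sum_{e \in E} R_e(m)
  \;+\; \lambda\, \mathrm{Var}_{e \in E}\, R_e(m)\Big]\Big)
```

Under this reading, sampling from the posterior rather than optimizing a single mask is what makes the procedure uncertainty-aware: the diffusion prior concentrates mass on structurally plausible masks, and the exponential likelihood tilts that mass toward masks that are both accurate and stable across environments.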

Abstract

Feature selection is fundamental to robust data-centric AI, but most existing methods optimize predictive performance under a single data distribution. This often selects spurious features that fail under distribution shifts. Motivated by principles from causal invariance, we study feature selection from a stability perspective and introduce Causally-Guided Diffusion for Stable Feature Selection (CGDFS). In CGDFS, we formalize feature selection as approximate posterior inference over feature subsets, where posterior mass favors low prediction error and low cross-environment variance. Our framework combines three key insights. First, we formulate feature selection as stability-aware posterior sampling, in which causal invariance serves as a soft inductive bias rather than explicit causal discovery. Second, we train a diffusion model as a learned prior over plausible continuous selection masks, combined with a stability-aware likelihood that rewards invariance across environments. This diffusion prior captures structural dependencies among features and enables scalable exploration of the combinatorially large selection space. Third, we perform guided annealed Langevin sampling that combines the diffusion prior with the stability objective, yielding tractable, uncertainty-aware posterior inference that avoids hard discrete optimization and produces robust feature selections. We evaluate CGDFS on open-source real-world datasets exhibiting distribution shifts. Across both classification and regression tasks, CGDFS consistently selects more stable and transferable feature subsets, leading to improved out-of-distribution performance and greater selection robustness compared to sparsity-based, tree-based, and stability-selection baselines.
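The guided annealed Langevin step the abstract describes can be sketched in a few lines. Everything here is illustrative, not the paper's implementation: the environments are synthetic, a simple Gaussian score stands in for the trained diffusion prior, finite differences stand in for autodiff, and the energy is one plausible reading of "mean error plus cross-environment variance" (hyperparameters `lam`, `ridge`, and the annealing schedule are our choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: two environments share stable
# features 0-1, while feature 2's relation to y flips across environments.
def make_env(n, spurious_coef):
    X = rng.normal(size=(n, 3))
    y = 2.0 * X[:, 0] - X[:, 1] + spurious_coef * X[:, 2] + 0.1 * rng.normal(size=n)
    return X, y

envs = [make_env(400, 1.5), make_env(400, -1.5)]

def stability_energy(mask, lam=5.0, ridge=1e-2):
    # Fit one ridge model on pooled masked data, then score per environment.
    Xp = np.vstack([X * mask for X, _ in envs])
    yp = np.concatenate([y for _, y in envs])
    w = np.linalg.solve(Xp.T @ Xp + ridge * np.eye(3), Xp.T @ yp)
    losses = np.array([np.mean((y - (X * mask) @ w) ** 2) for X, y in envs])
    # Stability-aware energy: mean error plus cross-environment variance.
    return losses.mean() + lam * losses.var()

def num_grad(f, m, eps=1e-4):
    # Finite differences stand in for autodiff of the guidance term.
    g = np.zeros_like(m)
    for i in range(m.size):
        d = np.zeros_like(m)
        d[i] = eps
        g[i] = (f(m + d) - f(m - d)) / (2 * eps)
    return g

def prior_score(m):
    # Stand-in for the learned diffusion prior's score function;
    # the real method uses a trained diffusion model over masks.
    return -(m - 0.5)

# Guided annealed Langevin sampling over the continuous mask:
# drift = prior score + negative gradient of the stability energy,
# with injected noise and a geometrically decaying step size.
mask = rng.uniform(size=3)
for eta in np.geomspace(5e-2, 1e-3, 150):
    score = prior_score(mask) - num_grad(stability_energy, mask)
    mask = mask + eta * score + np.sqrt(2 * eta) * rng.normal(size=3)
    mask = np.clip(mask, 0.0, 1.0)  # keep the relaxed mask in [0, 1]^d

print(np.round(mask, 2))
```

Swapping `prior_score` for a trained diffusion model's score network, and the finite-difference gradient for autodiff through the stability objective, recovers the structure of the guided sampler the abstract describes while keeping the whole search in a continuous relaxation of the subset space.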