Procedural Fairness via Group Counterfactual Explanation

arXiv cs.AI / 3/13/2026

Key Points

  • GCIG (Group Counterfactual Integrated Gradients) is an in-processing regularization framework that enforces explanation invariance across protected groups.
  • For each input, GCIG computes explanations relative to multiple Group Conditional baselines and penalizes cross-group variation in these attributions during training, formalizing procedural fairness as Group Counterfactual explanation stability (a minimal sketch of this mechanism follows the list).
  • Empirical comparisons against six state-of-the-art methods show that GCIG substantially reduces cross-group explanation disparity while maintaining competitive predictive performance and a favorable accuracy–fairness trade-off.
  • The authors argue that aligning model reasoning across groups offers a principled, practical path to advancing fairness beyond outcome parity, complementing existing prediction-focused fairness objectives.
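
To make the second point concrete, here is a minimal, hypothetical sketch of how a GCIG-style regularizer could look in PyTorch. The function names (`integrated_gradients`, `gcig_penalty`), the per-group, per-label baseline dictionary, and the use of attribution variance as the penalty are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of a GCIG-style regularizer (PyTorch); names, the
# baseline construction, and the exact penalty are illustrative assumptions.
import torch

def integrated_gradients(model, x, baseline, target, steps=32):
    """Integrated Gradients of the target-class logit w.r.t. x, from `baseline`."""
    alphas = torch.linspace(0.0, 1.0, steps, device=x.device).view(-1, 1, 1)
    # Points on the straight-line path from the baseline to the input:
    # shape (steps, batch, features).
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path = path.detach().requires_grad_(True)
    logits = model(path.reshape(-1, x.shape[-1])).reshape(steps, x.shape[0], -1)
    picked = logits.gather(-1, target.view(1, -1, 1).expand(steps, -1, 1)).sum()
    # create_graph=True keeps the penalty differentiable w.r.t. model weights.
    grads = torch.autograd.grad(picked, path, create_graph=True)[0]
    # Riemann approximation of the path integral, scaled by (x - baseline).
    return (x - baseline) * grads.mean(dim=0)

def gcig_penalty(model, x, y, group_baselines):
    """Penalize cross-group variation of attributions.

    `group_baselines[g][c]` is an assumed baseline for protected group `g`
    conditioned on true label `c` (e.g. the per-group mean input of that class).
    """
    per_group = []
    for g, by_label in group_baselines.items():
        baseline = torch.stack([by_label[int(c)] for c in y])  # one baseline per example
        per_group.append(integrated_gradients(model, x, baseline, y))
    attrs = torch.stack(per_group)          # (num_groups, batch, features)
    # Cross-group explanation disparity: variance of attributions over groups.
    return attrs.var(dim=0, unbiased=False).mean()
```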

Abstract

Fairness in machine learning research has largely focused on outcome-oriented fairness criteria such as Equalized Odds, while comparatively little attention has been given to procedure-oriented fairness, which addresses how a model arrives at its predictions. Neglecting procedural fairness means a model can generate different explanations for different protected groups, thereby eroding trust. In this work, we introduce Group Counterfactual Integrated Gradients (GCIG), an in-processing regularization framework that enforces explanation invariance across groups, conditioned on the true label. For each input, GCIG computes explanations relative to multiple Group Conditional baselines and penalizes cross-group variation in these attributions during training. GCIG formalizes procedural fairness as Group Counterfactual explanation stability and complements existing fairness objectives that constrain predictions alone. We compare GCIG empirically against six state-of-the-art methods; the results show that GCIG substantially reduces cross-group explanation disparity while maintaining competitive predictive performance and accuracy–fairness trade-offs. Our results also show that aligning model reasoning across groups offers a principled and practical avenue for advancing fairness beyond outcome parity.
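
Since GCIG is described as an in-processing method, a penalty of this kind would be added to the ordinary task loss during training. The sketch below, again hypothetical, shows one way the two terms could be combined; the trade-off weight `lam` and the use of cross-entropy as the task loss are assumptions for illustration.

```python
# Hypothetical training step using the penalty above as an in-processing
# regularizer; `lam` and the cross-entropy task loss are assumptions.
import torch.nn.functional as F

def train_step(model, optimizer, x, y, group_baselines, lam=1.0):
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(x), y)                     # predictive objective
    fairness_loss = gcig_penalty(model, x, y, group_baselines)   # explanation invariance
    (task_loss + lam * fairness_loss).backward()
    optimizer.step()
    return task_loss.item(), fairness_loss.item()
```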