Homophily-aware Supervised Contrastive Counterfactual Augmented Fair Graph Neural Network

arXiv cs.LG / 4/6/2026


Key Points

  • The paper introduces a fairness-aware graph neural network method that extends the counterfactual augmented fair GNN (CAF) framework to mitigate bias originating from both node features and graph structure.
  • It uses a two-phase training process: in the first phase, the graph is edited to raise homophily with respect to class labels while lowering homophily with respect to sensitive attributes.
  • In the second phase, it combines a modified supervised contrastive loss with an environmental loss to jointly optimize for prediction quality and fairness.
  • Experiments on five real-world datasets report improved results over CAF and multiple state-of-the-art graph learning baselines across both accuracy and fairness metrics.
  • The work positions homophily-aware counterfactual augmentation and contrastive/environmental objectives as a practical pathway to more reliable fair GNN training under structural bias.
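
The homophily ratios targeted by the first phase can be made concrete with a toy sketch. Edge homophily is the fraction of edges whose endpoints share a label; the paper aims to raise it for class labels and lower it for sensitive attributes. The `edit_graph` heuristic below is purely illustrative (the paper's actual editing procedure is not described here): it drops edges that simultaneously hurt both goals.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Edge homophily ratio: fraction of edges whose endpoints
    carry the same label."""
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return same.mean()

def edit_graph(edges, y, s):
    """Illustrative greedy edit (NOT the paper's method): drop an edge
    only if it links different classes (lowering class homophily) AND
    the same sensitive group (raising sensitive homophily)."""
    return [(u, v) for (u, v) in edges
            if not (y[u] != y[v] and s[u] == s[v])]

# Toy graph: 4 nodes, class labels y, binary sensitive attribute s.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
y = [0, 0, 1, 1]
s = [0, 1, 1, 0]

print(edge_homophily(edges, y))  # 0.5
print(edge_homophily(edges, s))  # 0.5
edited = edit_graph(edges, y, s)
print(edge_homophily(edited, y))  # 1.0  (class homophily up)
print(edge_homophily(edited, s))  # 0.0  (sensitive homophily down)
```

On this toy graph the edit removes the two edges that cross class boundaries within a sensitive group, which moves both ratios in the direction the paper's first phase targets.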

Abstract

In recent years, Graph Neural Networks (GNNs) have achieved remarkable success in tasks such as node classification, link prediction, and graph representation learning. However, they remain susceptible to biases that can arise not only from node attributes but also from the graph structure itself. Addressing fairness in GNNs has therefore emerged as a critical research challenge. In this work, we propose a novel model for training fairness-aware GNNs by improving the counterfactual augmented fair graph neural network (CAF) framework. Specifically, our approach introduces a two-phase training strategy: in the first phase, we edit the graph to increase the homophily ratio with respect to class labels while reducing the homophily ratio with respect to sensitive-attribute labels; in the second phase, we integrate a modified supervised contrastive loss and an environmental loss into the optimization process, enabling the model to jointly improve predictive performance and fairness. Experiments on five real-world datasets demonstrate that our model outperforms CAF and several state-of-the-art graph-based learning methods in both classification accuracy and fairness metrics.
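
The abstract does not spell out how the supervised contrastive loss is modified or how the environmental loss is defined, but the standard supervised contrastive (SupCon) objective that the second phase builds on can be sketched in NumPy. This is an assumption-laden illustration of the base loss, not the paper's actual training objective:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Standard supervised contrastive (SupCon) loss: for each anchor,
    pull together embeddings with the same label, push apart the rest."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    logits = (z @ z.T) / tau                           # scaled similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    labels = np.asarray(labels)
    mask = labels[:, None] == labels[None, :]          # positive-pair mask
    np.fill_diagonal(mask, False)                      # no self-pairs
    exp = np.exp(logits)
    np.fill_diagonal(exp, 0.0)                         # self excluded from denom
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos_counts = mask.sum(axis=1)
    valid = pos_counts > 0                             # anchors with positives
    per_anchor = -(mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return per_anchor.mean()

# Toy usage: two tight clusters, compared under aligned vs. mismatched labels.
z = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
aligned = supcon_loss(z, [0, 0, 1, 1])  # positives coincide with clusters
mixed = supcon_loss(z, [0, 1, 0, 1])    # positives straddle clusters
```

When label structure matches the embedding geometry, the loss is low; when positives straddle clusters, it rises, which is the pressure the paper exploits (with its modifications and the environmental term) to shape fair, predictive representations.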