Fairness Constraints in High-Dimensional Generalized Linear Models

arXiv stat.ML / April 21, 2026

Key Points

  • The paper highlights that machine learning models can inherit bias from historical data, creating fairness and accountability challenges.
  • It addresses a common limitation of existing fairness methods, which often require access to sensitive attributes that may be restricted by privacy or law.
  • The proposed framework infers sensitive attributes from auxiliary features and then incorporates fairness constraints directly into the training process (a formal sketch of this objective follows the list).
  • Experiments reported in the study show that this approach can reduce bias while largely maintaining predictive accuracy.
  • Overall, the work provides a practical fairness-aware learning method aimed at improving equity in algorithmic decision-making.
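
Read as a formal objective, "incorporating fairness constraints directly into the training process" typically means a constrained estimation problem. A plausible general form for a GLM is sketched below; the covariance-based constraint and the tolerance are illustrative conventions from the fairness literature, not quoted from the paper.

```latex
\min_{\beta \in \mathbb{R}^{p}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\!\left(y_i,\; x_i^{\top}\beta\right)
\quad \text{subject to} \quad
\left|\, \widehat{\operatorname{Cov}}\!\left(\hat{s}_i,\; x_i^{\top}\beta\right) \right| \le \tau
```

Here $\ell$ is the GLM negative log-likelihood, $\hat{s}_i$ is the sensitive attribute inferred from auxiliary features, and $\tau$ is a fairness tolerance; setting $\tau = 0$ forces zero empirical covariance between the model's linear scores and the inferred attribute.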

Abstract

Machine learning models often inherit biases from historical data, raising critical concerns about fairness and accountability. Conventional fairness interventions typically require access to sensitive attributes like gender or race, but privacy and legal restrictions frequently limit their use. To address this challenge, we propose a framework that infers sensitive attributes from auxiliary features and integrates fairness constraints into model training. Our approach mitigates bias while preserving predictive accuracy, offering a practical solution for fairness-aware learning. Empirical evaluations validate its effectiveness, contributing to the advancement of more equitable algorithmic decision-making.
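
To make the two-stage idea concrete, the following is a minimal, self-contained NumPy sketch: stage one infers the sensitive attribute from auxiliary features, and stage two trains a logistic-regression GLM with a demographic-parity-style covariance penalty (a soft version of the constraint above). The function names, the penalty weight `lam`, and the toy data are assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical two-stage sketch of fairness-aware GLM training (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer_sensitive(X_aux, X_aux_labeled, s_labeled, steps=500, lr=0.1):
    """Stage 1: logistic regression that predicts the sensitive attribute
    from auxiliary features, fit on a small labeled subset."""
    w = np.zeros(X_aux_labeled.shape[1])
    for _ in range(steps):
        p = sigmoid(X_aux_labeled @ w)
        w -= lr * X_aux_labeled.T @ (p - s_labeled) / len(s_labeled)
    return sigmoid(X_aux @ w)  # soft estimate of the attribute for everyone

def fit_fair_glm(X, y, s_hat, lam=5.0, steps=2000, lr=0.1):
    """Stage 2: logistic loss plus lam * Cov(score, s_hat)^2, so larger lam
    pushes the model's scores to be uncorrelated with the inferred attribute."""
    n = len(y)
    w = np.zeros(X.shape[1])
    s_c = s_hat - s_hat.mean()  # centered inferred attribute
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n                  # logistic-loss gradient
        cov = s_c @ (X @ w) / n                        # empirical covariance
        grad_fair = 2.0 * lam * cov * (X.T @ s_c) / n  # gradient of lam * cov^2
        w -= lr * (grad_loss + grad_fair)
    return w

# Toy usage on synthetic data where labels correlate with the sensitive attribute.
n, d = 1000, 5
s_true = rng.integers(0, 2, n).astype(float)           # hidden sensitive attribute
X = rng.normal(size=(n, d)) + 0.8 * s_true[:, None]
y = (X[:, 0] + 0.5 * s_true + rng.normal(size=n) > 0.5).astype(float)

X_aux = X[:, 2:]                                       # auxiliary features only
s_hat = infer_sensitive(X_aux, X_aux[:100], s_true[:100])  # 100 labeled examples
w_fair = fit_fair_glm(X, y, s_hat, lam=5.0)
print("learned weights:", np.round(w_fair, 3))
```

In this sketch the penalty weight `lam` governs the fairness/accuracy trade-off the key points describe: larger values push the covariance between decision scores and the inferred attribute toward zero, at some cost in predictive accuracy.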