A Generalised Exponentiated Gradient Approach to Enhance Fairness in Binary and Multi-class Classification Tasks

arXiv stat.ML / 3/24/2026


Key Points

  • The paper tackles bias mitigation by framing fair learning as a multi-objective problem balancing prediction effectiveness against multiple linear fairness constraints in multi-class classification.
  • It introduces a Generalised Exponentiated Gradient (GEG) in-processing algorithm designed to improve fairness for both binary and multi-class settings under several fairness definitions.
  • The method is evaluated across seven multi-class and three binary datasets, comparing against six baselines using four effectiveness metrics and three fairness definitions.
  • Results indicate substantial fairness gains, with reported improvements of up to 92%, at the cost of an accuracy decrease of up to 14% relative to baselines.

Abstract

The widespread use of AI and ML models in sensitive areas raises significant concerns about fairness. While the research community has introduced various methods for bias mitigation in binary classification tasks, the issue remains under-explored in multi-class classification settings. To address this limitation, in this paper, we first formulate the problem of fair learning in multi-class classification as a multi-objective problem balancing effectiveness (i.e., prediction correctness) against multiple linear fairness constraints. Next, we propose a Generalised Exponentiated Gradient (GEG) algorithm to solve this task. GEG is an in-processing algorithm that enhances fairness in binary and multi-class classification settings under multiple fairness definitions. We conduct an extensive empirical evaluation of GEG against six baselines across seven multi-class and three binary datasets, using four widely adopted effectiveness metrics and three fairness definitions. GEG outperforms existing baselines, achieving fairness improvements of up to 92% with a decrease in accuracy of up to 14%.
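The paper does not spell out its update rule in this summary, but the core of any exponentiated-gradient reduction of this kind is a multiplicative-weights update on non-negative Lagrange multipliers, one per linear fairness constraint: multipliers attached to violated constraints grow exponentially, steering the learner toward satisfying them. A minimal sketch of that generic update (the function name, learning rate, and toy violation vector below are illustrative, not taken from the paper):

```python
import numpy as np

def exponentiated_gradient_step(lam, violations, eta):
    """One multiplicative-weights update on fairness multipliers.

    lam        -- current non-negative multipliers, one per linear constraint
    violations -- measured constraint violations g_k(h) of the current model
    eta        -- step size
    """
    # Constraints that are violated (g_k > 0) get exponentially upweighted,
    # so the next best-response model is pushed to reduce their violation.
    return lam * np.exp(eta * violations)

# Hypothetical toy run with three linear fairness constraints, where
# constraint 0 is persistently violated by the current model.
lam = np.ones(3)
for _ in range(50):
    violations = np.array([0.2, -0.05, 0.0])  # fixed illustrative values
    lam = exponentiated_gradient_step(lam, violations, eta=0.1)

weights = lam / lam.sum()
print(weights)  # mass concentrates on the violated constraint
```

In the full algorithm these updates alternate with a best-response step that retrains the classifier under the current multipliers, yielding the effectiveness-fairness trade-off the paper evaluates.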