A Generalised Exponentiated Gradient Approach to Enhance Fairness in Binary and Multi-class Classification Tasks
arXiv stat.ML / 3/24/2026
Key Points
- The paper tackles bias mitigation by framing fair learning as a multi-objective problem balancing prediction effectiveness against multiple linear fairness constraints in multi-class classification.
- It introduces a Generalised Exponentiated Gradient (GEG) in-processing algorithm designed to improve fairness in both binary and multi-class settings under several fairness definitions.
- The method is evaluated across seven multi-class and three binary datasets, comparing against six baselines using four effectiveness metrics and three fairness definitions.
- Results indicate substantial fairness gains, with reported improvements of up to 92%, at the cost of accuracy drops of up to 14% relative to baselines.
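To make the multi-objective framing concrete, here is a minimal sketch of an exponentiated-gradient reduction for one fairness definition (demographic parity) on synthetic binary data. This is not the paper's GEG algorithm: it uses an unrestricted per-example best response instead of a hypothesis-class oracle, and all data, constants (`eps`, `B`, `T`, `eta`), and variable names are illustrative assumptions. It only shows the core loop: Lagrange multipliers over linear fairness constraints are updated multiplicatively, and the averaged classifier trades a little accuracy for a much smaller group-rate gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature x, sensitive attribute a, label y correlated with x.
# Group a=1 has a shifted feature distribution, so an unconstrained
# classifier gives the two groups very different positive-prediction rates.
n = 2000
a = rng.integers(0, 2, n)
x = rng.normal(loc=0.8 * a, scale=1.0, size=n)
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

n0, n1 = (a == 0).sum(), (a == 1).sum()
eps, B, T, eta = 0.02, 10.0, 500, 0.1  # slack, multiplier bound, rounds, step size

# Two signed demographic-parity constraints:
#   g1 = P(h=1 | a=0) - P(h=1 | a=1) - eps <= 0
#   g2 = P(h=1 | a=1) - P(h=1 | a=0) - eps <= 0
theta = np.zeros(2)   # log-weights of the two Lagrange multipliers
q_sum = np.zeros(n)   # accumulates predictions for the averaged classifier

for t in range(T):
    w = np.exp(theta)
    lam = B * w / (1.0 + w.sum())  # exponentiated gradient: multipliers on a capped simplex
    # Per-example cost of predicting 1: misclassification cost plus the
    # Lagrangian penalty from the two group-rate constraints.
    cost = (1.0 - 2.0 * y) / n + (lam[0] - lam[1]) * np.where(a == 0, 1.0 / n0, -1.0 / n1)
    q = (cost < 0).astype(float)   # best response: predict 1 wherever it lowers the cost
    g = np.array([q[a == 0].mean() - q[a == 1].mean() - eps,
                  q[a == 1].mean() - q[a == 0].mean() - eps])
    theta += eta * g               # multiplicative-weights update on constraint violations
    q_sum += q

q_bar = q_sum / T  # randomized (averaged) classifier over all rounds
gap_base = abs(y[a == 0].mean() - y[a == 1].mean())          # gap of the unconstrained optimum q = y
gap_fair = abs(q_bar[a == 0].mean() - q_bar[a == 1].mean())  # gap after the EG reduction
err_fair = np.abs(q_bar - y).mean()                          # expected error of the averaged model
print(f"gap: {gap_base:.3f} -> {gap_fair:.3f}, error: {err_fair:.3f}")
```

The same loop extends to the multi-class, multi-constraint setting described in the paper by keeping one multiplier per linear constraint and letting the best-response step solve a cost-sensitive multi-class problem.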