Federated fairness-aware classification under differential privacy
arXiv stat.ML · March 26, 2026
Key Points
- The paper studies how differential privacy and algorithmic fairness interact in federated learning for demographic-disparity-constrained classification.
- It proposes a two-step federated algorithm called FDP-Fair, and a computationally lightweight single-server variant called CDP-Fair.
- Under mild structural assumptions, the authors prove theoretical guarantees covering privacy, fairness, and bounds on excess risk.
- The analysis decomposes the "private fairness-aware excess risk" into four components: the intrinsic classification cost, the cost of privacy in classification, the non-private fairness cost, and the cost of privacy in enforcing fairness.
- Experiments on synthetic and real datasets support the practicality of the proposed methods and validate the theoretical behavior.
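The four-way decomposition described above can be written schematically as a sum of error terms. The notation below is illustrative only (the paper's exact symbols, constants, and rates are not given in this summary): \(\hat{f}_{\mathrm{priv}}\) is the private fairness-constrained classifier, \(R^\ast\) the unconstrained Bayes risk, \(f^\ast_{\mathrm{fair}}\) the best fairness-constrained classifier, and \(\varepsilon\) the differential-privacy budget.

```latex
% Schematic decomposition of the private fairness-aware excess risk.
% Illustrative notation, not the paper's; each \Delta term stands for
% an error contribution that the analysis bounds separately.
\[
\underbrace{R(\hat{f}_{\mathrm{priv}}) - R^\ast}_{\text{excess risk}}
\;\lesssim\;
\underbrace{R(f^\ast_{\mathrm{fair}}) - R^\ast}_{\substack{\text{intrinsic}\\ \text{classification cost}}}
\;+\;
\underbrace{\Delta_{\mathrm{cls}}(\varepsilon)}_{\substack{\text{private}\\ \text{classification cost}}}
\;+\;
\underbrace{\Delta_{\mathrm{fair}}}_{\substack{\text{non-private}\\ \text{fairness cost}}}
\;+\;
\underbrace{\Delta_{\mathrm{fair}}(\varepsilon)}_{\substack{\text{private}\\ \text{fairness cost}}}
\]
```

Intuitively, the first term is the price of the fairness constraint itself, the third is the statistical cost of estimating it, and the two \(\varepsilon\)-dependent terms capture the extra error introduced by privatizing the classification and fairness-estimation steps, respectively.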