Shapley meets Rawls: an integrated framework for measuring and explaining unfairness
arXiv cs.LG / 3/30/2026
Key Points
- The paper proposes an integrated framework that uses Shapley values to both define and explain unfairness rather than treating fairness and explainability as separate topics.
- It aligns this approach with standard group fairness criteria and enables estimating, at inference time, which input features contribute to unfairness.
- The authors extend the method from Shapley values to the Efficient-Symmetric-Linear (ESL) family of values to improve robustness of fairness definitions and reduce computation time.
- In an example using the UCI Census Income dataset, the framework identifies features such as “Age,” “Number of hours,” and “Marital status” as drivers of gender unfairness.
- The authors report faster runtimes than traditional bootstrap tests when detecting feature-level contributions to unfairness.
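The core idea in the key points above, attributing a group fairness gap to individual features via Shapley values, can be illustrated with a toy sketch. Everything below is hypothetical and simplified, not the paper's actual algorithm: it uses synthetic data, a hand-picked linear score model, and a mean-substitution value function, and decomposes the mean-score gap between two groups by Monte Carlo sampling of feature permutations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two groups whose feature distributions differ
# (loosely analogous to features like age or hours worked in the Census example).
n, d = 2000, 3
X0 = rng.normal([0.0, 0.0, 0.0], 1.0, size=(n, d))  # group A = 0
X1 = rng.normal([1.0, 0.5, 0.0], 1.0, size=(n, d))  # group A = 1

w = np.array([2.0, -1.0, 0.5])  # weights of a toy linear score model

def f(X):
    """Toy score model: a fixed linear function of the features."""
    return X @ w

def shapley_gap_contributions(f, X0, X1, n_perm=200, rng=None):
    """Monte Carlo Shapley decomposition of the mean-score gap
    E[f(X) | A=1] - E[f(X) | A=0] into per-feature contributions.

    Simplification: the value function switches features between the
    *group means* rather than averaging over full samples, so this is
    only exact for linear models.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = X0.shape[1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        x = mu0.copy()                   # start from the group-0 mean profile
        prev = f(x[None, :])[0]
        for j in order:
            x[j] = mu1[j]                # switch feature j to its group-1 mean
            cur = f(x[None, :])[0]
            phi[j] += cur - prev         # marginal contribution of feature j
            prev = cur
    return phi / n_perm

phi = shapley_gap_contributions(f, X0, X1, rng=rng)
```

By the efficiency property, the contributions `phi` sum exactly to the overall gap `f(mu1) - f(mu0)`; for this linear model each `phi[j]` reduces to `w[j] * (mu1[j] - mu0[j])`, so features whose distributions differ most between groups (and carry large weight) are flagged as the main drivers of the gap.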