Decomposing Discrimination: Causal Mediation Analysis for AI-Driven Credit Decisions
arXiv cs.LG, March 31, 2026
Key Points
- The paper argues that standard statistical fairness metrics in AI credit scoring mix two causally distinct pathways: direct discrimination from protected attributes to outcomes, and indirect effects via financial mediators reflecting structural inequality.
- It formalizes discrimination decomposition using Pearl-style natural direct/indirect effects for credit decisions, focusing on identification under treatment-induced confounding where protected attributes affect both mediators and the final decision.
- The authors show interventional direct/indirect effects (IDE/IIE) are identifiable under a weaker Modified Sequential Ignorability assumption, and that IDE/IIE can conservatively bound the otherwise-unidentified natural effects under a monotone indirect treatment response.
- They introduce a doubly-robust augmented inverse probability weighted (AIPW) estimator with cross-fitting, plus E-value sensitivity analysis for residual direct-path confounding.
- Using 89,465 HMDA mortgage applications from New York (2022), the study finds that about 77% of a 7.9-percentage-point racial denial disparity is mediated through financially relevant features, leaving the remaining 23% as a conservative lower bound on direct discrimination. The authors also release CausalFair, an open-source Python package, for deployment.
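The headline decomposition follows directly from the summary's numbers: the total disparity splits additively into an interventional indirect effect (the mediated share) and an interventional direct effect (the remainder). A minimal sketch of that arithmetic, using the reported HMDA figures; the variable names are illustrative and not part of the CausalFair API:

```python
# Hedged sketch: additive decomposition of a total disparity into
# interventional indirect (IIE) and direct (IDE) effects, per the
# summary's reported figures. Names here are illustrative only.

total_disparity = 7.9   # racial denial gap, in percentage points
mediated_share = 0.77   # fraction flowing through financial mediators

iie = mediated_share * total_disparity  # indirect (mediated) effect
ide = total_disparity - iie             # direct effect: conservative
                                        # lower bound on direct discrimination

print(f"IIE ≈ {iie:.1f} pts, IDE ≈ {ide:.1f} pts")
# IIE ≈ 6.1 pts, IDE ≈ 1.8 pts
```

The 1.8-point remainder is roughly 23% of the gap, matching the paper's lower-bound claim on direct discrimination.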