When Fairness Metrics Disagree: Evaluating the Reliability of Demographic Fairness Assessment in Machine Learning
arXiv cs.LG / 4/17/2026
Key Points
- The paper argues that fairness evaluation in machine learning can be unreliable because different fairness metrics measure different statistical properties and may contradict each other for the same model.
- Using face recognition as a controlled setting, the authors evaluate models across multiple demographic group partitions with a variety of commonly used fairness metrics, including error-rate disparities and performance-based measures.
- The study finds that fairness conclusions can change substantially depending on the metric selected, producing conflicting determinations about whether a model is biased.
- To quantify this inconsistency, the authors propose the Fairness Disagreement Index (FDI) and show that fairness disagreement remains high across decision thresholds and model configurations (see the sketch after this list).
- The results suggest that a single fairness metric is insufficient for trustworthy bias assessment and that multi-metric reporting is needed for reliability in high-stakes domains.
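
The paper's exact FDI formula is not reproduced in the points above, so the following is a minimal Python sketch of the general idea, not the authors' implementation. It computes three common group-fairness quantities (selection rate, false-positive rate, false-negative rate) on synthetic binary-classification data at several decision thresholds, then reports the fraction of metric pairs that disagree about which demographic group is disadvantaged. All function names, the disagreement definition, and the toy data are assumptions for illustration.

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate, false-positive rate, and false-negative rate."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        sel = yp.mean()  # selection rate (demographic-parity ingredient)
        fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan
        fnr = (1 - yp[yt == 1]).mean() if (yt == 1).any() else np.nan
        rates[g] = {"selection": sel, "fpr": fpr, "fnr": fnr}
    return rates

def worst_group(rates, metric):
    """Group a single metric flags as disadvantaged:
    lowest selection rate, or highest error rate."""
    if metric == "selection":
        return min(rates, key=lambda g: rates[g][metric])
    return max(rates, key=lambda g: rates[g][metric])

def fairness_disagreement_index(y_true, scores, groups, thresholds):
    """Hypothetical stand-in for the paper's FDI: the fraction of
    (threshold, metric-pair) cases where two metrics point to
    different disadvantaged groups."""
    metrics = ["selection", "fpr", "fnr"]
    disagree, total = 0, 0
    for t in thresholds:
        rates = group_rates(y_true, (scores >= t).astype(int), groups)
        flagged = {m: worst_group(rates, m) for m in metrics}
        for i in range(len(metrics)):
            for j in range(i + 1, len(metrics)):
                total += 1
                disagree += flagged[metrics[i]] != flagged[metrics[j]]
    return disagree / total

# Toy demo: synthetic scores with a mild group-dependent shift
rng = np.random.default_rng(0)
n = 2000
groups = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
scores = np.clip(rng.normal(0.5 + 0.05 * groups + 0.2 * y_true, 0.2), 0, 1)

fdi = fairness_disagreement_index(y_true, scores, groups, np.linspace(0.3, 0.7, 9))
print(f"metric-pair disagreement across thresholds: {fdi:.2f}")
```

Sweeping thresholds rather than checking a single operating point mirrors the paper's observation that disagreement persists across decision thresholds; a real assessment would use the full metric set and demographic partitions the authors study, but the pairwise-disagreement counting above makes the "metrics can contradict each other" claim concrete.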

