A Framework for Exploring and Disentangling Intersectional Bias: A Case Study in Fetal Ultrasound
arXiv cs.LG / 5/6/2026
Key Points
- The paper argues that in image-based medical AI tasks like fetal ultrasound, performance gaps may persist even with adequate demographic representation because accuracy depends heavily on image quality.
- It proposes a structured framework to detect intersectional bias by combining unsupervised slice discovery, factor-wise analysis, and targeted intersectional evaluation to disentangle demographic, clinical, and acquisition influences.
- Using 94,000+ fetal ultrasound images, the study analyzes bias in both a state-of-the-art deep learning model and the Hadlock regression standard and finds that pixel spacing (PS) is a consistent driver of performance differences.
- The authors report that higher PS can yield improvements of up to ~24% for certain subgroups, but because sonographers often adjust PS for high maternal BMI or low gestational age (GA), the PS effect risks being confounded with those demographic and clinical factors.
- Their intersectional results suggest some of the PS-related signal is explained by GA, while PS improvements remain across BMI groups, underscoring the need for acquisition-aware and interaction-aware fairness evaluation in medical AI.
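The contrast between factor-wise and intersectional evaluation described above can be sketched in a few lines. This is an illustrative toy example, not the authors' code: the subgroup labels, error values, and binning are invented, and the point is only to show how a marginal performance gap along one factor (pixel spacing, PS) can be re-examined within strata of a second factor (GA) to check whether the gap persists or is partly explained by the interaction.

```python
# Hypothetical sketch of factor-wise vs. intersectional error analysis.
# All subgroup names and error values below are illustrative toy data,
# not results from the paper.

from statistics import mean
from itertools import groupby

# Toy records: (pixel_spacing_bin, ga_bin, absolute_error)
records = [
    ("high_PS", "low_GA", 0.30), ("high_PS", "low_GA", 0.28),
    ("high_PS", "high_GA", 0.10), ("high_PS", "high_GA", 0.12),
    ("low_PS",  "low_GA", 0.32), ("low_PS",  "low_GA", 0.34),
    ("low_PS",  "high_GA", 0.20), ("low_PS",  "high_GA", 0.22),
]

def mean_error_by(records, key):
    """Mean absolute error per subgroup defined by `key`."""
    rows = sorted(records, key=key)
    return {k: mean(r[2] for r in g) for k, g in groupby(rows, key=key)}

# Factor-wise (marginal) view: error by PS alone.
marginal = mean_error_by(records, key=lambda r: r[0])

# Intersectional view: error by (PS, GA) jointly.
joint = mean_error_by(records, key=lambda r: (r[0], r[1]))

print(marginal)  # marginal PS gap
print(joint)     # PS gap within each GA stratum
```

In this toy data the marginal high-vs-low PS gap differs from the within-stratum gaps once GA is held fixed, which is the kind of signal the framework uses to decide whether an acquisition factor drives performance directly or via its correlation with a clinical factor.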