Investigating Data Interventions for Subgroup Fairness: An ICU Case Study

arXiv cs.LG / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies how “data fixing” interventions can fail or backfire when training data comes from multiple sources with distribution shifts, leading to volatile subgroup fairness outcomes.
  • Using an ICU/healthcare setting with EHR-derived datasets (the eICU Collaborative Research Database and MIMIC-IV), the authors find that adding data can either improve or worsen both subgroup fairness and overall performance.
  • The research shows that many intuitive data-selection strategies are unreliable for subgroup outcomes, especially when added data introduces new biases or shifts.
  • It compares data-centric addition approaches with model-based post-hoc calibration and concludes that combining both is important for improving subgroup performance.
  • The findings challenge the common belief that “better data alone” is sufficient to address fairness problems in machine-learning decision systems.
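The core phenomenon behind these points can be illustrated with a small experiment. The sketch below is hypothetical and not the paper's actual pipeline: it trains a classifier on a base "hospital" population, then mixes in a larger pool from a distribution-shifted source (different feature distribution and subgroup mix) and remeasures per-subgroup accuracy on a fixed test set. Depending on the shift, the added data can move subgroup accuracies in either direction, which is exactly the volatility the paper describes. All names and the synthetic data generator are assumptions for illustration.

```python
# Illustrative sketch (not the paper's pipeline): per-subgroup performance
# before and after adding data from a distribution-shifted source.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_source(n, shift, group_frac):
    """Synthetic 'hospital': 2 features, binary label, binary subgroup."""
    g = (rng.random(n) < group_frac).astype(int)   # subgroup membership
    X = rng.normal(loc=shift, size=(n, 2))         # shifted feature distribution
    logits = X[:, 0] + 0.5 * X[:, 1] - 0.3 * g     # group-dependent outcome model
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y, g

def subgroup_acc(model, X, y, g):
    """Accuracy computed separately for each subgroup."""
    return {k: model.score(X[g == k], y[g == k]) for k in (0, 1)}

# Base training pool and a test set drawn from the same population.
X_tr, y_tr, g_tr = make_source(2000, shift=0.0, group_frac=0.3)
X_te, y_te, g_te = make_source(2000, shift=0.0, group_frac=0.3)

base = LogisticRegression().fit(X_tr, y_tr)
print("base :", subgroup_acc(base, X_te, y_te, g_te))

# "Data fixing": add a larger pool from a shifted source (another hospital
# with a different population and subgroup mix), then retrain.
X_add, y_add, g_add = make_source(4000, shift=1.5, group_frac=0.7)
mixed = LogisticRegression().fit(np.vstack([X_tr, X_add]),
                                 np.concatenate([y_tr, y_add]))
print("mixed:", subgroup_acc(mixed, X_te, y_te, g_te))
```

Because the added pool changes both the feature distribution and the subgroup balance, the extra sample size does not guarantee that either subgroup's accuracy improves.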

Abstract

In high-stakes settings where machine learning models are used to automate decision-making about individuals, the presence of algorithmic bias can exacerbate systemic harm to certain subgroups of people. These biases often stem from the underlying training data. In practice, interventions to "fix the data" depend on the actual additional data sources available -- where many are less than ideal. In these cases, the effects of data scaling on subgroup performance become volatile, as the improvements from increased sample size are counteracted by the introduction of distribution shifts in the training set. In this paper, we investigate the limitations of combining data sources to improve subgroup performance within the context of healthcare. Clinical models are commonly trained on datasets comprised of patient electronic health record (EHR) data from different hospitals or admission departments. Across two such datasets, the eICU Collaborative Research Database and the MIMIC-IV dataset, we find that data addition can both help and hurt model fairness and performance, and many intuitive strategies for data selection are unreliable. We compare model-based post-hoc calibration and data-centric addition strategies to find that the combination of both is important to improve subgroup performance. Our work questions the traditional dogma of "better data" for overcoming fairness challenges by comparing and combining data- and model-based approaches.
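The model-based intervention the abstract contrasts with data addition, post-hoc calibration, can also be sketched. The following is a hypothetical group-wise calibration example, not the authors' code: a base model's scores are assumed well calibrated on one subgroup but systematically miscalibrated on the other, and one isotonic-regression calibrator is fit per subgroup on held-out data. The data generator, group structure, and ECE metric here are all illustrative assumptions.

```python
# Illustrative sketch of group-wise post-hoc calibration (isotonic
# regression fit separately per subgroup); not the paper's implementation.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

n = 3000
g = (rng.random(n) < 0.4).astype(int)       # subgroup membership
p_true = rng.random(n)                      # true event probabilities
y = (rng.random(n) < p_true).astype(int)    # observed binary outcomes
# Hypothetical base-model scores: calibrated on group 0,
# systematically miscalibrated (overconfidently low) on group 1.
raw = np.where(g == 1, p_true ** 2, p_true)

def ece(p, y, bins=10):
    """Expected calibration error over equal-width probability bins."""
    idx = np.minimum((p * bins).astype(int), bins - 1)
    return sum((idx == b).mean() * abs(p[idx == b].mean() - y[idx == b].mean())
               for b in range(bins) if (idx == b).any())

# Fit one calibrator per subgroup on the first half; evaluate on the second.
half = n // 2
cals = {k: IsotonicRegression(out_of_bounds="clip")
            .fit(raw[:half][g[:half] == k], y[:half][g[:half] == k])
        for k in (0, 1)}
raw_te, y_te, g_te = raw[half:], y[half:], g[half:]
cal_te = np.where(g_te == 1, cals[1].predict(raw_te), cals[0].predict(raw_te))

for k in (0, 1):
    print(f"group {k}: ECE {ece(raw_te[g_te == k], y_te[g_te == k]):.3f} -> "
          f"{ece(cal_te[g_te == k], y_te[g_te == k]):.3f}")
```

A calibrator can repair group-specific miscalibration in the model's scores, but it cannot add information the training data never contained, which is consistent with the paper's finding that the data- and model-based interventions are complementary rather than interchangeable.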