AI Navigate

A Model Ensemble-Based Post-Processing Framework for Fairness-Aware Prediction

arXiv cs.LG / 3/20/2026


Key Points

  • The paper proposes a post-processing framework based on model ensembling to enable fairness-aware prediction across tasks.
  • The framework is model-internals agnostic, allowing use with a wide range of models, architectures, and fairness definitions.
  • The authors validate the approach with experiments in classification, regression, and survival analysis, showing improved fairness with minimal impact on predictive accuracy.
  • The results indicate broad applicability for fairness-oriented ML in practice without requiring changes to underlying training procedures.
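The paper's exact ensembling procedure is not spelled out here, so as a minimal illustration of the general idea only — combining base models' outputs post hoc, using nothing but predictions and group labels, to trade predictive accuracy against a fairness criterion — here is a hypothetical sketch. The function names, the choice of demographic parity as the fairness measure, and the grid search over convex weights are all assumptions for illustration, not the authors' method:

```python
import numpy as np

def dp_gap(scores, groups, threshold=0.5):
    """Demographic-parity gap: spread in positive-prediction rates across groups."""
    preds = (scores >= threshold).astype(float)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def fair_ensemble_weights(val_preds, y_val, groups, lam=1.0, n_grid=21):
    """Hypothetical post-processing step: grid-search a convex weight over two
    base models' predicted scores on a validation set, minimizing
    (1 - accuracy) + lam * fairness_gap. Purely post hoc and model-internals
    agnostic -- only the models' output scores are ever touched."""
    best_w, best_obj = 0.5, np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        scores = w * val_preds[0] + (1 - w) * val_preds[1]
        acc = ((scores >= 0.5) == y_val).mean()
        obj = (1 - acc) + lam * dp_gap(scores, groups)
        if obj < best_obj:
            best_obj, best_w = obj, w
    return best_w
```

Because the search touches only predicted scores, any classifier, regressor, or survival model that emits scores could be plugged in, and `dp_gap` could be swapped for a different fairness definition without retraining anything.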

Abstract

Striking an optimal balance between predictive performance and fairness continues to be a fundamental challenge in machine learning. In this work, we propose a post-processing framework that facilitates fairness-aware prediction by leveraging model ensembling. Designed to operate independently of any specific model internals, our approach is widely applicable across various learning tasks, model architectures, and fairness definitions. Through extensive experiments spanning classification, regression, and survival analysis, we demonstrate that the framework effectively enhances fairness while maintaining, or only minimally affecting, predictive accuracy.