First-See-Then-Design: A Multi-Stakeholder View for Optimal Performance-Fairness Trade-Offs

arXiv cs.LG / 4/16/2026


Key Points

  • The paper argues that algorithmic fairness should not be defined solely in predictive space (e.g., demographic parity), because predictions ultimately drive decisions that determine welfare for both decision-makers (DMs) and decision subjects (DSs) across groups.
  • It proposes a multi-stakeholder framework based on welfare economics and distributive justice, defining fairness through a social planner’s utility that reflects inequality in DS utilities under justice principles such as Egalitarian and Rawlsian.
  • Fair decision-making is formulated as a post-hoc multi-objective optimization over decision policies, mapping achievable trade-offs in a two-dimensional utility space (DM utility vs. social planner utility) and comparing policy classes like deterministic vs. stochastic and shared vs. group-specific.
  • The authors derive conditions under which stochastic policies outperform deterministic ones and show empirically that simple stochastic policies can improve the performance–fairness trade-off by exploiting outcome uncertainty.
  • Overall, the work advocates a shift from prediction-centric fairness metrics to a transparent, justice-based, multi-stakeholder design process for decision policies.
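The multi-stakeholder evaluation described above can be sketched in a few lines of code. The utility functions below are illustrative assumptions, not the paper's actual definitions: the DM is assumed to gain from correct positive decisions and lose from incorrect ones, each decision subject is assumed to benefit from a positive decision, and the social planner's utility is the worst-off group's welfare (Rawlsian) or mean welfare penalized by inter-group spread (a stand-in for an Egalitarian notion).

```python
import numpy as np

def dm_utility(decisions, outcomes, benefit=1.0, cost=0.5):
    """Hypothetical DM utility: average benefit from correct positive
    decisions minus cost from incorrect positive decisions."""
    return np.mean(decisions * (benefit * outcomes - cost * (1 - outcomes)))

def ds_group_utilities(decisions, groups, gain=1.0):
    """Mean DS utility per group, assuming each subject gains a fixed
    amount from a positive decision."""
    return np.array([gain * decisions[groups == g].mean()
                     for g in np.unique(groups)])

def planner_utility(group_utils, notion="rawlsian"):
    """Social planner utility under two illustrative justice notions:
    Rawlsian = welfare of the worst-off group;
    'egalitarian' here = mean welfare minus the inter-group spread."""
    if notion == "rawlsian":
        return group_utils.min()
    return group_utils.mean() - (group_utils.max() - group_utils.min())

# Toy population with group-dependent scores and a shared deterministic
# threshold policy; each policy maps to one point in the 2-D utility space.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
scores = np.clip(rng.uniform(size=1000) + 0.1 * groups, 0, 1)
outcomes = (rng.uniform(size=1000) < scores).astype(int)
decisions = (scores > 0.6).astype(int)

g_utils = ds_group_utilities(decisions, groups)
point = (dm_utility(decisions, outcomes), planner_utility(g_utils))
```

Evaluating many candidate policies this way, and keeping the non-dominated `(DM utility, planner utility)` points, yields the achievable trade-off frontier the paper studies.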

Abstract

Fairness in algorithmic decision-making is often defined in the predictive space, where predictive performance, used as a proxy for decision-maker (DM) utility, is traded off against prediction-based fairness notions such as demographic parity or equality of opportunity. This perspective, however, ignores how predictions translate into decisions and ultimately into utilities and welfare for both the DM and decision subjects (DS), as well as their allocation across socially salient groups. In this paper, we propose a multi-stakeholder framework for fair algorithmic decision-making grounded in welfare economics and distributive justice, explicitly modeling the utilities of both the DM and DS, and defining fairness via a social planner's utility that captures inequalities in DS utilities across groups under different justice-based fairness notions (e.g., Egalitarian, Rawlsian). We formulate fair decision-making as a post-hoc multi-objective optimization problem, characterizing the achievable performance-fairness trade-offs in the two-dimensional utility space of DM utility and the social planner's utility, under different decision policy classes (deterministic vs. stochastic, shared vs. group-specific). Using the proposed framework, we then identify conditions (in terms of the stakeholders' utilities) under which stochastic policies outperform deterministic ones, and empirically demonstrate that simple stochastic policies can yield superior performance-fairness trade-offs by leveraging outcome uncertainty. Overall, we advocate a shift from prediction-centric fairness to a transparent, justice-based, multi-stakeholder approach that supports the collaborative design of decision-making policies.
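A group-specific stochastic policy of the kind the abstract mentions can be sketched as follows. Everything here is an illustrative assumption rather than the paper's construction: subjects have uncertain outcomes with hypothetical success probabilities, the DM's expected gain per acceptance is `p_success - 0.5`, each group's welfare is its acceptance rate, and the planner utility is Rawlsian (the worst-off group's welfare). Sweeping one group's acceptance probability traces a one-parameter slice of the achievable trade-off frontier.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
groups = rng.integers(0, 2, size=n)
# hypothetical outcome uncertainty: success probabilities, lower on
# average for group 1
p_success = np.clip(rng.beta(2, 2, size=n) - 0.15 * groups, 0, 1)

def evaluate(accept_prob):
    """Expected utilities of a stochastic policy that accepts each
    subject with a group-specific probability. Returns (DM utility,
    Rawlsian planner utility = worst-off group's acceptance rate)."""
    a = accept_prob[groups]
    dm = np.mean(a * (p_success - 0.5))
    group_welfare = np.array([a[groups == g].mean() for g in (0, 1)])
    return dm, group_welfare.min()

# fix group 0's acceptance probability and sweep group 1's, tracing a
# slice of the performance-fairness trade-off in the 2-D utility space
frontier = [evaluate(np.array([0.7, q])) for q in np.linspace(0, 1, 11)]
```

Because the policy acts on acceptance *probabilities* rather than hard thresholds, it can reach intermediate points in the utility space that no deterministic policy in the same class attains, which is the mechanism behind the paper's claim that simple stochastic policies can improve the trade-off.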