AI Navigate

Fair Learning for Bias Mitigation and Quality Optimization in Paper Recommendation

arXiv cs.AI / 3/13/2026


Key Points

  • Fair-PaperRec is a multi-layer perceptron (MLP)–based model designed to reduce demographic biases in post-review paper acceptance decisions while preserving high-quality criteria.
  • It introduces intersectional fairness constraints (e.g., race, country) and a customized fairness loss to penalize disparities instead of relying on heuristic adjustments.
  • Evaluations on conference data from SIGCHI, DIS, and IUI show a 42.03% increase in participation by underrepresented groups alongside a 3.16% gain in overall utility, indicating diversity can be promoted without compromising rigor.
  • The approach aims to enable equity-focused peer review solutions and could influence future research on bias mitigation in scholarly publishing.

Abstract

Despite the widespread use of double-blind review, demographic biases against authors still disadvantage underrepresented groups. We present Fair-PaperRec, a multilayer perceptron (MLP)-based model that addresses demographic disparities in post-review paper acceptance decisions while maintaining high-quality requirements. In contrast to heuristic approaches, our methodology penalizes demographic disparities while preserving quality through intersectional criteria (e.g., race, country) and a customized fairness loss. Evaluations on conference data from the ACM Special Interest Group on Computer-Human Interaction (SIGCHI), Designing Interactive Systems (DIS), and Intelligent User Interfaces (IUI) show a 42.03% increase in underrepresented group participation and a 3.16% improvement in overall utility, indicating that promoting diversity does not compromise academic rigor and can support equity-focused peer review solutions.
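To make the idea of a "customized fairness loss" concrete, here is a minimal sketch, not the paper's exact formulation: a standard utility term (binary cross-entropy over an MLP's acceptance logits) plus a penalty on the gap in mean predicted acceptance rates across intersectional groups. The function name, the max-minus-min disparity penalty, and the weighting parameter `lam` are all illustrative assumptions.

```python
import numpy as np

def fairness_regularized_loss(scores, labels, group_ids, lam=1.0):
    """Hypothetical sketch of a fairness-aware loss (assumed form, not the
    paper's): binary cross-entropy utility plus a penalty on the spread of
    mean predicted acceptance across intersectional groups."""
    probs = 1.0 / (1.0 + np.exp(-scores))  # sigmoid over MLP logits
    eps = 1e-9
    bce = -np.mean(labels * np.log(probs + eps)
                   + (1 - labels) * np.log(1 - probs + eps))
    # Mean predicted acceptance per intersectional group (e.g., race x country)
    groups = np.unique(group_ids)
    group_rates = np.array([probs[group_ids == g].mean() for g in groups])
    # Penalize disparity: gap between the best- and worst-treated groups
    disparity = group_rates.max() - group_rates.min()
    return bce + lam * disparity

# Toy example: six papers belonging to two intersectional groups
scores = np.array([2.0, 1.5, -0.5, 0.3, -1.0, 0.8])
labels = np.array([1, 1, 0, 1, 0, 1])
groups = np.array([0, 0, 0, 1, 1, 1])
loss_fair = fairness_regularized_loss(scores, labels, groups, lam=1.0)
loss_plain = fairness_regularized_loss(scores, labels, groups, lam=0.0)
```

With `lam > 0`, any gap between group-level acceptance rates raises the loss, so gradient descent is pushed toward decisions that are both accurate and demographically balanced, rather than relying on post hoc heuristic adjustments.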