A Job I Like or a Job I Can Get: Designing Job Recommender Systems Using Field Experiments

arXiv stat.ML / 3/24/2026


Key Points

  • The paper argues that job recommender systems deployed on online platforms are often optimized for predictive outcomes (e.g., clicks or applications) rather than for job seekers’ welfare.
  • It proposes a job-search model where a vacancy’s value depends on both worker utility and the probability an application succeeds, implying welfare-optimal rankings via an expected-surplus index.
  • The study shows that rankings based only on utility, hiring probabilities, or observed application behavior are generally suboptimal due to an inversion problem between behavior signals and welfare.
  • Using two randomized field experiments with France’s public employment service, the authors test these theoretical predictions, estimate the model, and measure welfare-relevant metrics.
  • The welfare-informed recommender algorithm substantially outperforms existing approaches and comes close to the welfare-optimal benchmark, demonstrating the practical value of combining predictive tools with experimental evaluation.
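The inversion problem described above can be illustrated with a stylized sketch. The numbers and the simple product index below are hypothetical, not the paper's estimated model: assume each vacancy has a worker utility `u` and an application-success probability `p`, and take `u * p` as a toy expected-surplus index. A vacancy can then top the utility ranking or the probability ranking without topping the surplus ranking.

```python
# Stylized illustration (made-up numbers, not the paper's estimates):
# each vacancy j has a worker utility u_j and a success probability p_j.
# As a toy expected-surplus index, take s_j = u_j * p_j.
vacancies = {
    "A": {"u": 0.9, "p": 0.2},  # attractive but hard to get
    "B": {"u": 0.3, "p": 0.9},  # easy to get but less attractive
    "C": {"u": 0.6, "p": 0.6},  # middling on both dimensions
}

def rank(score):
    """Return vacancy ids sorted by a scoring function, best first."""
    return sorted(vacancies, key=lambda j: score(vacancies[j]), reverse=True)

by_utility = rank(lambda v: v["u"])                    # ['A', 'C', 'B']
by_probability = rank(lambda v: v["p"])                # ['B', 'C', 'A']
by_surplus = rank(lambda v: v["u"] * v["p"])           # ['C', 'B', 'A']
```

Neither single-signal ranking recovers the surplus ranking: utility alone puts A first even though applications to A rarely succeed (s_A = 0.18), while C, mediocre on both signals, has the highest expected surplus (s_C = 0.36). This is the sense in which rankings based on one dimension are generically suboptimal.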

Abstract

Recommendation systems (RSs) are increasingly used to guide job seekers on online platforms, yet the algorithms currently deployed are typically optimized for predictive objectives such as clicks, applications, or hires, rather than job seekers' welfare. We develop a job-search model with an application stage in which the value of a vacancy depends on two dimensions: the utility it delivers to the worker and the probability that an application succeeds. The model implies that welfare-optimal RSs rank vacancies by an expected-surplus index combining both, and shows why rankings based solely on utility, hiring probabilities, or observed application behavior are generically suboptimal, an instance of the inversion problem between behavior and welfare. We test these predictions and quantify their practical importance through two randomized field experiments conducted with the French public employment service. The first experiment, comparing existing algorithms and their combinations, provides behavioral evidence that both dimensions shape application decisions. Guided by the model and these results, the second experiment extends the comparison to an RS designed to approximate the welfare-optimal ranking. The experiments generate exogenous variation in the vacancies shown to job seekers, allowing us to estimate the model, validate its behavioral predictions, and construct a welfare metric. Algorithms informed by the model-implied optimal ranking substantially outperform existing approaches and perform close to the welfare-optimal benchmark. Our results show that embedding predictive tools within a simple job-search framework and combining it with experimental evidence yields recommendation rules with substantial welfare gains in practice.