A Job I Like or a Job I Can Get: Designing Job Recommender Systems Using Field Experiments
arXiv stat.ML / 3/24/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that job recommender systems on online platforms are typically optimized for predicted engagement (e.g., clicks or applications) rather than for job seekers’ welfare.
- It proposes a job-search model in which a vacancy’s value to a worker depends on both the utility the job would deliver and the probability that an application succeeds, implying that welfare-optimal rankings should sort vacancies by an expected-surplus index (see the sketch after this list).
- The study shows that rankings based only on utility, only on hiring probabilities, or on observed application behavior are generally suboptimal because of an inversion between behavioral signals and welfare: the vacancies that attract the most applications are not necessarily those offering the highest expected surplus.
- Using two randomized field experiments with France’s public employment service, the authors test these theoretical predictions, estimate the model, and measure welfare-relevant metrics.
- The welfare-informed recommender algorithm substantially outperforms existing approaches and comes close to the welfare-optimal benchmark, demonstrating the practical value of combining predictive tools with experimental evaluation.
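
To make the expected-surplus idea concrete, here is a minimal sketch, not the paper’s implementation. It assumes the index takes the multiplicative form u_j × p_j, so a vacancy’s utility counts only to the extent that an application there is likely to succeed; the paper’s exact index and estimation procedure may differ, and the names `Vacancy`, `expected_surplus`, and `rank_vacancies` are hypothetical.

```python
# Sketch of ranking vacancies by an expected-surplus index.
# Assumption: s_j = u_j * p_j, where u_j is the worker's utility from
# vacancy j and p_j is the probability an application to j succeeds.

from dataclasses import dataclass


@dataclass
class Vacancy:
    vacancy_id: str
    utility: float    # estimated worker utility u_j
    hire_prob: float  # estimated success probability p_j


def expected_surplus(v: Vacancy) -> float:
    """Expected surplus of applying: utility is realized only if hired."""
    return v.utility * v.hire_prob


def rank_vacancies(vacancies: list[Vacancy]) -> list[Vacancy]:
    """Welfare-oriented ranking: sort by expected surplus, descending."""
    return sorted(vacancies, key=expected_surplus, reverse=True)


if __name__ == "__main__":
    jobs = [
        Vacancy("attractive_but_competitive", utility=0.9, hire_prob=0.05),
        Vacancy("moderate_fit", utility=0.6, hire_prob=0.30),
        Vacancy("easy_to_get", utility=0.3, hire_prob=0.50),
    ]
    for v in rank_vacancies(jobs):
        print(f"{v.vacancy_id}: surplus = {expected_surplus(v):.3f}")
    # Illustrates the inversion: the highest-utility vacancy ranks last
    # here, because its low hiring probability drags down expected surplus.
```

With these illustrative numbers, the most attractive vacancy ranks last (0.9 × 0.05 = 0.045), which is the inversion between application behavior and welfare that the paper highlights.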