AI Navigate

Gender Bias in Generative AI-assisted Recruitment Processes

arXiv cs.AI / 3/13/2026


Key Points

  • The study evaluates how GPT-5 suggests occupations for candidates based on gender and work experience, focusing on under-35 Italian graduates.
  • It uses 24 simulated candidate profiles balanced by gender, age, experience, and field to probe potential biases in job suggestions.
  • The results show no significant differences in job titles or industries, but reveal gendered linguistic patterns, with women described using emotional traits and men with strategic traits.
  • The findings raise ethical questions about using GenAI in sensitive recruitment processes and call for transparency and fairness in future digital labor markets.
  • The findings carry serious implications for how AI tools could reproduce or amplify gender bias in hiring, and underscore the need for bias-mitigation strategies before deployment.
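The balanced design described above can be sketched as a small factorial grid. The paper only states that the 24 profiles are balanced by gender, age, experience, and field; the specific factor levels below (two experience levels, six professional fields) are illustrative assumptions that happen to yield 24 combinations.

```python
from itertools import product

# Assumed factor levels -- the paper does not enumerate them,
# only that the 24 profiles are balanced across these dimensions.
genders = ["female", "male"]
experience = ["entry-level", "3+ years"]
fields = ["engineering", "economics", "humanities",
          "law", "natural sciences", "healthcare"]

# Full crossing of the factors gives one profile per combination.
profiles = [
    {"gender": g, "age": "under 35", "experience": e, "field": f}
    for g, e, f in product(genders, experience, fields)
]

assert len(profiles) == 24  # 2 genders x 2 experience levels x 6 fields
```

A fully crossed design like this ensures that any difference in the model's suggestions cannot be attributed to an imbalance in the prompt set itself.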

Abstract

In recent years, generative artificial intelligence (GenAI) systems have assumed increasingly crucial roles in selection processes, personnel recruitment, and the analysis of candidates' profiles. However, the employment of large language models (LLMs) risks reproducing, and in some cases amplifying, gender stereotypes and biases already present in the labour market. The objective of this paper is to evaluate and measure this phenomenon, analysing how a state-of-the-art generative model (GPT-5) suggests occupations based on gender and work-experience background, focusing on under-35-year-old Italian graduates. The model was prompted to suggest jobs for 24 simulated candidate profiles, balanced in terms of gender, age, experience, and professional field. Although no significant differences emerged in suggested job titles or industries, gendered linguistic patterns emerged in the adjectives attributed to female and male candidates, indicating a tendency of the model to associate women with emotional and empathetic traits, and men with strategic and analytical ones. The research raises ethical questions regarding the use of these models in sensitive processes, highlighting the need for transparency and fairness in future digital labour markets.
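The linguistic analysis reported in the abstract (emotional/empathetic adjectives for women vs. strategic/analytical ones for men) can be approximated with a simple lexicon-based tally. The trait categories come from the paper, but the word lists and the counting approach below are illustrative assumptions, not the authors' actual method.

```python
from collections import Counter

# Hypothetical adjective lexicons for the two trait categories the
# study reports; the real analysis may use a different vocabulary
# or a more sophisticated annotation scheme.
EMOTIONAL = {"empathetic", "caring", "supportive", "warm", "sensitive"}
STRATEGIC = {"strategic", "analytical", "decisive", "ambitious", "driven"}

def trait_counts(text: str) -> Counter:
    """Count emotional vs. strategic trait adjectives in one model output."""
    counts = Counter()
    for token in text.split():
        word = token.strip(".,;:!?").lower()
        if word in EMOTIONAL:
            counts["emotional"] += 1
        elif word in STRATEGIC:
            counts["strategic"] += 1
    return counts
```

Aggregating such counts over the model's responses for female and male profiles, and comparing the two distributions, is one simple way to surface the kind of gendered linguistic pattern the study describes.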