Bias in the Tails: How Name-conditioned Evaluative Framing in Resume Summaries Destabilizes LLM-based Hiring

arXiv cs.CL / April 23, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study examines whether LLM-generated resume candidate summaries introduce bias via name-conditioned evaluative framing, beyond previously documented name-based bias in hiring and salary outputs.
  • Using nearly one million summaries from four models with systematic race-gender name perturbations, the researchers separate each summary into resume-grounded factual content and evaluative language to pinpoint where bias arises.
  • They find factual content is largely stable across perturbations, while evaluative language shows subtle, name-conditioned shifts concentrated in the extremes of the distribution—particularly for open-source models.
  • A hiring simulation shows that evaluative summaries can convert directional harm into a form of symmetric instability that may bypass standard fairness audits, suggesting a pathway by which bias can propagate in LLM-to-LLM automation.

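The tail-concentrated pattern described above can be made concrete with a small sketch. The paper does not publish its scoring code, so the following is a hypothetical illustration: two name groups receive evaluative-framing scores with identical means but different spread, so a mean-based comparison looks clean while extreme quantiles diverge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evaluative-framing scores for summaries under two
# name groups. Means match by construction; group B has heavier
# tails -- the shape of bias the study reports.
group_a = rng.normal(loc=0.0, scale=1.0, size=100_000)
group_b = rng.normal(loc=0.0, scale=1.4, size=100_000)

# A mean-difference audit sees almost nothing...
mean_gap = abs(group_a.mean() - group_b.mean())

# ...but comparing extreme quantiles exposes the tail divergence.
q = [0.01, 0.99]
qa, qb = np.quantile(group_a, q), np.quantile(group_b, q)
tail_gap = np.abs(qa - qb).max()

print(f"mean gap: {mean_gap:.3f}")
print(f"tail gap: {tail_gap:.3f}")
```

The point of the sketch is methodological: any audit that aggregates to a single central statistic can miss name-conditioned variation that lives in the 1st and 99th percentiles.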
Abstract

Research has documented LLMs' name-based bias in hiring and salary recommendations. In this paper, we instead consider a setting where LLMs generate candidate summaries for downstream assessment. In a large-scale controlled study, we analyze nearly one million resume summaries produced by four models under systematic race-gender name perturbations, using synthetic resumes and real-world job postings. By decomposing each summary into resume-grounded factual content and evaluative framing, we find that factual content remains largely stable, while evaluative language exhibits subtle name-conditioned variation concentrated in the extremes of the distribution, especially in open-source models. Our hiring simulation demonstrates how evaluative summaries transform directional harm into symmetric instability that might evade conventional fairness audits, highlighting a potential pathway for LLM-to-LLM automation bias.
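The "symmetric instability" claim can also be illustrated with a toy simulation (again hypothetical, not the paper's actual pipeline). Here a name swap adds zero-mean, name-conditioned jitter to a downstream screener's score: a group-level selection-rate audit passes, yet many individual hire/reject decisions flip under the swap.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical underlying candidate quality, identical across name swaps.
quality = rng.normal(size=n)

# Evaluative-framing jitter added by the summarizer: zero-mean, but its
# magnitude depends on the name group, so scores wobble symmetrically.
noise_a = rng.normal(scale=0.1, size=n)
noise_b = rng.normal(scale=0.5, size=n)

threshold = 0.0  # toy screener hires above-average scores
hire_a = (quality + noise_a) > threshold
hire_b = (quality + noise_b) > threshold

# Group-level audit: selection rates are nearly equal.
rate_gap = abs(hire_a.mean() - hire_b.mean())

# Individual-level view: a sizable share of decisions flip on name swap.
flip_rate = (hire_a != hire_b).mean()

print(f"selection-rate gap: {rate_gap:.3f}")
print(f"decision flip rate: {flip_rate:.3f}")
```

This is why the abstract calls the harm "symmetric": a demographic-parity style check on group selection rates looks fair, while the per-candidate outcome is unstable with respect to the name on the resume.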