Bias in the Tails: How Name-conditioned Evaluative Framing in Resume Summaries Destabilizes LLM-based Hiring
arXiv cs.CL · April 23, 2026
Key Points
- The study examines whether LLM-generated resume candidate summaries introduce bias via name-conditioned evaluative framing, beyond previously documented name-based bias in hiring and salary outputs.
- Using nearly one million summaries from four models with systematic race-gender name perturbations, the researchers separate each summary into resume-grounded factual content and evaluative language to pinpoint where bias arises.
- They find factual content is largely stable across perturbations, while evaluative language shows subtle, name-conditioned shifts concentrated in the extremes of the distribution—particularly for open-source models.
- A downstream hiring simulation shows that evaluative framing in summaries can convert directional harm into symmetric instability that may evade standard fairness audits, suggesting a pathway by which bias propagates in LLM-to-LLM automation.
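The tail-concentrated pattern the study reports can be made concrete with a small sketch. The code below is not the paper's method; it is an illustrative measurement on hypothetical data, assuming each summary's evaluative language has been reduced to a scalar sentiment score per name group. The signature of tail bias is a near-zero gap at the median alongside a large gap at an extreme quantile.

```python
import random

def quantile(xs, q):
    """Empirical quantile via sorted linear interpolation."""
    xs = sorted(xs)
    idx = q * (len(xs) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(xs) - 1)
    frac = idx - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

def tail_gap(group_a, group_b, q=0.95):
    """Gap between two name groups at the median vs. at a tail quantile.

    A small median_gap with a large tail_gap matches the pattern the
    study describes: bias concentrated in the extremes of the
    evaluative-language distribution.
    """
    return {
        "median_gap": quantile(group_a, 0.5) - quantile(group_b, 0.5),
        "tail_gap": quantile(group_a, q) - quantile(group_b, q),
    }

# Hypothetical evaluative-sentiment scores: identical centers, but
# group_a's upper tail is stretched (synthetic data, illustration only).
random.seed(0)
group_a = [random.gauss(0.0, 1.0) for _ in range(10_000)]
group_a = [x * 1.5 if x > 1.5 else x for x in group_a]  # inflate upper tail
group_b = [random.gauss(0.0, 1.0) for _ in range(10_000)]

gaps = tail_gap(group_a, group_b)
# A median-only fairness audit would see ~0; the 95th-percentile gap is large.
```

A mean- or median-based audit over such scores would pass, which is why the study argues that tail-focused diagnostics are needed before summaries feed downstream hiring decisions.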