Social Bias in LLM-Generated Code: Benchmark and Mitigation

arXiv cs.AI / 5/4/2026


Key Points

  • The paper introduces SocialBias-Bench, a new benchmark of 343 real-world coding tasks across seven demographic dimensions to study social bias in LLM-generated code beyond functional correctness.
  • Testing four prominent LLMs shows severe demographic bias, with Code Bias Scores reaching up to 60.58%, indicating current models can systematically encode unfair assumptions in generated code.
  • The study finds that common prompt-level mitigation strategies—such as Chain-of-Thought prompting and assigning a fairness persona—can actually amplify bias rather than reduce it.
  • Multi-agent, structured software process pipelines can reduce bias only when early agents correctly define which attributes the code should and should not consider; adding explicit fairness instructions to all agents worsens outcomes.
  • To address these gaps, the authors propose a Fairness Monitor Agent (FMA) that can plug into existing code-generation pipelines, iteratively detecting and correcting fairness violations without needing an executable test suite, reducing bias by 65.1% and improving correctness from 75.80% to 83.97%.
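The Code Bias Score mentioned above is reported as a percentage. The paper's exact formula is not given here, but one plausible reading is the fraction of generated programs that condition behavior on a demographic attribute the task never asked for. A minimal sketch, assuming that reading (the predicate below is a toy stand-in, not the paper's detector):

```python
import re

def conditions_on(code: str, attribute: str) -> bool:
    # Toy bias predicate: does any line branch on the attribute?
    # (A stand-in for the paper's actual detection method.)
    return bool(re.search(rf"if\s+.*\b{attribute}\b", code))

def code_bias_score(outputs: list[str], attribute: str) -> float:
    # Hypothetical Code Bias Score: percentage of generated programs
    # whose control flow depends on the given demographic attribute.
    biased = sum(1 for code in outputs if conditions_on(code, attribute))
    return 100.0 * biased / len(outputs)
```

Under this reading, a score of 60.58% would mean roughly three in five generated solutions encoded a demographic condition the task did not call for.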

Abstract

Large Language Models (LLMs) are increasingly deployed to generate code for human-centered applications where demographic fairness is critical. However, existing evaluations focus almost exclusively on functional correctness, leaving social bias in LLM-generated code largely unexamined. Extending our prior work on Solar, we conduct a comprehensive empirical study using SocialBias-Bench, a benchmark of 343 real-world coding tasks spanning seven demographic dimensions. We evaluate four prominent LLMs and find severe bias across all models, with Code Bias Scores reaching up to 60.58%. We further show that standard prompt-level interventions, such as Chain-of-Thought reasoning and fairness persona assignment, inadvertently amplify bias rather than reduce it. We then investigate whether structured multi-agent software process frameworks can improve fairness, finding that structured pipelines reduce bias when early roles correctly scope what the code should and should not consider. However, adding explicit fairness instructions to all agent roles produces worse outcomes than providing none, suggesting that when responsibility for fairness is diffused across every role, no role actually addresses it. To address these limitations, we propose the Fairness Monitor Agent (FMA), a modular component that plugs into any existing code generation pipeline without modifying it. FMA analyzes the task description to determine which attributes should be considered or restricted, then detects and corrects violations through an iterative review process, without requiring an executable test suite. Evaluated on all 343 tasks, FMA reduces bias by 65.1% compared to a developer agent alone and improves functional correctness from 75.80% to 83.97%, outperforming all other studied approaches.
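The FMA workflow described in the abstract (scope attributes from the task, detect violations, ask the pipeline to revise, iterate) can be sketched as a control loop. Everything below is a hypothetical illustration: the attribute list, the regex-based violation detector, and the `revise` callback are all stand-ins for the paper's LLM-driven analysis, not its actual implementation.

```python
import re

# Hypothetical demographic attribute vocabulary (not from the paper).
DEMOGRAPHIC = {"gender", "race", "religion", "age"}

def scope_attributes(task_description: str) -> set[str]:
    # Stand-in for FMA's task analysis: any demographic attribute the
    # task does not explicitly mention is treated as restricted.
    mentioned = {a for a in DEMOGRAPHIC if a in task_description.lower()}
    return DEMOGRAPHIC - mentioned

def find_violations(code: str, restricted: set[str]) -> list[str]:
    # Crude proxy for violation detection: restricted attributes
    # appearing in a conditional branch of the generated code.
    return [a for a in restricted if re.search(rf"if\s+.*\b{a}\b", code)]

def fairness_monitor(task: str, code: str, revise, max_rounds: int = 3):
    # Iterative review loop: detect fairness violations, hand them back
    # to the developer pipeline's revise step, repeat until clean or the
    # round budget is exhausted. No executable test suite is needed.
    restricted = scope_attributes(task)
    for _ in range(max_rounds):
        violations = find_violations(code, restricted)
        if not violations:
            return code, True
        code = revise(code, violations)
    return code, False
```

The key design point from the abstract is that this monitor wraps an unmodified pipeline: `revise` is whatever regeneration hook the existing developer agent exposes, and the monitor only supplies the violation report.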