Alignment Reduces Expressed but Not Encoded Gender Bias: A Unified Framework and Study

arXiv cs.CL, March 26, 2026


Key Points

  • The paper introduces a unified evaluation framework that compares gender bias expressed in LLM outputs with gender information encoded in internal representations using identical neutral prompts.
  • Under this protocol, the authors find a consistent relationship between latent (internal) gender information and expressed bias, in contrast to prior work that reported weak or inconsistent correlations.
  • The paper studies debiasing through alignment via supervised fine-tuning and finds that alignment can reduce expressed bias even though gender-related associations persist in internal representations.
  • The remaining internal gender associations can be reactivated by adversarial prompting, suggesting debiasing may not fully remove gender signals from learned representations.
  • Results on more realistic settings (e.g., story generation) indicate that reductions seen on structured benchmarks may not generalize to real usage scenarios.

Abstract

During training, Large Language Models (LLMs) learn social regularities that can lead to gender bias in downstream applications. Most mitigation efforts focus on reducing bias in generated outputs, typically evaluated on structured benchmarks, which raises two concerns: output-level evaluation does not reveal whether alignment modifies the model's underlying representations, and structured benchmarks may not reflect realistic usage scenarios. We propose a unified framework to jointly analyze intrinsic and extrinsic gender bias in LLMs using identical neutral prompts, enabling direct comparison between gender-related information encoded in internal representations and bias expressed in generated outputs. Contrary to prior work reporting weak or inconsistent correlations, we find a consistent association between latent gender information and expressed bias when measured under the unified protocol. We further examine the effect of alignment through supervised fine-tuning aimed at reducing gender bias. Our results suggest that while such fine-tuning indeed reduces expressed bias, measurable gender-related associations remain in internal representations and can be reactivated under adversarial prompting. Finally, we consider two realistic settings and show that debiasing effects observed on structured benchmarks do not necessarily generalize, e.g., to story generation.
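To make the intrinsic/extrinsic distinction concrete, the following is a minimal sketch of the two measurements the framework pairs: a linear probe that checks whether gender is decodable from internal representations (encoded bias), and a simple lexicon count over generated text (expressed bias). The hidden states, labels, and gendered-word lexicon below are all toy stand-ins of my own choosing, not the paper's actual models, data, or metrics; in the real setting the features would come from an LLM's hidden layers under identical neutral prompts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden states elicited by identical neutral prompts.
# We simulate a latent gender direction mixed into random features.
n, d = 200, 16
gender = rng.integers(0, 2, size=n)        # hypothetical 0/1 gender label
H = rng.normal(size=(n, d))
H[:, 0] += 2.0 * (gender - 0.5)            # gender signal encoded in dim 0

def probe_accuracy(X, y, epochs=200, lr=0.1):
    """Intrinsic measure: train a logistic-regression probe on the
    representations; high accuracy means gender is still encoded."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = p - y                                # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
    return (preds == y).mean()

# Extrinsic measure: signed rate of gendered words in generated text.
MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

def expressed_bias(texts):
    words = [w for t in texts for w in t.lower().split()]
    m = sum(w in MALE for w in words)
    f = sum(w in FEMALE for w in words)
    return (m - f) / max(m + f, 1)          # +1 all-male ... -1 all-female

acc = probe_accuracy(H, gender)
bias = expressed_bias(["He said his plan worked.", "She thanked her team."])
print(f"probe accuracy (encoded): {acc:.2f}, expressed bias: {bias:+.2f}")
```

The point of pairing the two measures on the same prompts is that they can disagree: an aligned model may score near zero on the output-side lexicon measure while the probe still decodes gender from its hidden states, which is the dissociation the paper reports.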