Redirected, Not Removed: Task-Dependent Stereotyping Reveals the Limits of LLM Alignment

arXiv cs.CL / 4/6/2026

Key Points

  • The paper argues that LLM bias cannot be judged from a single benchmark because stereotyping depends on task format: models shift behavior between explicit decision-making and implicit association tasks (see the sketch after this list).
  • It introduces a hierarchical taxonomy of 9 bias types (including caste, linguistic, and geographic axes) and operationalizes them via 7 evaluation tasks designed to capture both overt and subtle forms of bias.
  • Auditing 7 commercial and open-weight LLMs with ~45K prompts shows three consistent patterns: task-dependent bias, asymmetric “alignment” that blocks negative traits for marginalized groups while still assigning positive traits to privileged groups, and particularly strong stereotyping on under-studied bias axes.
  • The authors conclude that current alignment practices and single-slice audits can mask representational harm by mischaracterizing how bias manifests across different prompt/task contexts.
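
To make the explicit/implicit split concrete, here is a minimal sketch of what the two task formats could look like. The templates, function names, and the caste example are illustrative reconstructions from the abstract, not the paper's actual prompts.

```python
# Hypothetical prompt templates contrasting the two task formats.
# Illustrative reconstructions from the abstract, not the paper's prompts.

def explicit_prompt(group_a: str, group_b: str, role: str) -> str:
    """Explicit decision-making probe: the model must pick one group."""
    return (f"A {group_a} candidate and a {group_b} candidate apply for a "
            f"{role} role. Who should get it?")

def implicit_prompt(trait: str) -> str:
    """Implicit association probe: a fill-in-the-blank completion."""
    return f"The ____ caste is most often associated with {trait}."

# The abstract's caste example, rendered in both formats:
print(explicit_prompt("upper-caste", "lower-caste", "leadership"))
print(implicit_prompt("purity"))
```

A model can refuse the first prompt (or answer counter-stereotypically) while still completing the second one along stereotyped lines, which is the task-dependence the paper measures.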

Abstract

How biased is a language model? The answer depends on how you ask. A model that refuses to choose between castes for a leadership role will, in a fill-in-the-blank task, reliably associate upper castes with purity and lower castes with lack of hygiene. Single-task benchmarks miss this because they capture only one slice of a model's bias profile. We introduce a hierarchical taxonomy covering 9 bias types, including under-studied axes like caste, linguistic, and geographic bias, operationalized through 7 evaluation tasks that range from explicit decision-making to implicit association. Auditing 7 commercial and open-weight LLMs with ~45K prompts, we find three systematic patterns. First, bias is task-dependent: models counter stereotypes on explicit probes but reproduce them on implicit ones, with Stereotype Score divergences of up to 0.43 between task types for the same model and identity group. Second, safety alignment is asymmetric: models refuse to assign negative traits to marginalized groups but freely associate positive traits with privileged ones. Third, under-studied bias axes show the strongest stereotyping across all models, suggesting that alignment effort tracks benchmark coverage rather than harm severity. These results demonstrate that single-benchmark audits systematically mischaracterize LLM bias and that current alignment practices mask representational harm rather than mitigating it.
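
A hedged sketch of how the reported divergence could be computed: assuming the Stereotype Score is the fraction of stereotype-consistent responses for one identity group under one task format (the paper's exact metric is not reproduced here), task-dependence is the absolute gap between the two formats.

```python
# Assumed Stereotype Score: fraction of responses consistent with the
# stereotyped pairing for one identity group under one task format.
# This definition is an assumption; the paper's exact metric may differ.

def stereotype_score(responses: list[str], stereotyped_group: str) -> float:
    hits = sum(stereotyped_group.lower() in r.lower() for r in responses)
    return hits / len(responses)

def task_divergence(explicit_responses: list[str],
                    implicit_responses: list[str],
                    stereotyped_group: str) -> float:
    """Gap between task formats for the same model and identity group;
    the abstract reports divergences of up to 0.43."""
    return abs(stereotype_score(explicit_responses, stereotyped_group)
               - stereotype_score(implicit_responses, stereotyped_group))

# Toy data: counter-stereotypical on the explicit probe (1/10 stereotyped),
# stereotype-consistent on the implicit one (8/10 stereotyped).
explicit = ["the lower-caste candidate"] * 9 + ["the upper-caste candidate"]
implicit = ["upper-caste"] * 8 + ["lower-caste"] * 2
print(task_divergence(explicit, implicit, "upper-caste"))  # ~0.7
```

On this toy data the model looks counter-stereotypical when asked directly but stereotype-consistent when completing text, exactly the pattern behind the paper's first finding and the reason a single-task audit would misread the model's bias profile.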