AI Navigate

Widespread Gender and Pronoun Bias in Moral Judgments Across LLMs

arXiv cs.CL / 3/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study investigates how grammatical person, number, and gender markers influence LLM moral judgments and reveals biases across six model families.
  • Using 550 balanced base sentences from ETHICS, the researchers created 14,850 semantically equivalent variants by varying pronouns and demographic markers to measure fairness with Statistical Parity Difference.
  • Key findings: sentences in the singular and in the third person are more often judged "fair," second-person phrasings are penalized, and gender markers produce the strongest effects, with non-binary subjects favored and male subjects disfavored.
  • The authors suggest these biases reflect training distribution and alignment biases and call for targeted fairness interventions in moral LLM deployments.
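The fairness metric named above, Statistical Parity Difference (SPD), compares the rate at which a model judges sentences "fair" across two demographic groups. A minimal sketch of that computation, with made-up group labels and toy numbers (not the paper's data or code):

```python
# Hedged sketch: Statistical Parity Difference (SPD) between two groups.
# Judgment encoding (1 = "fair", 0 = "unfair") is an assumption for illustration.

def statistical_parity_difference(judgments_a, judgments_b):
    """SPD = P(fair | group A) - P(fair | group B).

    Each input is a list of binary model judgments.
    SPD of 0 means parity; positive values favor group A.
    """
    rate_a = sum(judgments_a) / len(judgments_a)
    rate_b = sum(judgments_b) / len(judgments_b)
    return rate_a - rate_b

# Toy example (invented numbers):
non_binary = [1, 1, 1, 0]   # 75% of variants judged fair
male       = [1, 0, 0, 0]   # 25% of variants judged fair
print(statistical_parity_difference(non_binary, male))  # → 0.5
```

A nonzero SPD on semantically equivalent sentence variants indicates the model's judgment shifts with the group marker alone, which is the disparity the study reports.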

Abstract

Large language models (LLMs) are increasingly used to assess moral or ethical statements, yet their judgments may reflect social and linguistic biases. This work presents a controlled, sentence-level study of how grammatical person, number, and gender markers influence LLM moral classifications of fairness. Starting from 550 balanced base sentences from the ETHICS dataset, we generated 26 counterfactual variants per item, systematically varying pronouns and demographic markers to yield 14,850 semantically equivalent sentences. We evaluated six model families (Grok, GPT, LLaMA, Gemma, DeepSeek, and Mistral), and measured fairness judgments and inter-group disparities using Statistical Parity Difference (SPD). Results show statistically significant biases: sentences written in the singular form and third person are more often judged as "fair", while those in the second person are penalized. Gender markers produce the strongest effects, with non-binary subjects consistently favored and male subjects disfavored. We conjecture that these patterns reflect distributional and alignment biases learned during training, emphasizing the need for targeted fairness interventions in moral LLM applications.
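The variant-generation step described in the abstract (550 base items, each expanded into counterfactual forms that differ only in the subject marker, for 14,850 sentences in total) can be sketched as a template fill. The marker list below is illustrative and much shorter than the paper's actual 26-variant scheme:

```python
# Minimal sketch of counterfactual variant generation by subject-marker swap.
# The marker set and template are assumptions, not the study's actual materials.

MARKERS = [
    "I", "We", "You",                     # person / number variants
    "He", "She", "They",                  # gendered and neutral pronouns
    "The man", "The woman", "The non-binary person",  # demographic markers
]

def make_variants(template):
    """Fill a '{subj}' slot with each marker; semantics stay fixed."""
    return [template.format(subj=m) for m in MARKERS]

variants = make_variants("{subj} kept the extra change the cashier handed over.")
print(len(variants))  # → 9
```

Because every variant shares the same underlying action, any systematic difference in the model's "fair"/"unfair" verdicts across variants can be attributed to the subject marker rather than the content being judged.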