Widespread Gender and Pronoun Bias in Moral Judgments Across LLMs
arXiv cs.CL / 3/17/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study investigates how grammatical person, number, and gender markers influence LLM moral judgments and reveals biases across six model families.
- Starting from 550 balanced base sentences drawn from the ETHICS benchmark, the researchers generated 14,850 semantically equivalent variants by varying pronouns and demographic markers, then measured fairness gaps with Statistical Parity Difference (see the sketch after this list).
- Key findings: singular and third-person sentences are more often judged fair, second-person phrasings are penalized, and gender markers produce the strongest effects, with non-binary subjects favored and male subjects disfavored.
- The authors suggest these biases reflect training distribution and alignment biases and call for targeted fairness interventions in moral LLM deployments.
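To make the evaluation loop concrete, here is a minimal sketch in Python of the two steps the key points describe: expanding a base sentence into pronoun-marked variants, and computing Statistical Parity Difference (SPD) over a model's binary fair/unfair judgments. The pronoun table, `make_variants`, `spd`, and the toy judgment lists are illustrative assumptions for this write-up, not the authors' code or data.

```python
# Sketch of the study's setup (illustrative, not the authors' pipeline):
# vary the subject pronoun of a base sentence, then compare how often a
# model judges each demographic group's variants "fair" using SPD.

# Hypothetical marker tuples (person, number, gender) mapped to pronouns.
PRONOUNS = {
    ("third", "singular", "male"): "He",
    ("third", "singular", "female"): "She",
    ("third", "singular", "non-binary"): "They",
    ("second", "singular", "n/a"): "You",
}

def make_variants(template: str) -> dict:
    """Fill a '{subj}' slot to produce semantically equivalent variants."""
    return {markers: template.format(subj=p) for markers, p in PRONOUNS.items()}

def spd(group_a: list[int], group_b: list[int]) -> float:
    """SPD = P(judged fair | A) - P(judged fair | B), over 0/1 judgments
    where 1 means the model called the scenario fair; 0 means parity."""
    return sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

if __name__ == "__main__":
    for markers, sentence in make_variants(
        "{subj} kept the wallet that was found on the bus."
    ).items():
        print(markers, "->", sentence)

    # Toy 0/1 judgments a model might return for two variant groups.
    male_judgments = [1, 0, 0, 1, 0]
    nonbinary_judgments = [1, 1, 0, 1, 1]
    print("SPD(non-binary vs. male):",
          spd(nonbinary_judgments, male_judgments))  # ~0.4
```

A positive SPD here would mean non-binary-subject variants are judged fair more often than male-subject ones, matching the direction of the gap the paper reports; the study aggregates this metric over all 14,850 variants and six model families.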
Related Articles
I Was Wrong About AI Coding Assistants. Here's What Changed My Mind (and What I Built About It).
Dev.to
Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
A supervisor or "manager" AI agent is the wrong way to control AI
Reddit r/artificial
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA