Failure of contextual invariance in gender inference with large language models
arXiv cs.CL / March 25, 2026
Key Points
- The paper tests a common assumption in LLM evaluation that outputs remain stable under contextually equivalent rephrasings, focusing specifically on gender inference.
- In a pronoun-selection experiment where prompts are augmented with minimal, theoretically uninformative discourse context, the researchers find large, systematic shifts in model outputs relative to decontextualized versions of the same prompts.
- Cultural stereotype correlations that appear in simpler setups weaken or vanish with added context, while seemingly irrelevant features (e.g., pronoun gender tied to an unrelated referent) become unexpectedly informative.
- Using a Contextuality-by-Default analysis, the study reports that in 19–52% of cases the context dependence persists beyond what can be explained by marginal context effects or pronoun repetition.
- The authors argue these violations of contextual invariance have direct implications for how bias benchmarking is conducted and how LLMs should be deployed in high-stakes environments.
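The experimental setup described above can be sketched as a simple probe: present the same pronoun-selection prompt with and without an added context sentence that should, in principle, carry no information about the target referent, and compare the model's pronoun choices. The prompts, referent names, and the `query_model` stub below are illustrative assumptions, not the paper's actual materials; replace the stub with a real LLM API call to run the probe in earnest.

```python
# Hedged sketch of a contextual-invariance probe for pronoun selection.
# All prompts and the query function are illustrative, not from the paper.
from collections import Counter

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns 'he' or 'she'.

    Toy behavior: an unrelated pronoun in the context nudges the answer,
    mimicking the kind of invariance violation the paper reports.
    """
    return "she" if "she" in prompt else "he"

def pronoun_rate(prompts: list[str], pronoun: str = "she") -> float:
    """Fraction of prompts for which the model selects `pronoun`."""
    answers = Counter(query_model(p) for p in prompts)
    return answers[pronoun] / len(prompts)

# Target sentence whose blank the model must fill with a pronoun.
target = "The engineer fixed the server. ___ then filed a report."

# Decontextualized vs. minimally contextualized variants; the added sentence
# mentions an unrelated referent and should be uninformative about the target.
bare = [target]
ctx = ["My neighbor said she was late. " + target]

rate_bare = pronoun_rate(bare)
rate_ctx = pronoun_rate(ctx)
print(f"P(she | bare) = {rate_bare:.2f}, P(she | context) = {rate_ctx:.2f}")
```

Under contextual invariance the two rates should match; a persistent gap across many such prompt pairs, beyond what marginal context effects explain, is the signature the paper's Contextuality-by-Default analysis is designed to detect.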