Failure of contextual invariance in gender inference with large language models

arXiv cs.CL / March 25, 2026


Key Points

  • The paper tests a common assumption in LLM evaluation that outputs remain stable under contextually equivalent rephrasings, focusing specifically on gender inference.
  • In a pronoun-selection experiment with minimal, theoretically uninformative discourse context, the researchers find large systematic shifts in model outputs compared with decontextualized settings.
  • Cultural stereotype correlations that appear in simpler setups weaken or vanish with added context, while seemingly irrelevant features (e.g., pronoun gender tied to an unrelated referent) become unexpectedly informative.
  • Using a Contextuality-by-Default analysis, the study reports that in 19–52% of cases the context dependence persists beyond what can be explained by marginal context effects or pronoun repetition.
  • The authors argue these violations of contextual invariance have direct implications for how bias benchmarking is conducted and how LLMs should be deployed in high-stakes environments.
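The experimental logic in the bullets above can be pictured with a small sketch. Everything here is hypothetical scaffolding: the prompt conditions and response counts are invented for illustration, and the paper's actual stimuli and models are not reproduced. The sketch only quantifies the kind of output shift the authors report, using total variation distance between the pronoun distributions produced with and without the added discourse context.

```python
from collections import Counter

PRONOUNS = ("he", "she", "they")

def pronoun_distribution(responses):
    """Normalize raw pronoun picks into a probability distribution."""
    counts = Counter(responses)
    total = sum(counts[p] for p in PRONOUNS)
    return {p: counts[p] / total for p in PRONOUNS}

def total_variation(p, q):
    """Total variation distance between two pronoun distributions."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in PRONOUNS)

# Hypothetical model outputs for the same target sentence, judged with and
# without a theoretically uninformative lead-in sentence prepended.
decontextualized = ["he"] * 70 + ["she"] * 25 + ["they"] * 5
contextualized   = ["he"] * 40 + ["she"] * 55 + ["they"] * 5

shift = total_variation(
    pronoun_distribution(decontextualized),
    pronoun_distribution(contextualized),
)
print(f"output shift (TV distance): {shift:.2f}")  # → 0.30
```

A shift of 0 would mean the context left the output distribution untouched (contextual invariance); the large systematic shifts the paper reports correspond to values well above 0.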

Abstract

Standard evaluation practices assume that large language model (LLM) outputs are stable under contextually equivalent formulations of a task. Here, we test this assumption in the setting of gender inference. Using a controlled pronoun selection task, we introduce minimal, theoretically uninformative discourse context and find that this induces large, systematic shifts in model outputs. Correlations with cultural gender stereotypes, present in decontextualized settings, weaken or disappear once context is introduced, while theoretically irrelevant features, such as the gender of a pronoun for an unrelated referent, become the most informative predictors of model behaviour. A Contextuality-by-Default analysis reveals that, in 19–52% of cases across models, this dependence persists after accounting for all marginal effects of context on individual outputs and cannot be attributed to simple pronoun repetition. These findings show that LLM outputs violate contextual invariance even under near-identical syntactic formulations, with implications for bias benchmarking and deployment in high-stakes settings.
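The claim that dependence "persists after accounting for all marginal effects of context" has the flavor of a coupling argument: can the observed joint behaviour across two contexts be reproduced by *some* joint distribution with the observed marginals? The following is a deliberately simplified illustration of that flavor, not the paper's actual Contextuality-by-Default machinery; it uses the classical Fréchet–Hoeffding bounds, under which any coupling of binary responses with marginals p and q must satisfy max(0, p + q − 1) ≤ P(both = 1) ≤ min(p, q). All numbers are hypothetical.

```python
def frechet_bounds(p, q):
    """Frechet-Hoeffding bounds on P(X=1, Y=1) for a coupling of binary
    variables with marginals P(X=1) = p and P(Y=1) = q."""
    return max(0.0, p + q - 1.0), min(p, q)

def explainable_by_marginals(p, q, joint_11, eps=1e-12):
    """True if the observed joint probability is achievable by some coupling
    of the two marginals, i.e. the cross-context dependence does not exceed
    what the marginal context effects alone permit."""
    lo, hi = frechet_bounds(p, q)
    return lo - eps <= joint_11 <= hi + eps

# Hypothetical rates: P("he" chosen) is 0.6 in context A and 0.5 in context B.
print(frechet_bounds(0.6, 0.5))                   # → (0.1, 0.5)
print(explainable_by_marginals(0.6, 0.5, 0.45))   # within bounds → True
print(explainable_by_marginals(0.6, 0.5, 0.55))   # exceeds min(p, q) → False
```

In the real analysis, a system of several context-dependent responses is checked against all couplings consistent with the marginals at once; a violation means no such coupling exists, which is the residual context dependence the 19–52% figure quantifies.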