Personality Shapes Gender Bias in Persona-Conditioned LLM Narratives Across English and Hindi: An Empirical Investigation

arXiv cs.CL / 28 April 2026


Key Points

  • The study investigates how persona conditioning in LLM-generated narratives can amplify gender bias through interactions with personality cues across English and Hindi.
  • Researchers generated 23,400 stories using six state-of-the-art LLMs, varying persona gender, occupational roles, and personality traits based on HEXACO and Dark Triad frameworks.
  • Results show that personality traits significantly affect both the strength and direction of gender bias, indicating the bias is not uniform across contexts.
  • Dark Triad traits are linked to more gender-stereotypical depictions than socially desirable HEXACO traits, with effects varying by model and language.
  • The findings imply that real-world persona-driven LLM applications (e.g., education and customer service) may produce uneven representational harms that reinforce stereotypes.

Abstract

Large Language Models (LLMs) are increasingly deployed in persona-driven applications such as education, customer service, and social platforms, where models are prompted to adopt specific personas when interacting with users. While persona conditioning can improve user experience and engagement, it also raises concerns about how personality cues may interact with gender biases and stereotypes. In this work, we present a controlled study of persona-conditioned story generation in English and Hindi, in which each story portrays a working professional in India producing a context-specific artifact (e.g., a lesson plan, report, or letter), with persona gender, occupational role, and personality traits (drawn from the HEXACO and Dark Triad frameworks) varied systematically. Across 23,400 generated stories from six state-of-the-art LLMs, we find that personality traits are significantly associated with both the magnitude and direction of gender bias. In particular, Dark Triad personality traits are consistently associated with more gender-stereotypical representations than socially desirable HEXACO traits, though these associations vary across models and languages. Our findings demonstrate that gender bias in LLMs is not static but context-dependent. This suggests that persona-conditioned systems used in real-world applications may introduce uneven representational harms, reinforcing gender stereotypes in generated educational, professional, or social content.
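The factorial design described above (language × persona gender × occupational role × personality trait, run across multiple models) can be sketched as a prompt grid. The factor levels below are placeholders for illustration only; the paper's actual occupation list, trait sets, and prompt wording are not reproduced here.

```python
from itertools import product

# Hypothetical factor levels -- NOT the study's actual lists.
languages = ["English", "Hindi"]
genders = ["male", "female"]
occupations = ["teacher", "nurse", "engineer"]  # placeholder roles
traits = {
    "HEXACO": ["honesty-humility", "agreeableness"],          # placeholder subset
    "Dark Triad": ["narcissism", "Machiavellianism", "psychopathy"],
}

def build_prompts():
    """Enumerate one story-generation prompt per cell of the persona grid."""
    trait_pairs = [(fw, t) for fw, ts in traits.items() for t in ts]
    prompts = []
    for lang, gender, occupation, (framework, trait) in product(
        languages, genders, occupations, trait_pairs
    ):
        prompts.append(
            f"Write a short story in {lang} about a {gender} {occupation} "
            f"in India who is high in {trait} ({framework}), producing a "
            f"work artifact such as a lesson plan, report, or letter."
        )
    return prompts

grid = build_prompts()
print(len(grid))  # 2 languages x 2 genders x 3 occupations x 5 traits = 60 cells
```

Each cell would then be sent to every model under study (in the paper, six LLMs), and the resulting stories scored for gender-stereotypical content, so bias can be compared across trait frameworks, languages, and models.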