Representational Harms in LLM-Generated Narratives Against Global Majority Nationalities

arXiv cs.CL / 4/27/2026


Key Points

  • The study investigates how widely used LLMs depict national-origin identities when given open-ended narrative prompts, focusing on harms such as stereotypes, erasure, and one-dimensional characterizations.
  • It finds that Global Majority national identities are both underrepresented in power-neutral stories and overrepresented in subordinated portrayals, with subordinated depictions appearing over 50 times more often than dominant ones.
  • The representational harms intensify when US nationality cues (e.g., “American”) are included in the input prompts.
  • The authors report that US-centric biases persist even when US nationality cues are replaced with non-US national identities, indicating the harms are not driven merely by sycophancy.

Abstract

Large language models (LLMs) are increasingly used for text generation tasks ranging from everyday use to high-stakes enterprise and government applications, including simulated interviews with asylum seekers. While many works highlight new potential applications of LLMs, there is a risk of LLMs encoding and perpetuating harmful biases about non-dominant communities across the globe. To better evaluate and mitigate such harms, more research examining how LLMs portray diverse individuals is needed. In this work, we study how national origin identities are portrayed by widely adopted LLMs in response to open-ended narrative generation prompts. Our findings demonstrate the presence of persistent representational harms by national origin, including harmful stereotypes, erasure, and one-dimensional portrayals of Global Majority identities. Minoritized national identities are simultaneously underrepresented in power-neutral stories and overrepresented in subordinated character portrayals, which are over fifty times more likely to appear than dominant portrayals. The degree of harm is amplified when US nationality cues (e.g., "American") are present in input prompts. Notably, we find that the harms we identify cannot be explained away via sycophancy, as US-centric biases persist even when replacing US nationality cues with non-US national identities in the prompts. Based on our findings, we call for further exploration of cultural harms in LLMs through methodologies that center Global Majority perspectives and challenge the uncritical adoption of US-based LLMs for the classification, surveillance, and misrepresentation of the majority of our planet.
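
The abstract describes the evaluation setup only at a high level: prompt LLMs with open-ended story prompts that vary the national identity cue, classify each generated character portrayal as dominant, subordinated, or power-neutral, and compare rates across identities. The sketch below is a minimal, hypothetical illustration of that tallying step, not the authors' actual pipeline; the prompt template, the `generate` callable, and the keyword-based `classify_portrayal` heuristic are all stand-ins.

```python
from collections import Counter
from typing import Callable, Iterable

# Hypothetical portrayal labels used only for this sketch; the paper's actual
# annotation scheme is more detailed than this three-way split.
SUBORDINATED_CUES = ("servant", "refugee", "laborer")
DOMINANT_CUES = ("ceo", "leader", "scientist")


def classify_portrayal(story: str) -> str:
    """Toy keyword heuristic standing in for the paper's portrayal analysis."""
    lowered = story.lower()
    if any(cue in lowered for cue in SUBORDINATED_CUES):
        return "subordinated"
    if any(cue in lowered for cue in DOMINANT_CUES):
        return "dominant"
    return "power-neutral"


def portrayal_counts(prompts: Iterable[str],
                     generate: Callable[[str], str]) -> Counter:
    """Generate one story per prompt with `generate` and tally portrayal labels."""
    counts: Counter = Counter()
    for prompt in prompts:
        counts[classify_portrayal(generate(prompt))] += 1
    return counts


if __name__ == "__main__":
    # Prompt template varying the national identity cue, loosely following the
    # abstract's description of open-ended narrative generation prompts.
    identities = ("Nigerian", "Filipino", "American")
    prompts = [f"Write a short story about a {who} character." for who in identities]

    # `generate` would normally wrap an LLM client; a canned reply keeps the
    # sketch self-contained and runnable.
    counts = portrayal_counts(prompts, generate=lambda p: "A leader returns home.")
    ratio = counts["subordinated"] / max(counts["dominant"], 1)
    print(counts, f"subordinated-to-dominant ratio: {ratio:.1f}")
```

In practice, the counts would be aggregated per national identity and compared, which is how a disparity such as the reported fifty-fold gap between subordinated and dominant portrayals could be surfaced.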