BIASEDTALES-ML: A Multilingual Dataset for Analyzing Narrative Attribute Distributions in LLM-Generated Stories

arXiv cs.CL / 4/21/2026


Key Points

  • The paper introduces BiasedTales-ML, a multilingual, large-scale parallel dataset of about 350,000 LLM-generated children’s stories across eight typologically and culturally diverse languages.
  • It presents a structured generator–extractor pipeline and a multi-dimensional distributional analysis framework to compare how narrative attributes vary by language, model, and social conditions.
  • The study finds significant cross-lingual variability in narrative generation patterns, showing that behaviors and distributions seen in English may not hold in other languages, especially low-resource ones.
  • It identifies recurring narrative structural patterns (e.g., character roles, settings, and thematic emphasis) that appear differently depending on linguistic context, underscoring limitations of English-centric evaluations for socially grounded storytelling.
  • The authors release the dataset, code, and an interactive visualization tool to enable further multilingual narrative analysis and evaluation research.
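The "full-permutation prompting design" mentioned above can be sketched as a Cartesian product over the prompt variables. This is a minimal illustration, not the paper's actual code: the axis names and values (`languages`, `models`, `social_conditions`) and the `build_prompt` helper are hypothetical stand-ins for whatever variables the authors permuted.

```python
from itertools import product

# Hypothetical prompt axes; the paper's real variable sets are not reproduced here.
languages = ["en", "es", "hi", "sw"]
models = ["model_a", "model_b"]
social_conditions = ["protagonist:girl", "protagonist:boy", "unspecified"]

def build_prompt(lang: str, condition: str) -> str:
    """Illustrative prompt template, one per (language, condition) pair."""
    return f"[{lang}] Write a children's story ({condition})."

# Full permutation: every language x model x social-condition combination
# yields one generation request, giving a parallel corpus by construction.
prompt_grid = [
    {"language": lang, "model": model, "condition": cond,
     "prompt": build_prompt(lang, cond)}
    for lang, model, cond in product(languages, models, social_conditions)
]

print(len(prompt_grid))  # 4 languages * 2 models * 3 conditions = 24 requests
```

Scaling the same grid to eight languages, several models, and a richer set of social conditions, with multiple samples per cell, is how a corpus on the order of 350,000 stories could be assembled.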

Abstract

Large Language Models (LLMs) are increasingly used to generate narrative content, including children's stories, which play an important role in social and cultural learning. Despite growing interest in AI safety and alignment, most existing evaluations focus primarily on English, leaving the cross-lingual generalization of aligned behavior underexplored. In this work, we introduce BiasedTales-ML, a large-scale parallel corpus of approximately 350,000 children's stories generated across eight typologically and culturally diverse languages using a full-permutation prompting design. We propose a structured generator-extractor pipeline and a multi-dimensional distributional analysis framework to examine how narrative attributes vary across languages, models, and social conditions. Our analysis reveals substantial cross-lingual variability in narrative generation patterns, indicating that distributions observed in English do not always exhibit similar characteristics in other languages, particularly in lower-resource settings. At the narrative level, we identify recurring structural patterns involving character roles, settings, and thematic emphasis, which manifest differently across linguistic contexts. These findings highlight the limitations of English-centric evaluation for characterizing socially grounded narrative generation in multilingual settings. We release the dataset, code, and an interactive visualization tool to support future research on multilingual narrative analysis and evaluation.
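One common way to operationalize the kind of distributional comparison the abstract describes is to turn extracted narrative attributes (e.g., character roles) into categorical distributions per language and measure their divergence. The sketch below uses Jensen-Shannon divergence with base-2 logs (bounded in [0, 1]); the attribute labels and per-language samples are invented for illustration, and the paper may use a different metric.

```python
import math
from collections import Counter

def distribution(labels):
    """Empirical categorical distribution over attribute labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2, so the result lies in [0, 1])."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        # KL(a || m); terms with a(k) = 0 contribute nothing.
        return sum(a[k] * math.log2(a[k] / m[k]) for k in a if a[k] > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Hypothetical "character role" labels extracted from stories in two languages.
roles_en = ["hero", "hero", "helper", "villain"]
roles_hi = ["hero", "helper", "helper", "helper"]

jsd = js_divergence(distribution(roles_en), distribution(roles_hi))
print(round(jsd, 4))
```

A near-zero value would indicate that the attribute is distributed similarly across the two languages; values approaching 1 indicate the cross-lingual divergence the study reports, especially for lower-resource languages.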