CulturALL: Benchmarking Multilingual and Multicultural Competence of LLMs on Grounded Tasks

arXiv cs.CL · April 22, 2026

📰 News · Models & Research

Key Points

  • The new CulturALL benchmark evaluates LLMs’ multilingual and multicultural competence specifically on grounded, real-world context reasoning tasks rather than generic language understanding or surface-level cultural trivia.
  • CulturALL is constructed using a human–AI collaborative pipeline where expert annotators control difficulty and factual accuracy, while LLMs reduce the manual annotation workload.
  • The benchmark spans diverse scenario sources, covering 14 languages across 51 regions, with 2,610 samples distributed over 16 topics to broaden grounded-task coverage.
  • In reported experiments, the best-performing model reaches only 44.48% accuracy, indicating a substantial performance gap and ample room for further research and model improvement.

Abstract

Large language models (LLMs) are now deployed worldwide, inspiring a surge of benchmarks that measure their multilingual and multicultural abilities. However, these benchmarks prioritize generic language understanding or superficial cultural trivia, leaving the evaluation of grounded tasks, where models must reason within real-world, context-rich scenarios, largely unaddressed. To fill this gap, we present CulturALL, a comprehensive and challenging benchmark to assess LLMs' multilingual and multicultural competence on grounded tasks. CulturALL is built via a human–AI collaborative framework: expert annotators ensure appropriate difficulty and factual accuracy, while LLMs lighten the manual workload. By incorporating diverse sources, CulturALL ensures comprehensive scenario coverage. Each item is carefully designed to present a high level of difficulty, making CulturALL challenging. CulturALL contains 2,610 samples in 14 languages from 51 regions, distributed across 16 topics to capture the full breadth of grounded tasks. Experiments show that the best LLM achieves 44.48% accuracy on CulturALL, underscoring substantial room for improvement.