JUBAKU: An Adversarial Benchmark for Exposing Culturally Grounded Stereotypes in Japanese LLMs

arXiv cs.CL / March 24, 2026


Key Points

  • The paper introduces JUBAKU, a Japanese-culture-specific adversarial benchmark designed to detect culturally grounded stereotypes that are missed by translation-based adaptations of English bias tests.
  • JUBAKU covers ten cultural categories and uses dialogue scenarios hand-crafted by native Japanese annotators to deliberately surface latent social biases in Japanese LLM behavior.
  • Across nine Japanese LLMs evaluated on JUBAKU and on three benchmarks adapted from English, all models exhibited clear bias on JUBAKU, averaging 23% accuracy (range 13%–33%) against a 50% random baseline, despite scoring higher on the adapted benchmarks.
  • Human annotators achieved 91% accuracy at identifying unbiased responses, supporting the benchmark’s reliability and adversarial effectiveness.

Abstract

Social biases reflected in language are inherently shaped by cultural norms, which vary significantly across regions and lead to diverse manifestations of stereotypes. Existing evaluations of social bias in large language models (LLMs) for non-English contexts, however, often rely on translations of English benchmarks. Such benchmarks fail to reflect local cultural norms, including those of Japanese culture. For instance, Western benchmarks may overlook Japan-specific stereotypes related to hierarchical relationships, regional dialects, or traditional gender roles. To address this limitation, we introduce Japanese cUlture adversarial BiAs benchmarK Under handcrafted creation (JUBAKU), a benchmark tailored to Japanese cultural contexts. JUBAKU uses adversarial construction to expose latent biases across ten distinct cultural categories. Unlike existing benchmarks, JUBAKU features dialogue scenarios hand-crafted by native Japanese annotators, specifically designed to trigger and reveal latent social biases in Japanese LLMs. We evaluated nine Japanese LLMs on JUBAKU and on three benchmarks adapted from English. All models clearly exhibited biases on JUBAKU, performing below the random baseline of 50% with an average accuracy of 23% (ranging from 13% to 33%), despite higher accuracy on the other benchmarks. Human annotators achieved 91% accuracy in identifying unbiased responses, confirming JUBAKU's reliability and its adversarial nature toward LLMs.
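The 50% random baseline implies a two-choice setup: for each scenario the model picks between a stereotyped and an unbiased continuation, and accuracy is the fraction of unbiased picks. A minimal sketch of that scoring scheme, with hypothetical item fields and data (not the benchmark's actual schema or protocol):

```python
# Hypothetical two-choice items in the spirit of JUBAKU: each dialogue
# scenario offers a stereotyped and an unbiased continuation. Field names
# and examples here are illustrative assumptions, not the real dataset.
ITEMS = [
    {"scenario": "A senior and a junior colleague disagree on a plan.",
     "options": ["defer to the senior regardless", "weigh both proposals on their merits"],
     "unbiased": 1},
    {"scenario": "A job applicant speaks in a regional dialect.",
     "options": ["treat the applicant as equally credible", "assume the applicant is less qualified"],
     "unbiased": 0},
]

def accuracy(choose, items):
    """Fraction of items where the chooser picks the unbiased option.

    A random chooser scores ~0.5 in expectation; the paper reports the
    evaluated Japanese LLMs averaging 0.23, i.e. systematically below chance.
    """
    hits = sum(1 for item in items if choose(item) == item["unbiased"])
    return hits / len(items)

# Sanity checks: always picking the unbiased option scores 1.0,
# always picking the stereotyped one scores 0.0.
oracle = lambda item: item["unbiased"]
adversary = lambda item: 1 - item["unbiased"]

print(accuracy(oracle, ITEMS))     # 1.0
print(accuracy(adversary, ITEMS))  # 0.0
```

Scoring below the 0.5 baseline on this metric, as the evaluated models do, indicates that the adversarial scenarios systematically pull models toward the stereotyped option rather than merely confusing them.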