SEAHateCheck: Functional Tests for Detecting Hate Speech in Low-Resource Languages of Southeast Asia

arXiv cs.CL / March 18, 2026

Key Points

  • SEAHateCheck introduces a functional testing dataset for hate speech detection in four Southeast Asian languages (Indonesian, Tagalog, Thai, Vietnamese) to address low-resource contexts.
  • It extends the HateCheck and SGHateCheck frameworks by generating culturally relevant test cases with large language models and validation from local experts.
  • The study finds that Tagalog yields the lowest model accuracy and that slang-based tests are particularly challenging, highlighting gaps in detecting implicit hate and counter-speech.
  • As the first functional test suite for these languages, SEAHateCheck provides a robust benchmark to advance culturally attuned hate-speech moderation tools for research and practice.

Abstract

Hate speech detection relies heavily on linguistic resources, which are primarily available in high-resource languages such as English and Chinese. This creates barriers for researchers and platforms developing tools for low-resource languages in Southeast Asia, where diverse socio-linguistic contexts complicate online hate moderation. To address this, we introduce SEAHateCheck, a pioneering dataset tailored to Indonesia, Thailand, the Philippines, and Vietnam, covering Indonesian, Tagalog, Thai, and Vietnamese. Building on HateCheck's functional testing framework and refining SGHateCheck's methods, SEAHateCheck provides culturally relevant test cases, augmented by large language models and validated by local experts for accuracy. Experiments with state-of-the-art and multilingual models revealed limitations in detecting hate speech in these low-resource languages. In particular, Tagalog test cases showed the lowest model accuracy, likely due to linguistic complexity and limited training data. Across functional categories, slang-based tests proved the hardest, as models struggled with culturally nuanced expressions. SEAHateCheck's diagnostic insights further exposed model weaknesses in detecting implicit hate and in handling counter-speech. As the first functional test suite for these Southeast Asian languages, this work equips researchers with a robust benchmark, advancing the development of practical, culturally attuned hate speech detection tools for inclusive online content moderation.
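HateCheck-style functional testing evaluates a classifier on targeted test cases, each tagged with a functionality (e.g., slang, negation, counter-speech) and a gold label, then reports accuracy per functionality to localize failure modes. A minimal sketch of that evaluation loop, with illustrative test cases and a stand-in classifier (all data, labels, and function names here are hypothetical, not taken from the paper):

```python
from collections import defaultdict

# Hypothetical functional test cases: (text, functionality, gold_label).
# Real suites like SEAHateCheck group many cases under each functionality.
test_cases = [
    ("example of overt derogation", "derogation", "hateful"),
    ("example using local slang", "slang", "hateful"),
    ("quote that pushes back against hate", "counter_speech", "non-hateful"),
]

def classify(text):
    # Stand-in for a real hate-speech model; naively flags everything,
    # which is exactly the failure mode counter-speech tests expose.
    return "hateful"

def per_functionality_accuracy(cases, model):
    # Aggregate correct predictions separately for each functionality.
    totals, correct = defaultdict(int), defaultdict(int)
    for text, functionality, gold in cases:
        totals[functionality] += 1
        if model(text) == gold:
            correct[functionality] += 1
    return {f: correct[f] / totals[f] for f in totals}

print(per_functionality_accuracy(test_cases, classify))
# → {'derogation': 1.0, 'slang': 1.0, 'counter_speech': 0.0}
```

The per-functionality breakdown, rather than a single aggregate score, is what lets a test suite like this surface specific weaknesses such as the slang and counter-speech gaps reported above.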