Comparison of Modern Multilingual Text Embedding Techniques for Hate Speech Detection Task

arXiv cs.CL / 4/17/2026


Key Points

  • The study evaluates how well modern multilingual sentence embedding models can detect hate speech in multilingual and low-resource settings, focusing on Lithuanian, Russian, and English.
  • It introduces LtHate, a new Lithuanian hate speech corpus built from news portals and social networks, and benchmarks six multilingual encoders (potion, gemma, bge, snow, jina, e5) using a unified Python pipeline.
  • For each embedding approach, the paper compares one-class HBOS anomaly detection against two-class CatBoost supervised classification, both with and without PCA compression to 64-dimensional features.
  • Two-class supervised models consistently and significantly outperform one-class anomaly detection across datasets, reaching up to 80.96% accuracy (AUC 0.887) for Lithuanian, 92.19% accuracy (AUC 0.978) for Russian, and 77.21% accuracy (AUC 0.859) for English.
  • The results show that PCA compression largely retains discriminative information in the supervised setting, but can reduce the effectiveness of the unsupervised anomaly detection setup.

Abstract

Online hate speech and abusive language pose a growing challenge for content moderation, especially in multilingual settings and for low-resource languages such as Lithuanian. This paper investigates to what extent modern multilingual sentence embedding models can support accurate hate speech detection in Lithuanian, Russian, and English, and how their performance depends on downstream modeling choices and feature dimensionality. We introduce LtHate, a new Lithuanian hate speech corpus derived from news portals and social networks, and benchmark six modern multilingual encoders (potion, gemma, bge, snow, jina, e5) on LtHate, RuToxic, and EnSuperset using a unified Python pipeline. For each embedding model, we train both a one-class HBOS anomaly detector and a two-class CatBoost classifier, with and without principal component analysis (PCA) compression to 64-dimensional feature vectors. Across all datasets, two-class supervised models consistently and substantially outperform one-class anomaly detection, with the best configurations achieving up to 80.96% accuracy and ROC AUC of 0.887 in Lithuanian (jina), 92.19% accuracy and ROC AUC of 0.978 in Russian (e5), and 77.21% accuracy and ROC AUC of 0.859 in English (e5 with PCA). PCA compression preserves almost all discriminative power in the supervised setting, while showing some negative impact in the unsupervised anomaly detection case. These results demonstrate how modern multilingual sentence embeddings combined with gradient-boosted decision trees provide robust soft-computing solutions for multilingual hate speech detection applications.