A multilingual hallucination benchmark: MultiWikiQHalluA

arXiv cs.CL / 5/5/2026


Key Points

  • The new arXiv paper introduces a multilingual hallucination benchmark (MultiWikiQHalluA) to address the gap that most hallucination evaluations are conducted only in English.
  • It defines “faithfulness hallucinations” as fluent, plausible outputs that either contradict the provided input or are internally inconsistent, and builds multilingual synthetic hallucination datasets using MultiWikiQA and the LettuceDetect framework.
  • The authors train token-level hallucination classifiers for 30 European languages and evaluate hallucination rates across selected languages (English, Danish, German, Icelandic).
  • Results show that the small Qwen3-0.6B model has markedly high hallucination rates (up to 60% of answers contain at least one hallucination, peaking in Icelandic), while larger models generally hallucinate less.
  • Hallucination rates are consistently higher in lower-resource languages, indicating that language coverage and training-data availability significantly affect model faithfulness.
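The headline metric above, the fraction of answers containing at least one hallucinated token, follows directly from the token-level classifier outputs. A minimal sketch (illustrative only, not the paper's or LettuceDetect's actual code; the function name and label encoding are assumptions):

```python
# Illustrative sketch: given per-token hallucination labels for each
# generated answer (1 = token flagged as hallucinated, 0 = clean),
# compute the answer-level hallucination rate, i.e. the fraction of
# answers containing at least one flagged token.

def answer_hallucination_rate(token_labels: list[list[int]]) -> float:
    """token_labels[i][j] is 1 if token j of answer i is flagged
    as hallucinated by the token-level classifier, else 0."""
    if not token_labels:
        return 0.0
    flagged = sum(1 for labels in token_labels if any(labels))
    return flagged / len(token_labels)

# Example: 3 of 5 answers contain at least one flagged token.
preds = [
    [0, 0, 0],      # clean answer
    [0, 1, 0, 0],   # one hallucinated token
    [1, 1],         # fully hallucinated
    [0, 0],         # clean
    [0, 0, 1],      # one hallucinated token
]
print(answer_hallucination_rate(preds))  # 0.6
```

Note that this answer-level rate is deliberately strict: a single flagged token marks the whole answer as hallucinated, which is why small models like Qwen3-0.6B can reach rates as high as 60%.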

Abstract

Most hallucination evaluations focus on English, leaving it unclear whether findings transfer to lower-resource languages. We investigate faithfulness hallucinations, defined as model-generated content that is fluent and plausible but diverges from the provided input or is internally inconsistent. Leveraging the multilingual MultiWikiQA dataset, we utilize the LettuceDetect framework to create synthetic hallucination datasets for 306 languages, from which we train token-level hallucination classifiers for 30 European languages. In this work, we present evaluations of model hallucinations on a selection of languages: English, Danish, German, and Icelandic. Using these classifiers, we evaluate the hallucination rates for Qwen3-0.6B, Qwen3-14B, Gemma-3-12B-IT, cogito-v1-preview-qwen-32B, and cogito-v1-preview-llama-70B. Our classifiers reveal notably higher hallucination rates for Qwen3-0.6B (up to 60% of answers containing at least one hallucination, peaking in Icelandic) and generally lower rates for larger models, with cogito-v1-preview-qwen-32B and cogito-v1-preview-llama-70B performing best on most languages. Hallucination rates are consistently higher for lower-resource languages, particularly Icelandic.