IslamicMMLU: A Benchmark for Evaluating LLMs on Islamic Knowledge
arXiv cs.CL / March 26, 2026
Key Points
- The paper introduces IslamicMMLU, a new benchmark with 10,013 multiple-choice questions to evaluate LLMs on Islamic knowledge across Quran, Hadith, and Fiqh.
- The benchmark is organized into three tracks with multiple question types per track, enabling assessment of different reasoning and knowledge-handling capabilities.
- An initial evaluation of 26 LLMs shows large performance variance across models, with overall average accuracy ranging from 39.8% to 93.8%; the Quran track exhibits the widest spread.
- A Fiqh component includes a new madhab (school of jurisprudence) bias detection task to measure differing model preferences across schools of thought.
- The authors release the evaluation code and a public leaderboard; among their findings, Arabic-specific models are inconsistent and generally underperform frontier models.
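The per-track accuracy figures above come from standard multiple-choice scoring. A minimal sketch of that computation follows; the record format and field names (`track`, `prediction`, `answer`) are illustrative assumptions, not the paper's actual schema:

```python
from collections import defaultdict

def per_track_accuracy(records):
    """Compute accuracy per benchmark track from graded MCQ records.

    Each record is a dict with (hypothetical) keys:
    'track' (e.g. 'Quran', 'Hadith', 'Fiqh'), 'prediction', 'answer'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["track"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["track"]] += 1
    # Fraction of correct answers within each track
    return {t: correct[t] / total[t] for t in total}

# Toy example with made-up records:
records = [
    {"track": "Quran", "prediction": "B", "answer": "B"},
    {"track": "Quran", "prediction": "C", "answer": "A"},
    {"track": "Fiqh",  "prediction": "D", "answer": "D"},
]
print(per_track_accuracy(records))  # {'Quran': 0.5, 'Fiqh': 1.0}
```

Reporting accuracy per track rather than only in aggregate is what exposes the wide Quran-track spread the paper highlights.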