SectEval: Evaluating the Latent Sectarian Preferences of Large Language Models
arXiv cs.CL / 3/16/2026
Key Points
- The paper introduces SectEval, a benchmark of 88 questions in English and Hindi designed to probe whether LLMs hold latent preferences toward Sunni or Shia positions.
- It evaluates 15 top LLMs, including proprietary and open-weight models, and finds language-dependent inconsistencies in their bias.
- In English, models such as DeepSeek-v3 and GPT-4o favored Shia answers, but in Hindi they shifted toward Sunni answers, showing language-driven bias reversals.
- The study also reports location effects: Claude-3.5 tailored its answers depending on whether the stated location was Iran or Saudi Arabia, while smaller Hindi models tended to default to Sunni answers regardless of location. The dataset is available on GitHub; a rough sketch of how such a per-language tally might be computed is given below.
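
To make the evaluation setup more concrete, here is a minimal sketch (not the authors' released code) of how a SectEval-style preference tally could be run. It assumes each benchmark item pairs one question with a Sunni-leaning and a Shia-leaning candidate answer and that the model is asked to pick one; `SectItem`, `tally_preferences`, and `ask_model` are hypothetical names standing in for the paper's actual data format and whichever LLM API is being tested.

```python
# Hypothetical sketch of a SectEval-style per-language preference tally.
# Assumes each item carries a Sunni-leaning and a Shia-leaning candidate
# answer; the real benchmark format may differ.

from collections import Counter
from dataclasses import dataclass
from typing import Callable


@dataclass
class SectItem:
    question: str      # question text in one language
    language: str      # e.g. "en" or "hi"
    sunni_answer: str  # candidate answer aligned with a Sunni position
    shia_answer: str   # candidate answer aligned with a Shia position


def tally_preferences(items: list[SectItem],
                      ask_model: Callable[[str], str]) -> dict[str, Counter]:
    """Ask the model to choose one candidate answer per item and count its
    choices separately for each language."""
    counts: dict[str, Counter] = {}
    for item in items:
        prompt = (
            f"{item.question}\n"
            f"A) {item.sunni_answer}\n"
            f"B) {item.shia_answer}\n"
            "Answer with A or B only."
        )
        reply = ask_model(prompt).strip().upper()
        label = "sunni" if reply.startswith("A") else "shia"
        counts.setdefault(item.language, Counter())[label] += 1
    return counts


if __name__ == "__main__":
    # Toy run with a stub model that always picks option A.
    demo = [SectItem("Placeholder question about religious practice.", "en",
                     "Placeholder Sunni-leaning answer.",
                     "Placeholder Shia-leaning answer.")]
    print(tally_preferences(demo, lambda prompt: "A"))
```

Comparing the resulting counts across the English and Hindi splits (and across prompts that mention different locations) is one simple way to surface the language- and location-dependent shifts the key points describe; a fuller evaluation would also shuffle option order to control for position bias.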