Sima AIunty: Caste Audit in LLM-Driven Matchmaking
arXiv cs.CL / 4/1/2026
Key Points
- The paper presents a controlled audit of how large language models evaluate caste in matchmaking, using real-world matrimonial profiles with varied caste and income levels.
- Five LLM families (GPT, Gemini, Llama, Qwen, and BharatGPT) were prompted to judge social acceptance, marital stability, and cultural compatibility across caste groups (Brahmin, Kshatriya, Vaishya, Shudra, Dalit).
- Results show consistent hierarchical bias across models, with same-caste matches rated most favorably and inter-caste matches ranked according to traditional caste hierarchy.
- The study reports average favorability gaps of up to 25% (roughly 2.5 points on a 10-point scale) between same-caste and inter-caste evaluations, indicating that LLMs can reproduce entrenched social stratification.
- The authors argue for culturally grounded evaluation and intervention strategies to prevent AI systems from reinforcing exclusion in socially sensitive decision domains like matchmaking.
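The audit design summarized above — scoring every caste pairing and comparing same-caste against inter-caste ratings — can be sketched as a small harness. The `mock_score` function is a placeholder of my own invention, not the paper's method: a real audit would replace it with a call to each LLM under test, prompting for a 10-point compatibility rating.

```python
from itertools import product
from statistics import mean

# Caste groups as listed in the paper's evaluation.
CASTES = ["Brahmin", "Kshatriya", "Vaishya", "Shudra", "Dalit"]

def mock_score(caste_a: str, caste_b: str) -> float:
    """Stand-in for an LLM call (hypothetical). A real audit would
    prompt the model to rate social acceptance / marital stability /
    cultural compatibility for this pairing on a 10-point scale; here
    we fabricate a deterministic score purely to exercise the harness."""
    return 8.0 if caste_a == caste_b else 6.0

def favorability_gap(score_fn) -> float:
    """Average same-caste score minus average inter-caste score,
    over all ordered caste pairings."""
    pairs = list(product(CASTES, repeat=2))
    same = [score_fn(a, b) for a, b in pairs if a == b]
    inter = [score_fn(a, b) for a, b in pairs if a != b]
    return mean(same) - mean(inter)

if __name__ == "__main__":
    gap = favorability_gap(mock_score)
    print(f"same-caste vs inter-caste gap: {gap:.2f} points on a 10-point scale")
```

Running the gap computation per model family, and per evaluation dimension, would yield the kind of favorability-difference statistics the paper reports; the stub scorer here produces a 2-point gap by construction.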