Sima AIunty: Caste Audit in LLM-Driven Matchmaking

arXiv cs.CL / 4/1/2026


Key Points

  • The paper presents a controlled audit of how large language models evaluate caste in matchmaking, using real-world matrimonial profiles with varied caste identities and income levels.
  • Five LLM families (GPT, Gemini, Llama, Qwen, and BharatGPT) were prompted to judge social acceptance, marital stability, and cultural compatibility across caste groups (Brahmin, Kshatriya, Vaishya, Shudra, Dalit).
  • Results show consistent hierarchical bias across models, with same-caste matches rated most favorably and inter-caste matches ranked according to traditional caste hierarchy.
  • The study reports average favorability differences of up to 25% (on a 10-point scale) between same-caste and inter-caste evaluations, indicating that LLMs can reproduce entrenched social stratification.
  • The authors argue for culturally grounded evaluation and intervention strategies to prevent AI systems from reinforcing exclusion in socially sensitive decision domains like matchmaking.

Abstract

Social and personal decisions in relational domains such as matchmaking are deeply entwined with cultural norms and historical hierarchies, and can potentially be shaped by algorithmic and AI-mediated assessments of compatibility, acceptance, and stability. In South Asian contexts, caste remains a central aspect of marital decision-making, yet little is known about how contemporary large language models (LLMs) reproduce or disrupt caste-based stratification in such settings. In this work, we conduct a controlled audit of caste bias in LLM-mediated matchmaking evaluations using real-world matrimonial profiles. We vary caste identity across Brahmin, Kshatriya, Vaishya, Shudra, and Dalit, and income across five buckets, and evaluate five LLM families (GPT, Gemini, Llama, Qwen, and BharatGPT). Models are prompted to assess profiles along dimensions of social acceptance, marital stability, and cultural compatibility. Our analysis reveals consistent hierarchical patterns across models: same-caste matches are rated most favorably, with average ratings up to 25% higher (on a 10-point scale) than inter-caste matches, which are further ordered according to traditional caste hierarchy. These findings highlight how existing caste hierarchies are reproduced in LLM decision-making and underscore the need for culturally grounded evaluation and intervention strategies in AI systems deployed in socially sensitive domains, where such systems risk reinforcing historical forms of exclusion.
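The audit design described above can be sketched as a prompt grid over caste pairs, income buckets, and evaluation dimensions, with same-caste vs. inter-caste ratings compared afterward. The sketch below is illustrative only: the prompt template, function names, and income-bucket labels are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the paper's audit design. The prompt wording,
# income buckets, and helper names are assumptions for illustration.
from itertools import product
from statistics import mean

CASTES = ["Brahmin", "Kshatriya", "Vaishya", "Shudra", "Dalit"]
INCOME_BUCKETS = ["<5 LPA", "5-10 LPA", "10-20 LPA", "20-50 LPA", ">50 LPA"]
DIMENSIONS = ["social acceptance", "marital stability", "cultural compatibility"]

def build_prompt(caste_a, caste_b, income, dimension):
    """Assemble one audit prompt; this template is illustrative, not the paper's."""
    return (
        f"Profile A: caste {caste_a}, income {income}. "
        f"Profile B: caste {caste_b}, income {income}. "
        f"Rate this match's {dimension} on a 1-10 scale."
    )

def favorability_gap(ratings):
    """ratings: {(caste_a, caste_b): mean score on a 10-point scale}.
    Returns mean same-caste score minus mean inter-caste score,
    expressed as a percentage of the 10-point scale."""
    same = [s for (a, b), s in ratings.items() if a == b]
    inter = [s for (a, b), s in ratings.items() if a != b]
    return (mean(same) - mean(inter)) / 10 * 100

# Enumerate the full grid: 5x5 caste pairs x 5 income buckets x 3 dimensions,
# yielding 375 prompts to send to each of the five LLM families.
prompts = [build_prompt(a, b, inc, dim)
           for (a, b), inc, dim in product(product(CASTES, CASTES),
                                           INCOME_BUCKETS, DIMENSIONS)]
print(len(prompts))  # 375
```

Under this framing, the paper's headline result corresponds to `favorability_gap` reaching values up to 25 for some models, i.e., same-caste matches rated up to 2.5 points higher on the 10-point scale.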