An Investigation of Linguistic Biases in LLM-Based Recommendations
arXiv cs.CL / 4/29/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study examines how prompt language varieties—Southern American English, Indian English, and code-switched Hindi-English (Hinglish)—affect LLM-based restaurant and product recommendations in a cold-start setting.
- It uses the Yelp Open Dataset and a Walmart product-reviews dataset, prompting multiple LLMs to select the top 20 items from cuisine- and category-balanced name lists.
- The researchers vary prompt sampling across 20 random seeds, aggregate recommendation counts, and apply mixed-effects regression and likelihood-ratio tests to quantify dialect and model-size effects.
- Results indicate that dialect influences the kinds of restaurants recommended, with Mistral-small-3.1 and Llama-3.1 family models showing greater sensitivity to Indian English and code-switched prompts.
- For product recommendations, Llama-3.1-70B is highly sensitive to code-switched prompts in most categories, and category shifts (e.g., more beauty/home recommendations) differ depending on whether prompts use Indian English or code-switching.
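The count-aggregation and likelihood-ratio step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `dialect_counts` figures are invented, the dialect and category labels are assumptions, and the paper's actual analysis also includes mixed-effects regression, which is omitted here.

```python
import math
from collections import Counter

# Hypothetical recommendation counts per dialect, aggregated over 20 seeds.
# Keys are product categories; values are how often items of that category
# appeared in a model's top-20 lists. Numbers are illustrative only.
dialect_counts = {
    "SAE":        Counter({"beauty": 30, "home": 50, "electronics": 120}),
    "IndianEng":  Counter({"beauty": 55, "home": 70, "electronics": 75}),
    "HinglishCS": Counter({"beauty": 60, "home": 80, "electronics": 60}),
}

def multinomial_loglik(counts, probs):
    """Log-likelihood of observed category counts under given probabilities
    (dropping the multinomial coefficient, which cancels in the LRT)."""
    return sum(n * math.log(probs[c]) for c, n in counts.items() if n > 0)

# Null model: a single shared category distribution across all dialects.
pooled = Counter()
for counts in dialect_counts.values():
    pooled.update(counts)
total = sum(pooled.values())
null_probs = {cat: n / total for cat, n in pooled.items()}
ll_null = sum(multinomial_loglik(c, null_probs) for c in dialect_counts.values())

# Alternative model: each dialect gets its own category distribution (the MLE).
ll_alt = 0.0
for counts in dialect_counts.values():
    n = sum(counts.values())
    probs = {cat: k / n for cat, k in counts.items()}
    ll_alt += multinomial_loglik(counts, probs)

# Likelihood-ratio statistic; under H0 (no dialect effect) it is approximately
# chi-squared with (n_dialects - 1) * (n_categories - 1) degrees of freedom.
lr_stat = 2.0 * (ll_alt - ll_null)
df = (len(dialect_counts) - 1) * (len(null_probs) - 1)
print(f"LR = {lr_stat:.2f}, df = {df}")
```

A large `lr_stat` relative to the chi-squared critical value for `df` degrees of freedom would indicate that recommended categories shift with prompt dialect, which is the kind of effect the study reports.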