AI Navigate

Analysis Of Linguistic Stereotypes in Single and Multi-Agent Generative AI Architectures

arXiv cs.AI / 3/20/2026

📰 NewsIdeas & Deep AnalysisModels & Research

Key Points

  • The study replicates dialect-sensitive stereotype generation (SAE vs AAE) in LLM outputs and evaluates mitigation strategies including prompt engineering and multi-agent architectures (generate-critique-revise).
  • Results show stereotype-bearing differences across SAE/AAE outputs across templates, with the strongest effects in adjective and job attributions and substantial model-level disparities.
  • Chain-of-Thought prompting proves effective at mitigating bias for Claude Haiku, while multi-agent architectures provide consistent mitigation across all models tested.
  • The authors advocate fairness evaluation that includes model-specific validation of mitigation strategies and workflow-level controls (e.g., agentic architectures) for high-impact deployments, noting the work is exploratory with potential extensions.

Abstract

Many works in the literature show that LLM outputs exhibit discriminatory behaviour, triggering stereotype-based inferences based on the dialect in which the inputs are written. This bias has been shown to be particularly pronounced when the same inputs are provided to LLMs in Standard American English (SAE) and African-American English (AAE). In this paper, we replicate existing analyses of dialect-sensitive stereotype generation in LLM outputs and investigate the effects of mitigation strategies, including prompt engineering (role-based and Chain-Of-Thought prompting) and multi-agent architectures composed of generate-critique-revise models. We define eight prompt templates to analyse different ways in which dialect bias can manifest, such as suggested names, jobs, and adjectives for SAE or AAE speakers. We use an LLM-as-judge approach to evaluate the bias in the results. Our results show that stereotype-bearing differences emerge between SAE- and AAE-related outputs across all template categories, with the strongest effects observed in adjective and job attribution. Baseline disparities vary substantially by model, with the largest SAE-AAE differential observed in Claude Haiku and the smallest in Phi-4 Mini. Chain-Of-Thought prompting proved to be an effective mitigation strategy for Claude Haiku, whereas the use of a multi-agent architecture ensured consistent mitigation across all the models. These findings suggest that for intersectionality-informed software engineering, fairness evaluation should include model-specific validation of mitigation strategies, and workflow-level controls (e.g., agentic architectures involving critique models) in high-impact LLM deployments. The current results are exploratory in nature and limited in scope, but can lead to extensions and replications by increasing the dataset size and applying the procedure to different languages or dialects.