Debiasing Large Language Models toward Social Factors in Online Behavior Analytics through Prompt Knowledge Tuning

arXiv cs.CL / 3/31/2026


Key Points

  • The paper examines how large language models may implicitly perform social-causal attribution (dispositional vs situational) in online behavior analytics, potentially leading to biased reasoning in social contexts.
  • It proposes a “prompt knowledge tuning” approach that enriches prompts with social-attribution knowledge derived from a message’s goal (for dispositional causality) and context (for situational causality).
  • Experiments on zero-shot intent detection and theme detection in disaster-related social media show improved performance alongside reduced social-attribution bias.
  • The method is evaluated under disaster-type variability and multilingual social media settings, demonstrating robustness across these conditions.
  • The study reports that three open-source LLMs (Llama3, Mistral, and Gemma) exhibit notable bias toward social attribution and that the proposed prompt aids effectively mitigate it.

Abstract

Attribution theory explains how individuals interpret and attribute others' behavior in a social context by employing personal (dispositional) and impersonal (situational) causality. Large Language Models (LLMs), trained on human-generated corpora, may implicitly mimic this social attribution process in social contexts. However, the extent to which LLMs utilize these causal attributions in their reasoning remains underexplored. Although reasoning paradigms such as Chain-of-Thought (CoT) have shown promising results on various tasks, ignoring social attribution in reasoning could lead to biased responses by LLMs in social contexts. In this study, we investigate the impact on LLM performance of incorporating a user's goal as knowledge to infer dispositional causality and the message context to infer situational causality. To this end, we introduce a scalable method to mitigate such biases by enriching the instruction prompts for LLMs with two prompt aids built from social-attribution knowledge, based on the context and goal of a social media message. This method improves model performance while reducing the LLM's social-attribution bias in reasoning on zero-shot classification tasks for behavior analytics applications. We empirically show the benefits of our method on two tasks, intent detection and theme detection on social media in the disaster domain, when considering the variability of disaster types and multiple languages of social media. Our experiments highlight the biases of three open-source LLMs (Llama3, Mistral, and Gemma) toward social attribution and show the effectiveness of our mitigation strategies.
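To make the idea concrete, the prompt enrichment described above can be sketched as a simple template assembler: one aid derived from the author's goal (dispositional causality) and one from the message context (situational causality), prepended to a zero-shot classification instruction. The wording, function name, and label set below are hypothetical illustrations, not the authors' actual template.

```python
def build_enriched_prompt(message, goal, context, labels):
    """Assemble a zero-shot classification prompt with two
    social-attribution aids, loosely following the paper's idea.
    All phrasing here is an assumption for illustration."""
    # Dispositional aid: the author's goal hints at personal causality.
    dispositional_aid = (
        f"Dispositional cue: the author's goal is '{goal}'; "
        "consider what this implies about their intent."
    )
    # Situational aid: the surrounding context hints at impersonal causality.
    situational_aid = (
        f"Situational cue: the message was posted in this context: {context}; "
        "consider external circumstances that may explain the behavior."
    )
    return "\n".join([
        "Classify the social media message into one of: "
        + ", ".join(labels) + ".",
        dispositional_aid,
        situational_aid,
        f"Message: {message}",
        "Answer with a single label.",
    ])

prompt = build_enriched_prompt(
    message="We need drinking water at the shelter on 5th Ave.",
    goal="request help",
    context="earthquake response, first 48 hours",
    labels=["request", "offer", "report", "other"],
)
print(prompt)
```

The enriched prompt is then sent to the LLM as-is; the two aids steer the model to weigh both dispositional and situational causes before labeling, which is the mechanism the paper credits for the bias reduction.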
