Debiasing Large Language Models toward Social Factors in Online Behavior Analytics through Prompt Knowledge Tuning
arXiv cs.CL / 3/31/2026
Key Points
- The paper examines how large language models may implicitly perform social-causal attribution (dispositional vs situational) in online behavior analytics, potentially leading to biased reasoning in social contexts.
- It proposes a “prompt knowledge tuning” approach that enriches prompts with social-attribution knowledge derived from a message’s goal (for dispositional causality) and its context (for situational causality); see the sketch after this list.
- Experiments on zero-shot intent detection and theme detection in disaster-related social media show improved performance alongside reduced social-attribution bias.
- The method is evaluated under disaster-type variability and multilingual social media settings, demonstrating robustness across these conditions.
- The study reports that three open-source LLMs (Llama3, Mistral, and Gemma) exhibit notable social-attribution bias and that the proposed prompt knowledge tuning effectively mitigates it.
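
To make the idea concrete, below is a minimal sketch of how a prompt might be enriched with goal-based (dispositional) and context-based (situational) attribution cues for zero-shot intent detection. The label set, function name, and template wording are all hypothetical; the paper's actual prompt templates are not reproduced here.

```python
# Illustrative sketch of "prompt knowledge tuning": enrich a zero-shot
# intent-detection prompt with attribution knowledge derived from the
# message's goal (dispositional) and its context (situational), so the
# model does not default to one attribution style.
# INTENT_LABELS, build_debiased_prompt, and the template text are
# assumptions for illustration, not the paper's actual artifacts.

INTENT_LABELS = ["request_help", "offer_help", "report_damage", "other"]


def build_debiased_prompt(message: str, goal: str, context: str) -> str:
    """Compose a prompt that pairs a dispositional cue (the author's goal)
    with a situational cue (the surrounding context)."""
    knowledge = (
        f"Dispositional cue (author's goal): {goal}\n"
        f"Situational cue (surrounding context): {context}\n"
    )
    labels = ", ".join(INTENT_LABELS)
    return (
        "You are analyzing a disaster-related social media message.\n"
        "Weigh both cues below; do not attribute the behavior to the "
        "author's character alone or to the situation alone.\n\n"
        f"{knowledge}\n"
        f"Message: {message}\n"
        f"Classify the author's intent as one of: {labels}.\n"
        "Answer with the label only."
    )


if __name__ == "__main__":
    prompt = build_debiased_prompt(
        message="Water rising fast on 5th Street, we need boats now!",
        goal="to obtain urgent rescue assistance",
        context="posted during an ongoing flood in the author's area",
    )
    # The resulting prompt can be sent zero-shot to any instruction-tuned
    # LLM (e.g., Llama3, Mistral, or Gemma as evaluated in the paper).
    print(prompt)
```

The design choice worth noting is that both attribution cues are always presented together, which is what (per the key points above) counteracts the model's tendency to lean on one attribution style.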