Auditing Support Strategies in LLMs through Grounded Multi-Turn Social Simulation
arXiv cs.CL / 4/21/2026
Key Points
- The paper argues that current evaluations of “social support” LLMs often use single-turn prompts, even though real users reveal their situation gradually over multiple turns.
- It proposes a multi-turn social simulation framework that reveals ordered fragments of Reddit users' support-seeking narratives turn by turn, coding each model response with the Social Support Behavior Code (SSBC) rather than a single quality score (a sketch of this loop follows the list).
- Using linear probes on hidden representations (without altering the generation context), the study tests whether the model's support choices track a model-internal estimate of user distress (see the second sketch below).
- Experiments on Llama-3.1-8B and OLMo-3-7B over 6,200+ turns show systematic behavior shifts with estimated distress: teaching strategies decrease as distress increases, while affective/esteem-oriented strategies show suggestive but model-specific increases.
- The authors also find that community context independently affects support behavior, and that this effect reflects topic and discourse norms rather than demographic categories; together, these results motivate multi-turn auditing for socially sensitive LLM applications.
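
The summary does not include the authors' implementation, but the turn-by-turn revelation loop is straightforward to sketch. Below is a minimal Python sketch assuming a chat-style model wrapper and an external SSBC coder; `generate`, `code_response`, and the category list are illustrative placeholders, not the paper's actual API.

```python
from typing import Callable, Dict, List

# Illustrative SSBC top-level categories; the paper's exact label set
# is an assumption here.
SSBC_CATEGORIES = ["informational", "emotional", "esteem", "network", "tangible"]

def run_simulation(
    fragments: List[str],                             # ordered narrative fragments from one post
    generate: Callable[[List[Dict[str, str]]], str],  # chat history -> assistant reply (placeholder)
    code_response: Callable[[str], str],              # reply -> SSBC category (placeholder)
) -> List[Dict[str, object]]:
    """Reveal a support-seeking narrative one fragment per turn and
    code each assistant reply with an SSBC category."""
    history: List[Dict[str, str]] = []
    coded_turns: List[Dict[str, object]] = []
    for turn, fragment in enumerate(fragments):
        history.append({"role": "user", "content": fragment})
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})
        coded_turns.append({"turn": turn, "reply": reply, "ssbc": code_response(reply)})
    return coded_turns
```

The design point, per the paper's framing, is that each reply is coded categorically per turn rather than scored once at the end, so shifts in support strategy across turns remain visible.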
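For the probing step, the summary only states that linear probes are fit on hidden representations without changing the generation context. A minimal read-only version, assuming per-turn hidden-state vectors extracted offline (e.g., last-token activations at one layer) and a binary high/low distress label, could look like the following; the layer choice, pooling, and label granularity are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fit_distress_probe(hidden_states: np.ndarray, distress_labels: np.ndarray):
    """Fit a linear probe from frozen hidden states to a distress label.

    hidden_states:   (n_turns, d_model) activations extracted offline,
                     e.g. via output_hidden_states=True in transformers.
    distress_labels: (n_turns,) binary 0/1 annotations.

    The probe is read-only: it never feeds back into generation, so the
    model's behavior is unchanged while its internal estimate is read out.
    """
    probe = LogisticRegression(max_iter=1000)
    cv_accuracy = cross_val_score(probe, hidden_states, distress_labels, cv=5).mean()
    probe.fit(hidden_states, distress_labels)
    return probe, cv_accuracy
```

Per-turn probe outputs can then be correlated with the SSBC codes from the simulation loop to test whether support strategy tracks the estimated distress.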