Can LLMs Infer Conversational Agent Users' Personality Traits from Chat History?
arXiv cs.CL · April 23, 2026
Key Points
- The study evaluates privacy risks by testing whether sensitive personality traits can be inferred from a user's chat history with an LLM-based conversational agent.
- Using real ChatGPT logs from 668 participants (62,090 chats), the researchers quantify which types of data and interaction scenarios may enable personality inference.
- They fine-tune RoBERTa-base classifiers to predict personality traits from the logs and find above-chance performance in multiple cases (see the sketch after this list).
- For example, extraversion inference improves by 44% relative to the random baseline (i.e., the classifier scores 1.44x the baseline) in interaction types involving relationships and personal reflection.
- The results provide a more granular view of how different conversational contexts affect the likelihood and severity of privacy leakage.
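
The RoBERTa-base key point implies a standard sequence-classification fine-tuning setup. Below is a minimal sketch using the Hugging Face transformers and datasets libraries, assuming binary high/low labels per trait; the toy records, label scheme, field names, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: fine-tune roberta-base as a binary trait classifier.
# Labels, example texts, and hyperparameters are hypothetical.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for per-user chat histories with an extraversion label
# (1 = high, 0 = low); the real study uses 62,090 ChatGPT chats.
records = [
    {"text": "I love organizing meetups and chatting with everyone.", "label": 1},
    {"text": "I mostly keep to myself and prefer written notes.", "label": 0},
]
dataset = Dataset.from_list(records)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    # Chat histories are long; truncating to RoBERTa's 512-token
    # context limit is an assumption about the preprocessing.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=512
    )

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

args = TrainingArguments(
    output_dir="trait-clf",          # checkpoints written here
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=2e-5,
)

# Default collator suffices because inputs are already padded.
Trainer(model=model, args=args, train_dataset=dataset).train()
```

Presumably one such classifier is trained per personality trait, and evaluation compares each against a random baseline per interaction type; those evaluation details are not shown here.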