Can LLMs Infer Conversational Agent Users' Personality Traits from Chat History?

arXiv cs.CL / 4/23/2026


Key Points

  • The study evaluates privacy risks by testing whether an LLM-based conversational agent can expose sensitive personality traits from a user’s chat history.
  • Using real ChatGPT logs from 668 participants (62,090 chats), the researchers quantify which types of data and interaction scenarios may enable personality inference.
  • They fine-tune RoBERTa-base classifiers to predict personality traits and find performance above random baseline in multiple cases.
  • For example, accuracy for inferring extraversion is 44% above the random baseline on interactions involving relationships and personal reflection.
  • The results provide a more granular view of how different conversational contexts affect the likelihood and severity of privacy leakage.
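The ternary (low/medium/high) framing and the "+44% relative to baseline" figure can be made concrete with a short sketch. The paper does not specify how continuous trait scores are binned, so the equal-width split over a 1–5 Likert-style range below is an assumption, as is the arithmetic translating a relative gain over the 1/3 random baseline into absolute accuracy.

```python
# Hypothetical preprocessing and baseline arithmetic for ternary
# trait classification. The equal-width binning is an assumption;
# the paper does not state its exact discretization scheme.

def ternary_label(score, lo=1.0, hi=5.0):
    """Map a continuous trait score in [lo, hi] to low/medium/high."""
    third = (hi - lo) / 3
    if score < lo + third:
        return "low"
    if score < lo + 2 * third:
        return "medium"
    return "high"

def relative_gain(accuracy, baseline=1 / 3):
    """Relative improvement of a classifier over the random ternary baseline."""
    return (accuracy - baseline) / baseline

# A +44% relative gain over the ~33.3% random ternary baseline
# corresponds to an absolute accuracy of roughly 48%.
print(round((1 / 3) * 1.44, 2))  # 0.48
```

Under these assumptions, the reported extraversion result corresponds to roughly 48% absolute accuracy on a three-class task, versus 33.3% for random guessing.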

Abstract

Sensitive information, such as knowledge of an individual's personality, can be misused to influence behavior (e.g., via personalized messaging). To assess the extent to which an individual's personality can be inferred from user interactions with LLM-based conversational agents (CAs), we analyze and quantify the associated privacy risks. We collected actual ChatGPT logs from N=668 participants, containing 62,090 individual chats, and report statistics about the different types of shared data and use cases. We fine-tuned RoBERTa-base text classification models to infer personality traits from CA interactions. The findings show that these models achieve better-than-random accuracy on ternary trait classification in multiple cases. For example, for extraversion, accuracy improves by 44% relative to the baseline on interactions about relationships and personal reflection. This research highlights how interactions with CAs pose privacy risks and provides fine-grained insights into the level of risk associated with different types of interactions.
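The classifier setup described in the abstract can be sketched with the Hugging Face `transformers` library. In the paper's setting one would load the pretrained `roberta-base` checkpoint with a three-label head; the scaled-down configuration below is only an offline-runnable stand-in, and every hyperparameter in it is an illustrative assumption.

```python
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

# In practice (as the paper describes) one would fine-tune the
# pretrained checkpoint with a ternary classification head:
#   model = RobertaForSequenceClassification.from_pretrained(
#       "roberta-base", num_labels=3)
# A tiny randomly initialized config keeps this sketch self-contained.
config = RobertaConfig(
    vocab_size=1000,        # toy vocabulary (roberta-base uses 50,265)
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=3,           # low / medium / high trait levels
)
model = RobertaForSequenceClassification(config)
model.eval()

# Two toy "chat" sequences of 16 token ids each.
input_ids = torch.randint(0, 1000, (2, 16))
with torch.no_grad():
    logits = model(input_ids).logits

print(logits.shape)  # one (low, medium, high) score triple per chat
```

Fine-tuning would then proceed as standard sequence classification: tokenized chat text as input, a ternary trait label as the target, optimized with cross-entropy loss.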