Behavioral Transfer in AI Agents: Evidence and Privacy Implications

arXiv cs.AI · April 23, 2026


Key Points

  • The paper investigates whether LLM-powered AI agents behave like “behavioral extensions” of their human owners rather than producing generic outputs.
  • Using 10,659 matched human–agent pairs from the social media platform Moltbook, the study compares agents’ posts to owners’ Twitter/X activity across topics, values, affect, and linguistic style, finding systematic behavioral transfer.
  • The observed alignment persists even when agents are not explicitly configured, and similarity in one behavioral dimension tends to co-occur with similarity in others.
  • The authors show that stronger behavioral transfer correlates with a higher likelihood that agents publicly disclose owner-related personal information, creating privacy risks during normal use.
  • Overall, the findings suggest agentic systems can propagate owner-specific behavioral heterogeneity into digital environments, with implications for privacy, platform design, and governance of AI agents.
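The core comparison behind these findings can be sketched in a few lines: represent each owner and each agent as a behavioral feature vector (topics, values, affect, linguistic style), then check whether matched owner–agent pairs are more similar than randomly re-paired ones. The sketch below uses synthetic feature vectors and a simple cosine-similarity-with-shuffled-baseline design; it is an illustration of the general approach, not the paper's actual pipeline, features, or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_similarity(owner_feats: np.ndarray, agent_feats: np.ndarray) -> np.ndarray:
    """Cosine similarity between each owner's and its agent's feature vector.

    Rows are individuals; columns are behavioral features (e.g. topic
    proportions, affect scores, stylistic markers). Synthetic stand-ins here.
    """
    a = owner_feats / np.linalg.norm(owner_feats, axis=1, keepdims=True)
    b = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

# Synthetic example: each agent partially inherits its owner's features,
# mimicking behavioral transfer (the 0.6 mixing weight is an assumption).
n_pairs, n_features = 1000, 16
owners = rng.normal(size=(n_pairs, n_features))
agents = 0.6 * owners + 0.4 * rng.normal(size=(n_pairs, n_features))

matched = pair_similarity(owners, agents).mean()
# Baseline: shuffle the pairing so each agent is compared to a random owner.
shuffled = pair_similarity(owners, agents[rng.permutation(n_pairs)]).mean()

print(f"matched={matched:.3f}  shuffled={shuffled:.3f}")
```

If transfer is real, matched-pair similarity exceeds the shuffled baseline; in practice one would repeat the shuffle many times to form a permutation null distribution rather than a single re-pairing.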

Abstract

AI agents powered by large language models are increasingly acting on behalf of humans in social and economic environments. Prior research has focused on their task performance and effects on human outcomes, but less is known about the relationship between agents and the specific individuals who deploy them. We ask whether agents systematically reflect the behavioral characteristics of their human owners, functioning as behavioral extensions rather than producing generic outputs. We study this question using 10,659 matched human-agent pairs from Moltbook, a social media platform where each autonomous agent is publicly linked to its owner's Twitter/X account. By comparing agents' posts on Moltbook with their owners' Twitter/X activity across features spanning topics, values, affect, and linguistic style, we find systematic transfer between agents and their specific owners. This transfer persists among agents without explicit configuration, and pairs that align on one behavioral dimension tend to align on others. These patterns are consistent with transfer emerging through accumulated interaction between owners (or owners' computer environments) and their agents in everyday use. We further show that agents with stronger behavioral transfer are more likely to disclose owner-related personal information in public discourse, suggesting that the same owner-specific context that drives behavioral transfer may also create privacy risk during ordinary use. Taken together, our results indicate that AI agents do not simply generate content, but reflect owner-related context in ways that can propagate human behavioral heterogeneity into digital environments, with implications for privacy, platform design, and the governance of agentic systems.