Is AI Catching Up to Human Expression? Exploring Emotion, Personality, Authorship, and Linguistic Style in English and Arabic with Six Large Language Models

arXiv cs.CL / 3/25/2026


Key Points

  • The paper tests six large language models (Jais, Mistral, LLaMA, GPT-4o, Gemini, DeepSeek) to see whether they can emulate human-like emotion, personality, and stylistic cues in English and Arabic.
  • Classifiers can reliably distinguish human-authored from AI-generated text overall (F1 > 0.95), but performance drops on paraphrased samples, implying reliance on superficial stylistic signals (a minimal sketch of this detection setup follows the list).
  • Experiments on emotion (English) and personality markers (Arabic) show significant generalization gaps: classifiers trained on human data struggle on AI text and vice versa, suggesting LLMs encode affective information differently than humans.
  • For under-resourced Arabic, augmenting the training data with AI-generated samples improves personality classification performance, indicating synthetic data could help bridge evaluation gaps.
  • Model comparisons suggest GPT-4o and Gemini produce better “affective coherence,” while linguistic/psycholinguistic analyses find measurable differences in tone, authenticity, and textual complexity that matter for authorship attribution and responsible AI deployment.
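
The detection result in the second key point can be read as a standard binary text-classification experiment. The sketch below is not the authors' code: it assumes a character n-gram TF-IDF plus logistic-regression detector and uses placeholder corpora (`human_texts`, `ai_texts`, `paraphrased_ai_texts`), only to show where the high in-distribution F1 and the paraphrase-induced drop would be measured.

```python
# Minimal illustrative sketch, not the authors' code: a character n-gram
# TF-IDF + logistic-regression detector for human vs. AI text, evaluated on
# original and paraphrased AI samples. All corpora below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder data so the sketch runs end to end; substitute real corpora.
human_texts = ["an example human-authored sentence"] * 20
ai_texts = ["an example model-generated sentence"] * 20
paraphrased_ai_texts = ["an example paraphrased model output"] * 10

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0
)

# Character n-grams stand in for the superficial stylistic cues the paper
# says detectors lean on.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# High F1 on held-out, unmodified AI text ...
print("F1, original AI text:   ",
      f1_score(y_test, clf.predict(vectorizer.transform(X_test))))
# ... but paraphrasing perturbs those surface cues, so F1 typically drops.
print("F1, paraphrased AI text:",
      f1_score([1] * len(paraphrased_ai_texts),
               clf.predict(vectorizer.transform(paraphrased_ai_texts))))
```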

Abstract

The advancing fluency of LLMs raises important questions about their ability to emulate complex human traits, including emotional expression and personality, across diverse linguistic and cultural contexts. This study investigates whether LLMs can convincingly mimic emotional nuance in English and personality markers in Arabic, a critical under-resourced language with unique linguistic and cultural characteristics. We conduct two tasks across six models: Jais, Mistral, LLaMA, GPT-4o, Gemini, and DeepSeek. First, we evaluate whether machine classifiers can reliably distinguish between human-authored and AI-generated texts. Second, we assess the extent to which LLM-generated texts exhibit emotional or personality traits comparable to those of humans. Our results demonstrate that AI-generated texts are distinguishable from human-authored ones (F1 > 0.95), though classification performance deteriorates on paraphrased samples, indicating a reliance on superficial stylistic cues. Emotion and personality classification experiments reveal significant generalization gaps: classifiers trained on human data perform poorly on AI-generated texts and vice versa, suggesting LLMs encode affective signals differently from humans. Importantly, augmenting training with AI-generated data enhances performance in the Arabic personality classification task, highlighting the potential of synthetic data to address challenges in under-resourced languages. Model-specific analyses show that GPT-4o and Gemini exhibit superior affective coherence. Linguistic and psycholinguistic analyses reveal measurable divergences in tone, authenticity, and textual complexity between human and AI texts. These findings have implications for affective computing, authorship attribution, and responsible AI deployment, particularly within under-resourced language contexts where generative AI detection and alignment pose unique challenges.
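
The other two experimental findings in the abstract, the cross-source generalization gap and the gain from synthetic Arabic data, can be illustrated with a similar hedged sketch. It is not the paper's pipeline: the classifier choice (word n-gram TF-IDF plus logistic regression) and every corpus and label variable below are placeholder assumptions, kept only so the train/test directions and the augmentation comparison are explicit.

```python
# Minimal illustrative sketch, not the paper's pipeline: the cross-source
# generalization test and the synthetic-data augmentation comparison, using
# a word n-gram TF-IDF + logistic-regression classifier. All corpora and
# labels below are placeholders (e.g. emotion or personality classes).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

def train_and_score(train_texts, train_labels, test_texts, test_labels):
    """Fit a simple text classifier and return macro F1 on the test set."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_labels)
    return f1_score(test_labels, model.predict(test_texts), average="macro")

# Placeholder corpora so the sketch runs; substitute real labelled data.
human_X, human_y = ["human-authored text"] * 20, ["joy"] * 10 + ["anger"] * 10
ai_X, ai_y = ["LLM-generated text"] * 20, ["joy"] * 10 + ["anger"] * 10
test_X, test_y = ["held-out human text"] * 10, ["joy"] * 5 + ["anger"] * 5

# Generalization gap: train on one source, test on the other; the paper
# reports degraded performance in both directions.
print("human -> AI :", train_and_score(human_X, human_y, ai_X, ai_y))
print("AI -> human :", train_and_score(ai_X, ai_y, human_X, human_y))

# Augmentation: adding AI-generated samples to the human training set is the
# setup that improved Arabic personality classification in the paper.
print("human only      :", train_and_score(human_X, human_y, test_X, test_y))
print("human + AI data :", train_and_score(human_X + ai_X, human_y + ai_y,
                                            test_X, test_y))
```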