Authorship Impersonation via LLM Prompting Does Not Evade Authorship Verification Methods
arXiv cs.CL / 4/1/2026
Key Points
- The study tests whether an LLM prompted to impersonate a specific author (GPT-4o in the experiments) can produce convincing imitations, and whether those texts evade existing authorship verification (AV) systems.
- Impersonation attempts were generated under four prompting conditions across three genres—emails, text messages, and social media posts.
- Evaluated against multiple non-neural and neural AV methods within a likelihood-ratio framework (sketched after this list), the LLM outputs did not replicate individual authorial signatures closely enough to bypass established systems.
- Some AV methods rejected LLM impersonation texts even more reliably than genuine different-author (negative) samples, suggesting AV systems can distinguish impersonations effectively.
- The paper attributes this resilience in part to the greater lexical diversity and higher entropy of LLM-generated text, which weaken the mimicry of an individual author's style (both measures are illustrated in the second sketch below).
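For readers unfamiliar with the likelihood-ratio framework mentioned above, here is a minimal sketch, not the paper's actual implementation: an AV system asks how likely a similarity score between two texts is under the same-author hypothesis versus the different-author hypothesis. The Gaussian score models and their parameters below are hypothetical calibration values invented for illustration.

```python
# Minimal sketch of a likelihood-ratio (LR) authorship-verification decision.
# Assumes similarity scores for same-author and different-author text pairs
# are roughly Gaussian; the means/sigmas are made-up calibration values,
# not taken from the paper.
from statistics import NormalDist

same_author = NormalDist(mu=0.75, sigma=0.10)   # hypothetical calibration
diff_author = NormalDist(mu=0.40, sigma=0.15)   # hypothetical calibration

def likelihood_ratio(score: float) -> float:
    """LR = P(score | same author) / P(score | different author)."""
    return same_author.pdf(score) / diff_author.pdf(score)

# LR > 1 favors "same author"; LR < 1 favors "different author".
for s in (0.35, 0.55, 0.80):
    lr = likelihood_ratio(s)
    verdict = "same author" if lr > 1 else "different author"
    print(f"score={s:.2f}  LR={lr:8.2f}  -> {verdict}")
```

In real systems the two score distributions are calibrated on held-out text pairs; the paper's finding amounts to LLM impersonations scoring low enough under the same-author hypothesis to be rejected.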
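The lexical-diversity and entropy measures cited in the last key point can be sketched just as simply. The whitespace tokenizer and the two sample texts below are simplified stand-ins, not data from the study.

```python
# Minimal sketch of the two text statistics cited above: type-token ratio
# as a proxy for lexical diversity, and Shannon entropy of the empirical
# word distribution. Sample texts are invented for illustration.
import math
from collections import Counter

def type_token_ratio(tokens: list[str]) -> float:
    """Lexical diversity: distinct words / total words."""
    return len(set(tokens)) / len(tokens)

def shannon_entropy(tokens: list[str]) -> float:
    """Entropy (bits) of the empirical word distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical samples: a repetitive human text vs. a more varied LLM one.
human = "ok ok see you there see you soon ok bye".split()
llm = "certainly, I will gladly meet you there shortly; farewell".split()

for name, toks in (("human", human), ("llm", llm)):
    print(f"{name}: TTR={type_token_ratio(toks):.2f}  "
          f"H={shannon_entropy(toks):.2f} bits")
```

Higher values on both measures mean the text spreads probability over more word types, which is the property the paper suggests dilutes an impersonated author's signature.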
Related Articles

Knowledge Governance For The Agentic Economy
Dev.to

AI server farms heat up the neighborhood for miles around, paper finds
The Register

Does the Claude “leak” actually change anything in practice?
Reddit r/LocalLLaMA

87.4% of My Agent's Decisions Run on a 0.8B Model
Dev.to

Paperclip, a Free Tool That Turns AI Agents into a Software Team
Dev.to