Temporal Flattening in LLM-Generated Text: Comparing Human and LLM Writing Trajectories
arXiv cs.CL / April 15, 2026
Key Points
- The paper investigates whether LLMs can reproduce the longitudinal “trajectory” of human writing across long time spans when deployed in stateless or history-conditioned interaction settings.
- It introduces a released longitudinal dataset covering 412 human authors and 6,086 documents from 2012–2024 across academic abstracts, blogs, and news, and generates comparable trajectories using three representative LLMs.
- Using drift and variance metrics over semantic, lexical, and cognitive-emotional representations, the study finds “temporal flattening” in LLM outputs: LLMs show less semantic and cognitive-emotional change over time than humans.
- Although LLM-generated text shows greater lexical diversity, its reduced semantic and emotional drift makes temporal-variability features highly predictive for distinguishing human from LLM trajectories (94% accuracy, 0.98 ROC-AUC).
- The authors conclude that this temporal-flattening gap persists even when models use incremental history, with implications for synthetic training data quality and longitudinal text modeling.
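The drift and variance metrics described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each document is already embedded as a vector, defines drift as the mean cosine distance between consecutive documents in a time-ordered trajectory, and variance as the mean cosine distance to the trajectory centroid. The function names and toy trajectories are hypothetical.

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity) between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def trajectory_drift(trajectory):
    """Mean cosine distance between consecutive documents in a time-ordered trajectory."""
    steps = [cosine_distance(a, b) for a, b in zip(trajectory, trajectory[1:])]
    return sum(steps) / len(steps)

def trajectory_variance(trajectory):
    """Mean cosine distance of each document to the trajectory's centroid."""
    dim = len(trajectory[0])
    centroid = [sum(v[i] for v in trajectory) / len(trajectory) for i in range(dim)]
    return sum(cosine_distance(v, centroid) for v in trajectory) / len(trajectory)

# Toy trajectories: a near-constant ("temporally flattened") one,
# versus one that rotates steadily over time (high drift).
flat = [[1.0, 0.01 * t] for t in range(10)]
drifting = [[math.cos(0.3 * t), math.sin(0.3 * t)] for t in range(10)]
```

Under these definitions, a flattened trajectory like `flat` yields near-zero drift and variance, while `drifting` accumulates both; the paper's human-vs-LLM classifier would consume exactly this kind of per-trajectory summary statistic as features.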