Identifying the Periodicity of Information in Natural Language
arXiv cs.CL / April 27, 2026
Key Points
- The paper asks whether natural language exhibits periodic patterns in its information rate, measured token by token via surprisal.
- It introduces “AutoPeriod of Surprisal (APS),” a method that applies a canonical periodicity-detection algorithm to the surprisal sequence within a single document.
- Experiments on multiple corpora suggest that a substantial portion of human language exhibits strong information periodicity.
- The study also finds additional significant periods that do not align with standard structural units of text (such as sentence boundaries), and validates them with harmonic regression.
- It concludes that observed periodicity arises from both structured linguistic factors and longer-range drivers, and discusses potential uses for detecting LLM-generated text.
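The core pipeline the key points describe, detecting dominant periods in a document's surprisal sequence and checking candidates with harmonic regression, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's APS implementation: the detection here is a plain periodogram peak search on a synthetic surprisal-like signal, and the function names (`dominant_periods`, `harmonic_r2`) and the planted 20-token period are assumptions for the demo.

```python
import numpy as np

def dominant_periods(surprisal, top_k=3):
    """Return the top_k candidate periods of a surprisal sequence,
    ranked by periodogram power (simplified stand-in for
    AutoPeriod-style detection; not the paper's APS method)."""
    x = np.asarray(surprisal, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    # Skip the zero-frequency (DC) bin, then rank bins by power.
    order = np.argsort(power[1:])[::-1][:top_k] + 1
    return [1.0 / freqs[i] for i in order]

def harmonic_r2(surprisal, period):
    """R^2 of a single-harmonic (sine + cosine) regression at the
    given period, used to check whether a candidate is supported."""
    x = np.asarray(surprisal, dtype=float)
    t = np.arange(len(x), dtype=float)
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    resid = x - X @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((x - x.mean()) ** 2)

# Toy "surprisal" sequence: a 20-token period planted in noise.
rng = np.random.default_rng(0)
t = np.arange(400)
surprisal = 5.0 + 1.5 * np.sin(2 * np.pi * t / 20) \
            + rng.normal(0.0, 0.5, t.size)

print(dominant_periods(surprisal))   # the planted period (20) should rank highly
print(harmonic_r2(surprisal, 20))    # high R^2 supports the candidate period
```

In a realistic setting the `surprisal` array would come from a language model's per-token negative log-probabilities over a document, and the harmonic-regression check would be applied to each candidate period that the detector proposes.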