How LLMs Distort Our Written Language
arXiv cs.CL · March 20, 2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper demonstrates that LLMs can alter not only voice and tone but also the intended semantic meaning of human writing.
- A user study shows that heavy LLM use increases the share of essays that remain neutral on the topic by about 70%, indicating reduced stance expression.
- The authors show that asking an LLM to revise based on human feedback can substantially change a text's meaning, even when the requested edits are limited to grammar.
- An analysis of AI-generated peer reviews at a major AI conference finds that they assign higher scores while placing less emphasis on clarity and significance, suggesting misalignment with research evaluation norms.
- The findings motivate future work on how semantic distortion from widespread AI-assisted writing will affect culture and scientific institutions.