Interpretable Stylistic Variation in Human and LLM Writing Across Genres, Models, and Decoding Strategies
arXiv cs.CL · April 16, 2026
Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper reports a large-scale study comparing stylistic variation in human writing and in the outputs of 11 LLMs across 8 genres and 4 decoding strategies, using Biber-style lexicogrammatical and functional features.
- It finds that several linguistic markers distinguishing LLM-generated text from human text are largely robust to generation conditions, such as the prompting setup and whether the model is given a human-written passage to continue in style.
- Genre shapes stylistic features more strongly than whether a text is human- or machine-written, suggesting that content domain matters more than origin.
- It observes that chat variants of models tend to cluster together in stylistic representation space.
- Finally, it concludes that model identity generally influences style more than the decoding strategy does, with limited exceptions, offering practical guidance for choosing models and decoding settings deliberately.
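The decoding strategies compared in the paper (e.g., greedy decoding, temperature sampling, nucleus/top-p sampling) differ only in how the model's next-token distribution is reshaped before a token is drawn. The following is a minimal, self-contained sketch of temperature scaling and nucleus truncation over a toy logits vector; it is an illustration of the general technique, not code from the paper, and the specific logits values are invented for the example.

```python
import math

def sample_distribution(logits, temperature=1.0, top_p=1.0):
    """Illustrative sketch: turn raw logits into a (possibly truncated)
    sampling distribution, as in temperature / nucleus decoding."""
    # Temperature scaling: lower temperature sharpens the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p) truncation: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalize over that set.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Toy logits over a 4-token vocabulary (values are made up).
logits = [2.0, 1.0, 0.1, -1.0]
greedy_like = sample_distribution(logits, temperature=0.2, top_p=1.0)
nucleus = sample_distribution(logits, temperature=1.0, top_p=0.6)
```

With a low temperature the distribution concentrates almost all mass on the top token (approaching greedy decoding), while a small `top_p` simply cuts off the low-probability tail; the paper's finding is that, stylistically, varying these knobs matters less than which model produced the logits in the first place.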