Beyond Via: Analysis and Estimation of the Impact of Large Language Models in Academic Papers

arXiv cs.CL / March 27, 2026


Key Points

  • The study analyzes arXiv papers and finds measurable shifts in academic writing that appear consistent with LLM influence, including more frequent use of “beyond” and “via” in titles and less frequent use of “the” and “of” in abstracts.
  • It reports that current text classifiers have difficulty identifying which specific LLM produced a text, suggesting limitations in multi-class attribution despite general LLM-driven stylistic signals.
  • The authors show that different LLMs (and even prompt variations) lead to evolving word-usage patterns over time, making the effect on writing both heterogeneous and dynamic.
  • Using a direct, highly interpretable linear approach that accounts for model and prompt differences, the paper provides quantitative estimates of these impacts rather than relying only on qualitative observations.

Abstract

Through an analysis of arXiv papers, we report several shifts in word usage that are likely driven by large language models (LLMs) but have not previously received sufficient attention, such as the increased frequency of "beyond" and "via" in titles and the decreased frequency of "the" and "of" in abstracts. Experiments show that, owing to the similarities among different LLMs, current classifiers struggle to accurately determine which specific model generated a given text in multi-class classification tasks. Meanwhile, variations across LLMs also produce evolving patterns of word usage in academic papers. By adopting a direct and highly interpretable linear approach that accounts for differences between models and prompts, we quantitatively assess these effects and show that real-world LLM usage is heterogeneous and dynamic.
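The paper does not spell out its estimator in this summary, but one common "direct and highly interpretable linear approach" to this kind of question treats the observed corpus as a mixture of a human-written baseline and LLM-generated text, then solves for the mixing weight by least squares. The sketch below illustrates that idea with entirely made-up word frequencies (the words are from the paper; the numbers, and the mixture formulation itself, are illustrative assumptions, not the authors' method or data):

```python
import numpy as np

# Hypothetical per-word frequencies (per 1,000 tokens) for four words
# the paper highlights. All numbers below are invented for illustration.
words = ["the", "of", "via", "beyond"]
f_pre = np.array([55.0, 30.0, 0.40, 0.10])  # assumed pre-LLM human baseline
f_llm = np.array([40.0, 24.0, 2.00, 0.80])  # assumed LLM output profile
f_obs = np.array([50.5, 28.2, 0.88, 0.31])  # assumed observed mixed corpus

# Linear mixture model: f_obs ≈ (1 - a) * f_pre + a * f_llm,
# i.e. f_obs - f_pre ≈ a * (f_llm - f_pre).
# Closed-form least-squares estimate of the LLM share a:
delta = f_llm - f_pre
a = float(delta @ (f_obs - f_pre)) / float(delta @ delta)
print(round(a, 3))  # → 0.3 for these invented numbers
```

The appeal of a linear formulation like this is interpretability: the coefficient `a` reads directly as an estimated fraction of LLM-influenced text, and separate profiles `f_llm` per model or per prompt would let the fit account for the model- and prompt-level heterogeneity the paper emphasizes.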