Wikipedia cracks down on the use of AI in article writing

TechCrunch / 3/27/2026


Key Points

  • Wikipedia has updated its policy to prohibit editors from using large language models (LLMs) to generate or rewrite Wikipedia article content.
  • The change clarifies earlier guidance that was more permissive (e.g., discouraging generation of new articles from scratch), making the rule against AI-written content more explicit.
  • The ban does not eliminate AI from the process entirely: editors may still use LLMs to suggest basic copyedits to their own writing, provided any incorporated changes are reviewed by a human and the LLM introduces no content of its own.
  • Wikipedia cautions that LLMs can alter meaning or add unsupported claims beyond the requested edits, even when used for editing assistance.
  • The policy was approved by a community vote reported as 40–2 in favor, reflecting strong support among editors despite ongoing controversy around AI use.

As AI makes inroads into the worlds of editorial and media, websites are scrambling to establish ground rules for its usage. This week, Wikipedia banned the use of AI-generated text by its editors, though it stopped short of banning AI outright from the site’s editorial processes.

In a recent policy change, the site now states that “the use of LLMs to generate or rewrite article content is prohibited.” This new language updates and clarifies previous, vaguer language that stated that LLMs “should not be used to generate new Wikipedia articles from scratch.”

AI in Wikipedia articles has become a contentious issue among the site’s sprawling, volunteer-driven community of editors. 404 Media reports that the new policy, which was put to a vote by the site’s editors, passed overwhelmingly, 40 votes to 2.

That said, the new policy still makes room for continued AI use in some editorial processes.

“Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the new policy states. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”