OpenAI talks about not talking about goblins

The Verge / 4/30/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • A Wired report said OpenAI’s coding model had been instructed to “never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures,” prompting follow-up discussion.
  • OpenAI responded with a blog post explaining that references to goblins and similar creatures were “strange habits” that emerged from the way its models were trained.
  • The company said the issue became noticeable starting with its GPT-5.1 model, particularly when using the “Nerdy” personality preset.
  • OpenAI indicated that the behavior continued to intensify in later model versions.
  • The episode highlights how unintended linguistic and metaphorical patterns can surface in LLM behavior through training data and persona settings.

OpenAI is opening up about its goblin problem. After a report from Wired revealed instructions to OpenAI's coding model to "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures," the AI startup published an explanation on its website, calling references to the creatures a "strange habit" its models developed as a result of their training.

As outlined in the blog post, OpenAI began noticing metaphors referencing goblins and other creatures starting with its GPT-5.1 model, specifically when using the "Nerdy" personality option. OpenAI says the problem continued to worsen with subsequent model re …

Read the full story at The Verge.