ChatGPT's goblin obsession may be hilarious, but it points to a deeper problem in AI training

THE DECODER / 5/1/2026


Key Points

  • ChatGPT was observed inserting goblins, gremlins, and other mythical creatures into its responses at an unexpectedly high rate.
  • OpenAI attributes the behavior to a faulty or poorly tuned reward signal during training, which can create unintended “side effects.”
  • The incident is presented as a cautionary example that small issues in training incentives can strongly shape model outputs in surprising ways.
  • The article frames the “goblin obsession” as humorous on the surface but indicative of deeper challenges in aligning AI training objectives with desired behavior.

A faulty reward signal during training caused ChatGPT models to start dropping goblins, gremlins, and other mythical creatures into their answers at a surprising rate. OpenAI says it's an example of how small, poorly tuned training incentives can produce unexpected side effects.
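
To make the failure mode concrete, here is a minimal toy sketch in Python. It is not based on OpenAI's actual training pipeline; the reward function, the `WHIMSY_BONUS` term, and the best-of-n selection step are all hypothetical, chosen only to show how a slightly mis-tuned reward term can systematically favor an irrelevant quirk.

```python
import random

# Toy illustration (not OpenAI's actual setup): a reward signal that
# unintentionally gives a small bonus to responses containing a whimsical word.
WHIMSY_BONUS = 0.2  # hypothetical mis-tuned term in the reward


def reward(response: str) -> float:
    base = 1.0 if "helpful" in response else 0.0            # intended objective
    bonus = WHIMSY_BONUS if "goblin" in response else 0.0   # unintended side effect
    return base + bonus


candidates = [
    "a helpful answer",
    "a helpful answer featuring a goblin",
]

# Best-of-n style selection: keep the higher-reward candidate. The tiny bonus
# makes the goblin version win every time, so the quirk gets reinforced even
# though both answers are equally helpful on the intended objective.
chosen = max(candidates, key=reward)
print(chosen)  # -> "a helpful answer featuring a goblin"
```

The point of the sketch is the asymmetry: the bonus is small, but because selection is repeated over many training examples, even a marginal and unintended preference can end up dominating the model's observable behavior.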
