South Africa yanks AI policy after AI-assisted drafting invents citations

The Register / 4/28/2026


Key Points

  • South Africa has withdrawn an AI policy document after an AI-assisted drafting process produced references that were not actually valid.
  • The incident highlights the risk of hallucinated or fabricated citations when generative AI is used to create or support official documents.
  • Authorities are effectively reversing or pausing the policy effort, underscoring the need for human verification and audit trails in AI-assisted governance.
  • The episode is sparking broader debate about how governments should regulate AI use, especially for tasks involving legal or factual claims.


Eish shame man! Maybe you shouldn't ask AI to set the rules for AI use?

Mon 27 Apr 2026 // 17:24 UTC

South Africa has pulled its draft national AI policy after discovering that it was citing sources that exist only in the fertile imagination of a chatbot.

The country's Department of Communications and Digital Technologies confirmed over the weekend that the draft, which had already cleared Cabinet and was out for public comment, included "various fictitious sources" in its reference list. 

Communications minister Solly Malatsi said the department rechecked the draft after reports flagged fake references and found some citations were indeed made up, prompting its withdrawal. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," he said in a post on X, adding that AI-generated citations appear to have slipped in without anyone checking them.

The document has now been yanked, and Malatsi said that those involved in drafting and sign-off can expect "consequence management." 

"This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility," Malatsi said. "I want to reassure the country that we are treating this matter with the gravity it deserves."

The now-defunct policy was sold as a forward-looking framework, full of talk about "intergenerational equity" and AI benefiting current and future generations. It's now best known for a references section that doesn't hold up.

Local outlet News24 reported that at least six references in the document were fabricated, with experts saying the errors matched classic AI hallucinations: convincing on the surface, entirely made up underneath.

Following the publication of News24's report, Khusela Sangoni-Diko, chair of the parliamentary portfolio committee overseeing the department, publicly told Malatsi to pull the document before it caused further embarrassment. She also suggested that the redraft skip "using ChatGPT this time," adding that the government should stop looking for a scapegoat, or "scape-bot."

All in all, it's a great look for a government trying to set the rules on AI when its own policy can't clear a basic fact check. And it's not exactly a one-off either. As The Register reported last year, Deloitte had to help clean up a government report in Australia after AI-generated citations and even a made-up court quote slipped through, a reminder that letting the machine do the writing is one thing, checking it is another.

South Africa has now learned that lesson the hard way. When your national AI policy cannot tell real sources from imaginary ones, it is probably not ready to regulate anyone else's machines. ®
