South Africa has withdrawn its draft national AI policy after parts of it were found to be AI-generated, citing fictional sources.
Communications minister Solly Malatsi withdrew the draft policy after finding that at least six of its 67 academic citations were AI-generated hallucinations, referencing journal articles that do not exist.
“The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened,” Mr Malatsi said.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy,” he wrote in a post on X.
The draft policy was released for public comment and sought to position the country as a leader in AI innovation while addressing the ethical, social, and economic challenges of AI use.
It laid out plans to establish new institutions in the country to oversee AI use, including a national AI commission, an AI ethics board, and an AI regulatory authority.

The draft rules also outlined plans for tax breaks, grants, and subsidies to encourage private-sector collaboration in building AI infrastructure in the country.
The policy is expected to be revised before being reissued for public comment.
The issue came to light when South Africa’s News24 found that at least six of the document’s 67 academic citations referred to articles that did not exist, although the journals themselves were real.
Editors of the journals in question, including the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy, independently confirmed that the cited articles were fake.
The communications minister said there would be consequences for those responsible for drafting the policy.
“This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility,” he wrote on X.
The episode highlights the growing problem of academics and administrators using generative AI for research and drafting without verifying its output.
A study published in the journal Nature found that over 2.5 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, compared to just 0.3 per cent in 2024.
That amounts to more than 110,000 papers published in 2025 that contain invalid references “hallucinated” by AI.
Hallucinations are confident but fabricated outputs that AI models produce when their training data is thin in a given domain.
Large language models like OpenAI’s ChatGPT and Google’s Gemini are designed to predict the most likely next word in a sequence, not to check for truth.
So when data on a topic is sparse, the model fills the gaps with plausible-sounding but false information.
Asked for sources, a model uses its training data to predict what a citation should look like, producing references that sound authoritative but do not exist.
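To illustrate the mechanism in miniature, the toy bigram model below (a deliberately crude Python sketch, nothing like a production LLM in scale) generates fluent-sounding text purely from word-to-word statistics, with no notion of whether the output is true:

```python
import random
from collections import defaultdict

# Toy bigram "language model": learns which word tends to follow which.
# It captures fluency statistics only; it has no concept of factual truth.
corpus = (
    "the policy cites the journal of ethics and the journal of "
    "philosophy and the policy cites the national ai commission"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Chain statistically likely next words from a start word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(random.choice(candidates))
    return " ".join(words)

# The output reads smoothly but asserts nothing that was checked for truth.
print(generate("the"))
```

A real LLM does the same thing at vastly greater scale, over tokens rather than whole words, which is why its fabrications read so convincingly.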
This points to the growing need for careful human oversight of AI responses, especially when the technology is used by academics and public authorities.
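Part of that oversight can be automated. As a minimal sketch (assuming, purely for illustration, that citations carry DOIs), the Python snippet below checks each DOI against the public Crossref REST API, which returns a 404 for works it has no record of:

```python
import urllib.error
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"  # public Crossref REST API

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI.

    A 404 means Crossref knows no such work -- a strong (though not
    conclusive) signal that a citation may be fabricated.
    """
    url = CROSSREF_API + urllib.parse.quote(doi)
    request = urllib.request.Request(
        url, headers={"User-Agent": "citation-checker/0.1"}
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # rate limits or server errors should not be read as "fake"

if __name__ == "__main__":
    # One real DOI and one made-up DOI, for illustration only.
    for doi in ("10.1038/nature14539", "10.9999/fake.citation.2025"):
        print(doi, "->", "found" if doi_exists(doi) else "no record")
```

A check like this catches nonexistent DOIs but not mismatched titles or authors, so it complements rather than replaces human review.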