Business Report

A Development-First Critique of South Africa's Withdrawn AI Policy

Dr Alexandre Essome

The withdrawal of Gazette No. 54477 is, in the final analysis, a moment of institutional accountability that South Africa should take seriously and move on from quickly, writes Dr Alexandre Essome.

A Necessary Withdrawal, and an Unfinished Conversation

On April 26, 2026, Minister Solly Malatsi announced the withdrawal of the Draft National Artificial Intelligence Policy published for public comment in Government Gazette No. 54477.

The stated reason was unambiguous: the document contained fictitious sources in its reference list, the most plausible explanation being that AI-generated citations were incorporated without proper verification. The Minister was right to act decisively. A national AI policy whose own evidentiary foundation is compromised by precisely the kind of AI governance failure it was meant to address is not merely an embarrassment. It is a structural contradiction that would have undermined the document's authority from the moment of its enactment. The Centre for Artificial Intelligence and Sustainable Development (CAISD) notes, with candour, that our own review of Gazette No. 54477 had flagged irregularities in portions of the reference architecture. We therefore welcome the withdrawal of the draft policy and commend the Ministry for acting with urgency.

The withdrawal of this flawed draft should not, however, silence the urgent national conversation it began. South Africa's need for a credible, development-oriented AI governance framework is not diminished by a single policy failure. If anything, the manner of that failure, a government document meant to regulate AI corrupted by unverified AI outputs, illustrates with painful precision why the governance imperatives identified in CAISD's advisory submission remain pressing. We submit this analysis as a contribution to the redrafting process and direct it to the substantive policy architecture that the next draft must contain.

The Developmental Imperative Cannot Wait

The conceptual foundation of Gazette No. 54477, notwithstanding its referencing failures, contained genuine insight. Its philosophical grounding in Ubuntu, its insistence that AI must serve the community rather than merely maximise corporate efficiency, and its proposal for an AI Insurance Superfund modelled on the Road Accident Fund represented distinctive contributions to global AI governance discourse. These ideas deserve to be rescued from the wreckage of a poorly quality-assured drafting process and carried forward into the revised document with greater rigour and stronger enforcement architecture.

The central argument of CAISD's advisory position is structural rather than rhetorical. For South Africa, AI governance designed primarily as a risk-management exercise is a strategic error. The OECD AI Principles, updated by the OECD Ministerial Council in May 2024, are explicit on this point: governments must invest in AI for public benefit while building governance environments that ensure equitable distribution of AI's gains and adequate protection of citizens from its harms (OECD, 2024). A development-first framework does not abandon the governance of risk; it calibrates regulatory strictness to the nature and severity of potential harm, rather than applying precautionary restrictions that impose compliance costs on local innovators without protecting the citizens most exposed to AI-driven disruption.

The AI system that misdiagnoses a patient in a public hospital, the algorithm that denies a social grant application processed by SASSA, the automated credit-scoring model that reproduces apartheid-era spatial inequality in lending decisions are not abstract governance concerns. They are the specific harms that a development-first framework must anticipate and prevent, while simultaneously deploying AI in precisely these same domains to improve diagnostic accuracy, reduce administrative backlogs, and expand financial inclusion. The revised policy must be architecturally equipped to do both.

CAISD identified ten discrete governance gaps in Gazette No. 54477 relative to the standards established by verified international frameworks. Each gap is referenced below against sources that have been confirmed as genuine.

Robust Data Governance Mechanisms

The Draft National AI Policy should place robust data governance at the centre of its implementation architecture, as trusted AI systems depend fundamentally on the quality, integrity, fairness, and lawful use of data. In this regard, the policy should expressly strengthen bias-mitigation mechanisms through sustained investment in locally relevant, representative datasets that reflect South Africa’s demographic, linguistic, and socio-economic realities. Equally, it should require explainability standards for high-risk AI applications to ensure that automated decisions affecting citizens can be understood, interrogated, and challenged where necessary. These measures must be firmly aligned with the Constitution, particularly the rights to equality, dignity, just administrative action, and privacy, while ensuring full compliance with the Protection of Personal Information Act (POPIA). A strong data governance framework will not only protect the public interest but also enhance trust, legitimacy, and long-term adoption of AI across both the public and private sectors.

A Dedicated Generative AI Chapter

The withdrawn draft treated all AI as a single regulatory category. Singapore's Infocomm Media Development Authority, in collaboration with the AI Verify Foundation, finalised the Model AI Governance Framework for Generative AI in May 2024, establishing nine governance dimensions specifically designed for large language models, deepfakes, and synthetic content (IMDA & AI Verify Foundation, 2024). The revised South African policy requires a dedicated generative AI chapter with mandatory transparency disclosures and content provenance requirements, particularly urgent given the country's multilingual digital environment.

Algorithmic Impact Assessments

The EU AI Act, formally adopted in 2024, requires fundamental rights impact assessments before high-risk AI systems are deployed (European Parliament, 2024). Canada's Directive on Automated Decision-Making requires equivalent assessments for all federal government automated decision systems. South Africa's revised policy must mandate pre-deployment assessments for public sector AI, beginning with SASSA's grant administration and the South African Police Service's use of predictive analytics.

Right to challenge AI decisions

The 2024 OECD update to Principle 1.3 on Transparency and Explainability reframed the governance standard from enabling individuals to understand AI decisions to enabling them to actively challenge those decisions (OECD, 2024). This shift is constitutionally grounded in South Africa in Sections 33 and 34 of the Constitution, covering just administrative action and access to courts, respectively. A statutory right to contest AI-driven decisions, routed through the proposed AI Ombudsperson, must appear in the revised draft.

AI sovereignty and sovereign compute

The draft's aspiration for regional AI factories requires structural enforcement. Without defined domestic ownership thresholds, minimum compute capacity targets, and prohibitions against foreign hyperscalers operating under local branding, these factories risk becoming another iteration of structural dependency dressed in developmental language. The revised policy requires a sovereign AI capability roadmap with measurable targets, including a strategy for accessing advanced semiconductors amid tightening global export controls.

Green energy co-investment

OECD Principle 1.1, in its 2024 formulation, explicitly addresses environmental sustainability as a core dimension of trustworthy AI, acknowledging the significant and growing energy footprint of large-scale AI systems (OECD, 2024). The EU AI Act requires energy consumption disclosure for large AI models. South Africa's revised policy must mandate binding green energy co-investment requirements for all AI factories and data centres, making AI infrastructure development a lever for renewable energy expansion rather than an additional burden on a coal-dependent grid during the Just Energy Transition.

Remaining gaps

Five further governance deficits require attention: a mandatory AI incident reporting regime modelled on POPIA's breach notification framework; a supply chain accountability map specifying minimum duties across the AI development and deployment chain; a SANAS-accredited conformity assessment pathway for high-risk AI systems; a National AI Procurement Policy governing government AI tenders; and a formal SME support regime with differential compliance timelines to prevent regulatory architecture from entrenching the market dominance of large foreign technology firms at the expense of local innovators.

The Human Imperative: Building an AI-Productive Nation

Beyond institutional architecture, the most consequential long-term investment South Africa can make is in the human capacity to produce, govern, and critically interrogate AI systems.

The withdrawn draft's treatment of talent development was its most substantively developed thematic area, and it is the dimension most worth preserving and strengthening in the revised document. The country has more than twenty million people under the age of thirty-five, a youth unemployment rate above thirty percent, and a structural mismatch between the skills the economy currently rewards and those an AI-transformed economy will require. The distance between a nation of passive AI consumers and one of active AI producers is, in this context, a development variable of first-order importance.

The revised policy must move beyond aspirational language on talent development to specify a National AI Skills Framework with competency standards by schooling phase, funded youth AI innovation programmes with measurable targets, and a legislated social dialogue mechanism, housed within NEDLAC, for managing AI-driven labour market disruption. The OECD (2024) is clear that fair labour market transitions require structured social dialogue, reskilling programmes, and social protection for displaced workers; these are not peripheral concerns in a country with South Africa's employment structure. They are the conditions under which an AI governance framework can credibly claim to serve the people it governs.

The withdrawal of Gazette No. 54477 is, in the final analysis, a moment of institutional accountability that South Africa should take seriously and move on from quickly.

* Dr Essome is a trained journalist holding a PhD in Operations Management from the University. He founded the Sahara Foundation, a US-based 501(c) non-profit, and is also Co-Chair of the Centre for Artificial Intelligence and Sustainable Development (CAISD).

** The views expressed do not necessarily reflect the views of IOL or Independent Media.