Business Report

Draft AI Policy: Why South Africa must fund local innovators to compete on the global stage

Gcwalisile Khanyile

Solly Malatsi, Minister of Communications and Digital Technologies, whose department is the custodian of the Draft National Policy on AI.

Image: Supplied

A call has been made for South Africa to create a fund that would finance local innovators building tech to compete with global Artificial Intelligence (AI). 

This comes after the Department of Communications and Digital Technologies published the Draft South Africa Artificial Intelligence (AI) Policy for public comment, with submissions closing in mid-June 2026. 

The Draft Policy proposes the formation of a new AI governance ecosystem, including a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund designed to compensate individuals harmed by AI systems in cases where liability is unclear. 

Lars Gumede, a tech expert, AI systems developer, and founder of NowNow, said that the policy does not seek to promote the development of local tech that could compete with and catch up to foreign tech; instead, it focuses on partnering with and using global tech. 

He added that top countries are funding innovators building sovereign, local AI systems to reduce reliance on foreign AI. 

Gumede stated that funding local innovators must be a cornerstone of the draft policy, with partnerships designed to temporarily support and build local expertise.

Explaining how the government should implement this, he said: “Get 5-10 of the top AI minds in South Africa to determine a criterion that will determine who gets funded. Then use those AI start-ups financed through this fund for local initiatives and AI integration. Such a thing can be done immediately.”

Gumede expressed concern that the draft policy is overly complex, with too many components, making it unrealistic for proper implementation.

“The policy calls for the creation of so many offices, partnerships, and aims to ‘foster and support’ so many things that the concerned departments will be juggling a thousand plates each. Instead, cut down to three to five clearly defined initiatives and responsibilities per department. Make each plan clear, funded, and transparent,” he said.

Gumede stated that each plan must be concrete, not vague. “Instead of ‘fostering inclusive development of AI’, each department should have a clear and measurable task.”

Gumede stated that one of the major issues is that the full implementation of the policy is scheduled for 2028. 

“It needs to be implemented now. If the implementation only starts in 2028, then the first results and setup may only be in 2029. Meanwhile, globally, the top countries have already started implementing.” 

Gumede urged the government to start implementing crucial basic initiatives, such as funding local AI startups now, as the country cannot afford to wait until 2028.

Professor Donrich Thaldar and Dr Siddharthiya Pillay, legal experts in health, technology, and AI, from the University of KwaZulu-Natal’s School of Law, said the policy marks a significant milestone in the country’s AI governance journey.

The experts said the draft policy identifies healthcare as one of four critical sectors for AI implementation, alongside education, agriculture, and public administration.

“The publication of South Africa’s Draft National AI Policy is a welcome and overdue development. From a healthcare perspective, the draft policy gets several things right. It identifies healthcare as a critical sector, adopts an insurance superfund for AI harm, addresses bias and workforce concerns, and promotes innovation through regulatory sandboxes and centres of excellence. These are significant advances, and where they align with proposals made by the academic community, this should be acknowledged,” Thaldar and Pillay stated.

However, the experts flagged four aspects that require ‘substantive improvements’: the AI Insurance Superfund, the blanket Human-in-the-loop (HITL) requirement, moving beyond POPIA to address health data ownership, and amending the Medicines and Related Substances Act to widen the definition of ‘medical device’ to encompass AI software used in healthcare settings. 

On liability, the draft policy proposes an AI Insurance Superfund modelled on the Road Accident Fund (RAF) to compensate individuals harmed by AI-driven outcomes.

The experts described it as a significant step in the right direction, but cautioned that a compensation fund, standing alone, is insufficient. 

“In a comprehensive analysis of the core legal concepts relevant to liability for AI harm in healthcare, Bottomley and Thaldar demonstrated that traditional fault-based liability, product liability, and strict liability each face fundamental difficulties when applied to AI systems. Black-box algorithms make it virtually impossible to establish causation and fault; the dynamic learning nature of AI deviates from static product definitions, and strict liability risks subjecting stakeholders to material burdens without a fair opportunity to avoid them,” the experts stated.

Thaldar and Pillay recommended that the AI Insurance Superfund be accompanied by a reconciliation-based dispute resolution institution; without it, they argued, the fund is a blunt instrument that lacks capacity for systemic learning. 

The experts stated that the blanket Human-in-the-loop (HITL) requirement must be tempered by recognition that in low-resource healthcare settings, AI fills gaps rather than complements existing expertise, and alternative oversight mechanisms, such as pre-deployment certification, are more appropriate. 

“AI in rural and remote healthcare settings in South Africa aims to provide diagnostic and treatment support where human specialists are unavailable. However, requiring that a medical expert be in the loop at all times defeats the purpose of deploying AI in these settings,” the experts said.

Thaldar and Pillay stated that the draft policy’s exclusive reliance on the Protection of Personal Information Act (POPIA) is insufficient; it must move beyond POPIA to explicitly address health data ownership and encourage research institutions to claim ownership of newly generated data instances to provide legal certainty and enable innovation. 

“The draft policy should recommend amendment of the Medicines and Related Substances Act to widen the definition of ‘medical device’ to encompass AI software used in healthcare settings. This would build on the South African Health Products Regulatory Authority’s (SAHPRA) recent regulatory guidance for Artificial Intelligence and Machine Learning (AI/ML)-enabled medical devices,” Thaldar and Pillay said.

On the healthcare workforce, the draft policy extensively addresses education and reskilling. It provides for the integration of AI education from foundational to tertiary levels, the establishment of specialised training programmes, and the creation of reskilling programmes for workers in AI-impacted sectors, the experts said.

Thaldar and Pillay said that on innovation, the draft policy provides for AI Centres of Excellence, accelerators, and regulatory sandboxes, as well as support for startups through grants and compute credits. 

“While these are positive steps, the draft policy does not specifically adopt a proposal for a public-sector health data institution working in tandem with the patient electronic health record system. This is a missed opportunity, as access to high-quality, representative health data remains a critical bottleneck for AI innovation in healthcare,” the experts stated. 

They added that AI health interventions in developing countries frequently fail to scale after donor funding ends because of governance gaps; they suggested that a reconciliation commission would provide an institutional anchor that could support the transition from pilot to scale by offering a predictable, learning-oriented dispute resolution pathway.

“The draft policy should, therefore, be amended to provide not only for the AI Insurance Superfund, but also for a sui generis dispute resolution institution that administers the fund, resolves disputes through reconciliation rather than litigation, and develops evidence-based guidelines. Without this institutional counterpart, the insurance fund risks becoming an administratively burdensome mechanism with no capacity for systemic learning,” Thaldar and Pillay stated.

gcwalisile.khanyile@inl.co.za