
4 May 2026
The Withdrawal of South Africa’s Draft Artificial Intelligence Policy: An Opportunity?
On 10 April 2026, South Africa's Department of Communications and Digital Technologies published the Draft South Africa National Artificial Intelligence Policy for comment. It was heralded as a significant milestone in establishing a formal Artificial Intelligence regulatory framework for the country, one that would see South Africa join the few other African countries, such as Egypt, Rwanda, Mauritius and Zambia, that have adopted national policies on AI. This step was also in line with the call to action to member states of the African Union in terms of its Continental AI Strategy, published in July 2024.
However, barely two weeks after publication, on 26 April 2026, the South African government withdrew the Draft Policy after it emerged that at least 10% of the references to academic sources in the document's reference list were fictitious. This suggested that the Draft Policy was generated by the very AI tools it sought to govern, but without proper verification. The principle of "vigilant human oversight" that the Draft Policy championed was not applied by the drafters.
This demonstrates the point: while AI offers opportunities for innovation and growth, it also exposes grey areas in the law and introduces risks such as misinformation, privacy violations, copyright infringement and increased criminal activity. The Draft Policy sought to address these very challenges but exhibited shortcomings, even before the AI hallucination scandal emerged.
The withdrawal of the Draft Policy now presents an opportunity for the South African government to revisit and refine the content of the policy:
- First, government support for and incentivisation of AI development is key to positioning the country as an innovation hub and to driving economic growth in South Africa. The revised AI policy should be more responsive to this opportunity.
- Second, the still-to-be-developed regulatory framework is key to success in the responsible adoption of AI in South Africa. The existing legislative framework should be leveraged, where possible, to avoid multiple unnecessary or duplicate sources of regulation. New regulatory requirements must be technically informed, clear and enforceable, without over-regulation.
- Third, governance gaps are evident. Robust verification of AI-generated information is essential, and a baseline requirement of good governance. The general public must be protected against the irresponsible use of AI.
- Fourth, the Draft Policy’s outcomes depended on infrastructure that requires significant investment. Public-private partnerships offer a viable solution to meeting this need. However, public procurement processes will need to be streamlined or managed carefully, if South Africa is to respond at the required speed.
- Fifth, businesses operating across borders must prepare for complexity and may have to adapt, as different countries may have different regulatory requirements. AI systems may need to be adapted to meet the South African regulatory requirements.
- Sixth, employment law considerations may influence how and where organisations deploy AI. South Africa’s pro-employee policy stance will need to be factored into how AI is introduced into a business and how potential job losses are managed. Employees will need to be reskilled as part of this process.
Overview of the Draft Policy
The Draft Policy anchored AI governance in the Constitution including the Bill of Rights. It aimed to ensure that AI systems did not infringe rights to equality, dignity, privacy, freedom of expression, administrative justice, access to information, or fair labour practices. It also recognized AI as a powerful tool to improve service delivery and support economic and social development.
The Draft Policy, once adopted, was also intended to give effect to existing South African laws, including the Protection of Personal Information Act (POPIA), the Promotion of Access to Information Act (PAIA), the Electronic Communications and Transactions Act, the Electronic Communications Act, the Technology Innovation Act, the Cybercrimes Act, and intellectual property legislation.
Its drafters drew inspiration from global policy and regulatory frameworks such as the European Union’s (EU) AI Act, the United Nations Educational, Scientific and Cultural Organization (UNESCO) Ten Core Principles, and the Organization for Economic Co-operation and Development (OECD) Recommendation of the Council on AI. Distinctively, however, the Draft Policy was grounded in the African context through its acknowledgment of Ubuntu, putting people at the centre of AI and ensuring that humans remain responsible for decisions made with AI support.
The Draft Policy’s intended outcomes included:
- increased uptake of AI technologies across sectors;
- enhanced institutional capacity for AI governance and regulation;
- greater local AI innovation;
- equitable access to AI; and
- stronger national positioning in the global AI discourse.
To achieve these outcomes, the Draft Policy was structured around six pillars:
- Capacity and talent development (building AI skills and ethics training);
- AI for inclusive growth and job creation (supporting startups, small businesses, and industrial development);
- Responsible governance (risk-based safeguards and clear accountability);
- Ethical and inclusive AI (constitutional values and redress mechanisms);
- Cultural preservation and international integration (protecting indigenous heritage while aligning with global standards); and
- Human-centred deployment (keeping humans in control and ensuring AI decisions can be explained).
These pillars were underpinned by six principles for responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability.
The Draft Policy identified key governance challenges, including algorithmic bias, data governance, cybersecurity risks, job losses due to automation, and misinformation. The application of the Draft Policy would likely have involved organisations deploying AI in South Africa:
- applying sector-specific solutions;
- ensuring their AI systems can be audited and AI decisions sufficiently explained;
- assigning clear accountability to a designated person or entity;
- maintaining robust data governance, privacy and cybersecurity controls; and
- ensuring meaningful human oversight over critical AI decision-making, including the ability to intervene in or override automated outcomes, where necessary.
The detail of what this would entail is not yet clear and will need to be provided in the future regulatory framework. What counts as a "sufficient" explanation of how AI decisions are taken? What are the audit requirements? Will data governance, privacy and cybersecurity requirements go beyond what already exists in South African law? How is responsibility shared among the different parties in the AI supply chain? When will humans be required to intervene in AI decision-making? Will parties be able to contract out of liability? Will strict liability apply? What reporting obligations will apply? What penalties will follow non-compliance?
Points to ponder when publishing a revised draft AI policy for comment
Innovation as a Catalyst for Economic Growth
Although the Draft Policy expressly encouraged AI adoption and recognized its potential to drive innovation and economic growth, it was likely to disappoint AI developers.
The policy spoke largely of AI as a technology to be governed, rather than as an industry to be actively enabled. South African companies are already developing sophisticated AI systems, including large language models, and are competing in global markets beyond the traditional centres. Indeed, according to the research conducted for the AU Continental AI Strategy, South Africa is home to the largest number of African organisations involved in AI innovation (with start-ups making up 41% of the African AI industry). For these innovators, the Draft Policy offered limited forward-looking guidance on how innovation at the model, compute and infrastructure layers would be supported, scaled or incentivised.
The emphasis on risk management and safeguards, without a commensurate articulation of how AI innovation will be fostered and protected, risked reinforcing a perception of AI primarily as a regulatory concern, rather than as a high-growth commercial opportunity. A clearer, more affirmative policy signal recognizing AI as both a strategic opportunity and a domestic industrial capability would have gone some way toward addressing this imbalance.
Potential governance challenges, over-regulation and enforcement gaps
The Draft Policy rightly emphasized accountability throughout the AI lifecycle and the need to explain how AI systems reached their decisions, especially for high-risk and public sector uses.
This policy position will require the development of a regulatory framework that matches the practical complexity of AI, particularly given constantly evolving models, proprietary technology, and systems that operate across borders. A balance will need to be struck between under- and over-regulation. From a regulatory design standpoint, the Draft Policy aimed for AI regulation to be adaptable and flexible, providing for future review as technologies evolve while allowing room for ethical development and innovation. The goal was to enable AI technologies to develop in a less restrictive environment, potentially making South Africa an attractive destination for AI startups and investment. This approach would have provided regulators with real-time insights into emerging AI applications, helping them adapt regulations based on practical experience and observed risks.
However, there are trade-offs. This flexibility might introduce ethical and security risks, particularly if emerging issues are not identified and addressed promptly or regulated adequately. It could also create inconsistencies in AI standards across sectors, potentially reducing public trust and complicating oversight, or opening the door to over-regulation.
From an enforcement perspective, the Draft Policy assigned a central coordinating role to the Department of Communications and Digital Technologies (DCDT). In line with government approaches in certain other sectors, it envisaged the establishment of:
- a National AI Commission;
- an AI Ethics Board;
- an AI Regulatory Authority; and
- an AI Ombudsperson.
Although a regulatory framework made up of such bodies is sensible in principle, much detail is still to be developed on how they will monitor, audit, and enforce compliance at scale.
The Draft Policy also made provision for an “AI insurance superfund” to compensate affected individuals in ambiguous liability scenarios. It envisaged enabling individuals to challenge AI-driven decisions and seek redress. It contemplated a code of conduct for AI professionals.
Key questions remain unanswered: Who qualifies as an “AI professional”? What constitutes non-compliance, and how is it measured? Who is an “affected person”? How will the adverse impact be established and remedied? How would compensation be quantified? Who would fund this “superfund”? Presumably, the taxpayer, but on what basis, given already strained public finances?
These questions, and others, should be answered in the regulatory framework that will follow in due course. It is imperative that the regulatory framework is drafted with the assistance of both AI and legal experts. Any new regulatory requirements must be clear and impose balanced standards on all participants in the AI ecosystem, including the government itself. Where possible, the existing regulatory infrastructure should be harnessed – for example, South Africa's implementation of POPIA and the data privacy framework has developed over time and should be leveraged.
Commercial considerations and potential cross-jurisdictional impact
Businesses deploying AI in South Africa under the Draft Policy would have needed to ensure their AI systems had auditable decision-making processes and explainable outputs. This requires an understanding of how these systems function, how data is processed, and how errors or bias are detected and corrected.
While the Draft Policy drew from the EU AI Act, UNESCO’s Ten Core Principles, and the OECD Recommendation of the Council on AI, creating potential for regulatory alignment, it also introduced the risk of implementation divergence for corporations operating across multiple countries. These corporations will likely need to assess whether their globally deployed AI systems meet South African requirements or whether localisation is necessary, with the latter being likely.
The impact of South Africa's pro-employee labour laws
Corporations will likely also have to consider the employment implications of their AI systems.
The Draft Policy identified workforce displacement as a key risk factor, with provisions aimed at mitigating AI-driven job losses through upskilling and reskilling initiatives.
This approach reflects South Africa’s traditionally pro-employee stance. Consequently, employers might have needed to adapt their AI implementation strategies to prevent unjustifiable displacement, a requirement that raises concerns about potential over-regulation or overreach into commercial decision-making.
Bridging the Infrastructure Gap
The Draft Policy’s ambitions depended on infrastructure that South Africa is yet to build.
Effective AI deployment requires powerful computing facilities, national data centres, fast internet, and satellite technology. South Africa’s computing capacity is limited, outdated networks need replacing, data costs remain among the highest in Africa, and many low-income households lack internet access. Water and electricity supply constraints continue to impede economic growth and may similarly constrain AI deployment.
Capacity will need to be developed at all levels of the public and private sectors.
Where to from here?
The publication of the Draft Policy was a significant step towards establishing South Africa’s AI regulatory framework. Its withdrawal, however, exposed a critical gap between ambition and execution.
In line with the timelines provided for in the AU's Continental AI Strategy, the original timeline envisaged adoption of the Draft Policy during 2025/26, with National AI Policy Guidelines and regulatory requirements for high-risk AI use cases published during 2026/27, and full implementation on a phased basis through 2027/28. The withdrawal of the Draft Policy means this timeline will be delayed, potentially by a considerable margin.
Corporates should monitor developments closely, as a revised draft AI policy will no doubt be published for comment in the near future. For now, no specific AI policy or regulation exists in South Africa, and the path to one has grown longer and more uncertain. Corporations should nonetheless continue to invest in compliance readiness and responsible AI governance, guided by international best practice and existing South African law.
The commercial risk of AI is not merely a regulatory burden. The true risk lies in missing the strategic opportunities that early engagement can provide.