ARTICLE
11 May 2026

Rebuilding Trust In AI Policymaking: South Africa’s Draft AI Policy Withdrawal

Adams & Adams


Adams & Adams is an internationally recognised and leading African law firm that specialises in providing intellectual property and commercial services.
The recent withdrawal of South Africa's Draft National Artificial Intelligence (AI) Policy by the Minister of Communications and Digital Technologies, Mr Solly Malatsi, marks a significant and sobering moment in the country's digital policy journey. The decision followed a week ending 25 April 2026 in which the Department of Communications and Digital Technologies (DCDT) faced intense backlash and sustained criticism over serious defects in the Draft Policy, culminating in confirmation that the document contained fictitious or fabricated academic references. According to the Minister's official statement, issued on the evening of 26 April 2026, these flaws, most plausibly arising from the unverified use of AI-generated material, undermined the integrity and credibility of the Draft Policy to such an extent that its withdrawal was unavoidable.

The Irony of AI Governance Failure

At the centre of the controversy is not the use of AI per se, but a far more fundamental governance failure: the absence of rigorous verification and quality-assurance controls in the development of a national policy instrument. The disclosure that a Draft National AI Policy relied, in part, on sources that do not exist or cannot be validated raises serious concerns about institutional drafting standards, evidentiary discipline, and accountability.

The irony is particularly acute. The Draft Policy sought to establish South Africa as a principled and ethical leader in AI governance, yet its credibility was compromised by reliance on unverified material presented as authoritative reference sources. As the Minister made clear, this was not a minor drafting oversight or formatting error; it was a substantive failure that struck at the reliability of the policy itself.

This episode underscores a critical truth about AI governance: sound policymaking depends first on sound institutional processes. The sophistication of a policy’s ambitions cannot compensate for weaknesses in verification, peer review, and human oversight.

Negative Impacts of the Withdrawal

Although the withdrawal was the only defensible decision, it carries meaningful consequences, chief among them setbacks to policy detail and implementation. The Draft National AI Policy was designed to serve as the cornerstone of South Africa's AI governance framework. Its withdrawal delays the finalisation of a national AI policy; the development of sector-specific AI guidelines; and the establishment of anticipated governance structures, oversight mechanisms, and ethical frameworks. In a global environment where AI policy is rapidly moving from conceptualisation to enforcement, this delay risks widening South Africa's regulatory and competitiveness gap.

In the absence of a dedicated AI policy, organisations across the public and private sectors must continue operating under a fragmented legal landscape, relying on general instruments such as POPIA, IP law, procurement rules, and sectoral regulation. This uncertainty complicates responsible AI deployment and increases compliance risk.

Significant time, public resources, and stakeholder input were invested in the Draft Policy. Its withdrawal represents not merely a reputational setback, but also a loss of momentum at a time when AI readiness, skills development, and institutional capacity building are critical.

Erosion of Public Confidence and Institutional Authority

Perhaps the most serious consequence is the damage to public trust. The DCDT is the institution entrusted with leading South Africa’s digital transformation and AI governance agenda. The acknowledgement that a foundational national policy contained hallucinated or fictitious references inevitably raises concerns about internal governance, quality‑assurance processes, and institutional AI literacy. As the Minister noted, South Africans “deserve better”. Trust in AI governance depends not only on policy ambition, but on the credibility of the institutions that design and implement those policies. Restoring confidence will require visible, deliberate corrective action.

Recommendations: Policy Fixes and Institutional Recovery

While the withdrawal is disappointing and could have been prevented, this moment should be treated not as a failure but as a reset opportunity. The following measures are critical:

  • Reinforce Policy Drafting and Verification Standards

Before any revised Draft AI Policy is released, all references, citations, and external authorities should be subjected to formal human verification protocols. AI-assisted drafting tools must operate under mandatory human intervention controls, with accountability clearly assigned to identified officials. Draft policies should also undergo independent expert peer review prior to gazetting.

  • Separate AI Use from AI Governance

A clear institutional distinction must be drawn between using AI as a productivity tool and governing AI as a societal risk. The DCDT should issue internal rules limiting the unverified use of generative AI in policy and regulatory drafting and adopt an internal AI use and AI risk management policy applicable to officials and contractors.

  • Re‑anchor the Policy in Verifiable Legal Foundations

Ideally, the revised policy should demonstrably rest on South Africa's constitutional framework and existing statutory instruments, as well as on clearly validated international benchmarks and instruments, transparently cited and contextualised.

  • Adopt a Phased, Credibility‑First Policy Process

Instead of rushing to produce a replacement Draft AI Policy, the DCDT should first publish a short corrective policy note acknowledging lessons learned. It should then conduct targeted consultations focused on governance, readiness, and implementation capacity. Only thereafter should a revised Draft National AI Policy be released for public comment.

  • Restore Trust Through Transparency and Accountability

Public confidence can only be rebuilt through openness. This may include clear communication on what went wrong, why it occurred, and how controls have changed (which the DCDT has partly done); visible consequence management and institutional reform; and ongoing reporting on policy redevelopment milestones.

A Reset That Must Lead to Reform

The withdrawal of the Draft National AI Policy is undoubtedly a setback, but it is also a critical teachable moment. It demonstrates, in real terms, the dangers of poorly governed AI use and reinforces why institutional discipline, human oversight, and accountability are non‑negotiable in AI governance.

The DCDT's transparent response to the episode is to be applauded and may yet strengthen South Africa's AI policy foundations, provided the department follows through with reform and improved governance rigour. Until then, organisations would be wise to treat this moment as a warning: AI governance begins with governing one's own use of AI.

Organisational AI Readiness: Acting Without Waiting

The withdrawal also highlights a broader reality that AI governance cannot wait for perfect policy. AI systems are already embedded in business operations, professional services, and public administration. Organisations must take responsibility for ensuring lawful, ethical, and defensible AI use now.

In this context, offerings such as SmartAIIP, developed by the SmartAIIP team at Adams & Adams, are particularly relevant. SmartAIIP assists organisations in:

  1. Assessing AI readiness across legal, governance, IP, data protection, and risk domains;
  2. Identifying exposure to issues such as hallucinated outputs, data misuse, and compliance gaps; and
  3. Implementing practical, proportionate AI governance guardrails.

The SmartAIIP Readiness Test provides a structured, diagnostic entry point for organisations seeking to understand whether their AI use would withstand regulatory, ethical, and reputational scrutiny, especially in a policy environment still in flux.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

