Artificial intelligence ("AI") has seen rapid growth in recent years. The release of ChatGPT in November 2022 and several other AI developments have created a frenzy, with individuals and businesses seeking to deploy and leverage AI in their everyday lives. However, the rate at which AI is being developed far exceeds the pace at which AI regulations are being created.

The rapid development and deployment of AI without regulation is cause for concern for many, including well-known technology experts such as Elon Musk and Steve Wozniak, who were among a long list of industry leaders who signed an open letter on 22 March 2023 calling for a halt on AI research and development. The letter sought a six-month freeze on AI development to allow for alignment on how to properly regulate AI tools before they become even more powerful and intelligent than they already are, and to provide legal tools and guidelines to mitigate the obvious risks associated with AI.

Many countries have already started establishing draft acts and legislation to regulate AI. The approaches taken by the European Union and the United Kingdom in creating their regulatory frameworks for AI are highlighted below.

The European Union has taken a risk-based approach under the European Union AI Act ("EU AI Act") and plans to classify AI tools into one of the identified risk categories, each of which prescribes certain development and use requirements based on the allocated risk.

On 29 March 2023, the United Kingdom's Department for Science, Innovation, and Technology published a white paper on AI regulation ("UK White Paper"). The UK White Paper sets out five principles to guide the growth, development, and use of AI across sectors, namely:

  • Principle 1: Safety, security, and robustness. This principle requires potential risks to be robustly and securely identified and managed;
  • Principle 2: Appropriate transparency and explainability. Transparency requires that channels be created for communication and dissemination of information on the AI tool. The concept of explainability, as referred to in the UK White Paper, requires that people, to the extent possible, should have access to and be able to interpret the AI tool's decision-making process;
  • Principle 3: Fairness. AI tools should treat their users fairly and should not discriminate against them or lead to unfair outcomes;
  • Principle 4: Accountability and governance. Measures must be deployed to ensure oversight over the AI tool and steps must be taken to ensure compliance with the principles set out in the UK White Paper; and
  • Principle 5: Contestability and redress. Users of an AI tool should be entitled to contest and seek redress against adverse decisions made by AI.

In South Africa, there are currently no laws that specifically regulate AI. South Africa may choose to use foreign legislation as the basis for drafting its own AI legislation; however, it is difficult to say at this early stage in the regulatory process. Even though it may be beneficial for South Africa to base its AI regulatory framework on existing principles and legislation formulated by other countries, we suspect that South Africa will face the following challenges in establishing AI regulations:

  • Data privacy: AI tools process vast amounts of data and information, and the extent to which personal information (if any) is processed often remains unknown. The unregulated use of AI tools could result in the personal information of data subjects being processed without their knowledge or consent, and could place an organisation in breach of its obligations under the Protection of Personal Information Act ("POPIA") if its employees are not trained on the acceptable use of AI tools;
  • Cyberattacks: AI tools are susceptible to cyberattacks, and appropriate regulations are urgently needed to ensure that adequate security measures govern the use of AI tools. Following a recent data breach affecting ChatGPT, Italy imposed a temporary ban on the use of ChatGPT in Italy, as well as a ban on the processing of personal information by OpenAI. This illustrates the ramifications of deploying AI without an adequate regulatory framework in place;
  • Inequality and unemployment: South Africans are particularly concerned about AI tools automating jobs that would otherwise create employment opportunities in the country, thereby worsening the already high unemployment and poverty rates currently being experienced in South Africa. Our legislation will need to weigh the advantages of AI tools against South Africa's existing challenges and determine ways in which AI tools can be used to improve our current situation. Furthermore, data bias can lead to inequitable decisions that perpetuate existing social injustices;
  • Lack of understanding and awareness of AI: AI is technical, and the most common issue amongst rule-makers is the lack of understanding of how AI tools operate, and therefore how to safely and effectively regulate the use of such AI tools. Our rule-makers will need to consult and collaborate with technology experts to ensure that all risks are identified and addressed under South Africa's AI laws and regulations;
  • Inappropriate use: AI tools could be deployed for criminal purposes, such as money laundering, fraud and corruption, or otherwise used to promote terrorist activities. Any AI laws and regulations that are established for South Africa will need to align with the existing legislation that currently regulates such criminal behaviour, to avoid further risks and a rise in criminal activity; and
  • Accountability and recourse: South Africa's AI laws and regulations will need to be clear in respect of accountability, and provide guidelines to assist in determining who would be held accountable for adverse decisions generated by AI tools, as well as the escalation procedure for appealing or contesting an adverse AI decision.

The future of AI regulation in South Africa is unclear at this stage; however, AI tools, like any new technological development, present real risks that should be mitigated through laws and regulations. For now, users of AI tools should be aware of the associated risks and take steps to protect themselves against those risks. ENSafrica's expert Technology, Media and Telecommunications team can assist you by identifying the risks associated with the use of AI tools and guiding you on practical ways to mitigate those risks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.