ENSafrica recently hosted a webinar offering a legal perspective on emerging technologies. In case you missed it, here's our roundup.

As we are all too aware, COVID-19 expedited a number of changes that were already starting to take place, and nowhere is this more evident than in the ICT space. The 4th Industrial Revolution ("4IR") is a fusion of advances in artificial intelligence ("AI"), robotics, the Internet of Things ("IoT"), genetic engineering, quantum computing, and more.

The consumer economy has been innovating on two fronts: making physical buying as "frictionless" as possible, and making e-commerce as nimble as possible. COVID-19 broke old habits and sped up that evolution. This grand experiment in remote work and distributed teams will have an impact on office life as we know it, potentially reshaping the entire "office economy". 

In our webinar we discussed the legal risks and possible mitigation steps in relation to:

  • IoT
  • AI
  • the intelligent edge
  • 5G
  • drones

In this article we will discuss IoT and AI. Look out for our next article on the other emerging technologies.


what is IoT?

The Internet of Things describes physical objects that are embedded with sensors, processing abilities, software and other technologies. These objects connect and exchange data with other devices and systems over the Internet or other communications networks to perform a myriad of tasks, most of which are aimed at making our lives just that much easier.

what are the legal risks?

The Consumer Protection Act, 2008 ("CPA") will be applicable, especially in the consumer sphere, and should address the concerns around how liability should be apportioned. For example, section 56 of the CPA creates an implied warranty of the quality of goods that applies to the producer, importer, distributor and retailer. Ultimately, the apportionment of liability between the parties in the supply chain will likely remain something to be dealt with in the agreements concluded between them.

The security of IoT devices is a well-known risk. Hacking an IoT device can often be remarkably simple because all of the devices in a home are connected to a single network; once access has been gained to one device, it can be very easy to gain access to your more sensitive devices. The Cybercrimes Act, 2020 (although not yet in force) does criminalise activities such as hacking, but it remains to be seen how this Act will be enforced in practice.

IoT devices, in order to work as they do, require large amounts of data, which is processed on various applications and most likely hosted in the cloud. This results (in most instances) in data being shared between third parties and across borders, which necessitates compliance with data privacy laws.

Finally, developers, manufacturers and importers of IoT devices need to be mindful of the compliance burden associated with these devices, such as obtaining type approval and complying with the requirements of the National Regulator for Compulsory Specifications ("NRCS").

what can be done to mitigate these legal risks?

In the first instance, developing international standards for compatibility will go a long way towards ensuring that consumers can trust the IoT devices they use, and will lend a level of accountability to developers and manufacturers. The security of the data (as well as of the device itself) is important: providers of IoT devices would of course need to ensure that they comply with all applicable laws insofar as data is concerned, which in South Africa means the Protection of Personal Information Act, 2013 ("POPIA"). Finally, the laws around consumer protection need to be bolstered to ensure that the risks associated with IoT devices are also adequately taken into account.


what is AI?

AI has been described as "the theory and development of computer systems so that they are able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". AI encompasses a myriad of fields, such as expert systems, speech processing, machine vision, machine learning, robotics and natural language processing, to name a few.

what are the legal risks?

The contracting relationships involved can be quite complex: it is important to determine who has the rights to provide the technology, what the acceptable use rights are and what the limitations are. It is also quite well known by now that the data fed into AI systems can lead to discriminatory and biased results. On top of that, using AI necessarily entails the use of big data, and this again gives rise to data privacy and security concerns. Finally, while it is not strictly a legal risk, the ethics associated with the use of AI need to be explored. Should AI be allowed to make decisions about people that impact their lives? As robots become more and more human- or animal-like, should they also have certain rights, and what would be considered moral human behaviour directed toward them? How should robots be programmed to interact with human beings?

what can be done to mitigate these legal risks?

A number of the risks associated with AI can be mitigated during the design phase itself: privacy by design, algorithm standards and acceptable use policies will go a long way towards mitigating these risks. The difficulty, of course, lies in determining what those standards should be.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.