10 March 2024

The European Union's Artificial Intelligence (AI) Act In A Nutshell

Swart Attorneys


In the global race to regulate AI, the EU made significant strides toward leading the way when the European Parliament adopted the EU AI Act on 23 June 2023. This is not yet the end of the road for the new law: the text must still be discussed with the other key institutions, namely the European Commission and the Council of the European Union. These informal discussions, known as trilogues, have started and are expected to be completed by the end of 2023, whereafter a two-year implementation period is anticipated.

What are the key features of the EU AI Act?

It is a comprehensive law that builds on a set of policy documents providing the foundation for a harmonised approach to regulating AI in the EU. Its key characteristics are:

  • Ethical principles form an important foundation.
  • Protection of human rights is paramount.
  • A risk-based approach is followed in regulating AI systems.

In the public discourse about regulating AI, the term 'providing guardrails' for the development and deployment of AI is often used. The EU AI Act aims to provide such guardrails by taking a stricter approach to high-risk AI systems and foundation models than to low-risk or no-risk AI systems. Systems posing an unacceptable risk, such as biometric identification systems used to categorise natural persons according to sensitive or protected characteristics, are prohibited.

An AI system is defined as "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments".

The AI Act applies to the whole AI life cycle, which includes the design, development, testing, operation and monitoring of an AI system.

All operators of AI systems (i.e. the provider, deployer, authorised representative, importer and distributor) should adhere to the following ethical principles:

  • Human agency and oversight;
  • Technical robustness and safety;
  • Privacy and data governance;
  • Transparency;
  • Diversity, non-discrimination and fairness; and
  • Social and environmental well-being.

These principles could be incorporated into a high-level ethical AI framework or used in a code of good practice to strengthen ethical AI.

No-risk AI systems, such as spam filters, may be placed on the market without any restrictions.

As can be expected, the strictest obligations apply to the providers and deployers of high-risk AI systems, such as medical devices and machinery. These requirements include implementing a risk management system, good data governance, detailed technical documentation and record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity. Providers must also ensure good quality control and must create and use a post-market monitoring system to ensure continuous post-market compliance. Deployers of high-risk AI systems must do a fundamental rights impact assessment before the AI system may be put into use, and in some cases also a data protection impact assessment. Small and medium enterprises are not obliged to perform a fundamental rights or data protection impact assessment, although I suggest that they do, since it is a helpful exercise that assists in compliance with the key legal requirements.

Providers of high-risk AI systems must also follow a conformity assessment procedure to demonstrate full compliance with the legal requirements in the AI Act, other applicable legislation and harmonised standards (where they exist).

The AI Act applies not only to providers, deployers and importers located in the EU, but also to providers of AI systems who intend to place them on the market in the EU, irrespective of where those providers are established. So, if you are an AI provider located in South Africa but intend to place your AI system on the EU market, the Act applies to you and you must appoint an authorised representative established within an EU member state.

The multi-layered legal environment of the EU means that some other laws are also relevant when assessing compliance with the EU AI Act, e.g. regulations on medical devices.

The EU AI Act sets the bar for the governance of AI systems and the promotion of trustworthy and ethical AI. The technology landscape is changing rapidly, and it remains to be seen how flexibly the Act will deal with new technological developments.

2 August 2023

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
