On 13 March 2024, the European Parliament approved the Artificial Intelligence Act (AI Act), which establishes comprehensive regulations governing the use of artificial intelligence (AI) within the European Union (EU). Much as the General Data Protection Regulation (GDPR) did for data privacy, the AI Act aims to set a global standard for AI regulation by imposing obligations on AI systems based on their potential risks and impacts.

In this article, we examine which companies will be affected and outline the actions these businesses can expect to take.

Key Principles of the AI Act

The primary objective of the AI Act is to ensure the safety of AI systems used in the EU while upholding fundamental rights and EU values. It employs a 'risk-based' approach, meaning that the level of regulation varies depending on the potential harm posed by the AI system.

What is the AI Act's definition of an AI system?

The AI Act adopts the definition of an AI system from the Organisation for Economic Co-operation and Development (OECD), aiming to foster international uniformity and alignment. As per the AI Act, an AI system is defined as follows:

An AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The AI Act highlights the capability to infer as the distinguishing feature of AI systems. It specifies that the techniques enabling inference include machine learning approaches as well as logic- and knowledge-based approaches, which infer from encoded knowledge or symbolic representations of tasks. This ability extends beyond simple data processing, enabling functions such as learning, reasoning, and modelling.

Who is affected?

The AI Act applies to the various operators involved in the development, distribution, and use of AI systems within the EU. These include providers, importers, distributors, manufacturers, authorised representatives, and deployers of AI systems (deployers being natural or legal persons using AI under their authority in the course of their professional activities).

The AI Act also has broad extraterritorial applicability. This means that organisations situated outside the EU could still be subject to the regulations if they introduce or deploy AI systems in the EU, or if the outputs generated by these AI products are used by individuals within the EU.

Simply put, if your organisation is "placing" (first making available) or "making available" (supplying for distribution or use in the course of a commercial activity, whether for a fee or free of charge) an AI system on the EU market, supplying an AI system to users in the EU, or making the outputs generated by an AI system accessible in the EU, then your organisation will be affected by the regulation.

Nevertheless, there are circumstances where the AI Act does not apply. These exceptions include:

  • AI systems developed and utilised solely for scientific research and development;
  • research, testing, and development activities conducted for AI systems before they are introduced to the market or utilised;
  • AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems;
  • public authorities in non-EU countries and international organisations if they have law enforcement and judicial cooperation agreements with the EU, as long as adequate safeguards are in place; and
  • AI systems used for purposes outside the scope of EU law-making authority, such as military or defence.

What is the EU approach to AI regulation?

The AI Act operates on a risk-based approach, meaning that different requirements apply depending on the level of risk involved (a simplified sketch of this tiering follows the list below).

  • Unacceptable risk: AI systems posing unacceptable risk are effectively banned. These are systems that pose clear threats to citizens' rights, such as biometric categorisation systems based on sensitive characteristics, untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, social scoring based on social behaviour or personal characteristics, certain predictive policing applications, and AI that manipulates human behaviour or exploits vulnerabilities.
  • High risk: high-risk AI systems, due to their potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, are not automatically banned, but come with strict obligations. Examples of high-risk AI applications include those used in critical infrastructure, education, employment, essential public and private services (e.g., healthcare, banking), certain systems in law enforcement, migration and border management, as well as justice and democratic processes (e.g., influencing elections). These systems must undergo risk assessments and mitigation measures, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. Citizens will also have the right to lodge complaints and receive explanations concerning decisions made by high-risk AI systems that affect their rights.
  • Limited risk: for AI systems categorised as limited risk, providers must ensure that systems intended to interact directly with individuals are designed and developed so that users are informed they are interacting with an AI system. This category encompasses chatbots, as well as AI used in marketing, design, manufacturing, retail processes, trend analysis, and personalisation.
  • Minimal risk: minimal-risk AI systems, like AI-enabled video games or spam filters, are not subject to restrictions under the AI Act. However, companies have the option to adhere to voluntary codes of conduct even for these minimal-risk AI systems.
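To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how an organisation might map example use cases to the act's four risk tiers. The tier names and examples are drawn from the act as summarised above, but the mapping logic, function names, and use-case labels are our own simplification, not legal guidance.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "permitted, subject to strict obligations"
        LIMITED = "permitted, subject to transparency duties"
        MINIMAL = "no restrictions; voluntary codes of conduct apply"

    # Non-exhaustive examples, drawn from the categories described above.
    EXAMPLES = {
        "social scoring based on personal characteristics": RiskTier.UNACCEPTABLE,
        "untargeted scraping of facial images for a recognition database": RiskTier.UNACCEPTABLE,
        "CV-screening tool used in recruitment": RiskTier.HIGH,
        "credit-scoring system used in banking": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    def obligations(use_case: str) -> str:
        tier = EXAMPLES.get(use_case)
        if tier is None:
            return "not classified here; a case-by-case legal assessment is needed"
        return f"{tier.name}: {tier.value}"

    print(obligations("customer-service chatbot"))
    # -> LIMITED: permitted, subject to transparency duties

Note that in practice classification turns on the system's intended purpose and context of use, so a real assessment cannot be reduced to a lookup table; the sketch only illustrates the four-tier structure.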

Transparency requirements

Transparency requirements mandate that general-purpose AI (GPAI) systems and the models underlying them adhere to specific obligations. These include compliance with EU copyright law and the publication of detailed summaries of the content used for training. Moreover, more powerful GPAI models that may pose systemic risks face additional obligations, such as conducting model evaluations, assessing and mitigating systemic risks, and reporting serious incidents.

Penalties

Fines are set as a percentage of global annual turnover or a fixed amount, with proportionate caps for SMEs and start-ups. Depending on the nature of the violation, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. The act also allows any individual to lodge complaints about non-compliance.
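As a worked example (a sketch assuming the "whichever is higher" rule for the most serious violations; the company and turnover figure are hypothetical), the ceiling for a given company can be computed as follows:

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Ceiling for the most serious violations: EUR 35 million
        or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

    # For a hypothetical company with EUR 1 billion in worldwide turnover,
    # 7% (EUR 70 million) exceeds the EUR 35 million floor:
    print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000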

Timeline

The Council of the EU is expected to formally approve the finalised text of the AI Act in April 2024. Following final linguistic revisions, the law will be published in the Official Journal of the EU and enter into force 20 days after publication. Organisations will then have between 6 and 36 months to achieve compliance, depending on the nature of the AI systems they develop or deploy.

From the outset, companies can take the following measures:

  • Understand the geographical scope of their AI products and assess whether that scope extends to the EU. If needed, explore strategies to limit the application of the AI Act, such as implementing geo-blocking techniques to prevent their AI products (or their outputs) from being used in the EU.
  • Create an inventory of their existing and planned AI systems and assess whether these fall within the scope of the AI Act (a simplified illustration of such an inventory appears after this list).
  • Categorise in-scope AI systems to determine their risk classification and identify applicable compliance requirements.
  • Understand their position in relevant AI value chains, including associated obligations, and ensure integration of these responsibilities throughout the AI systems' lifecycle.
  • Consider other implications, such as interaction with other EU or non-EU regulations (e.g., data privacy laws) and potential opportunities.
  • Develop and implement a plan to establish appropriate accountability and governance frameworks, risk management systems, quality controls, monitoring procedures, and documentation for compliance with the AI Act before it comes into force.
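As a starting point for the inventory and categorisation steps above, a simple record per AI system can capture the facts needed for a scoping assessment. The sketch below is illustrative only; the field names, the example system, and the in-scope test are our own assumptions, not terms of the act.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in an AI system inventory (field names are hypothetical)."""
        name: str
        purpose: str
        role: str                # e.g. "provider", "deployer", "importer", "distributor"
        offered_in_eu: bool      # placed on the EU market, or outputs used in the EU?
        risk_tier: str           # "unacceptable" | "high" | "limited" | "minimal"
        obligations: list[str] = field(default_factory=list)

    inventory = [
        AISystemRecord(
            name="resume-screener",
            purpose="ranking job applicants",
            role="deployer",
            offered_in_eu=True,
            risk_tier="high",
            obligations=["risk assessment", "usage logs", "human oversight", "transparency"],
        ),
    ]

    # Simplified scoping test: systems placed on the EU market or whose
    # outputs are used in the EU fall within the act's territorial scope.
    in_scope = [r.name for r in inventory if r.offered_in_eu]
    print(in_scope)  # ['resume-screener']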

How can Logan & Partners help?

We are well-equipped to support businesses seeking to understand their obligations under the AI Act and to implement the necessary preparatory and compliance measures.

We strongly advise all businesses potentially affected by the law to begin preparing now to avoid unexpected challenges. We can assist in assessing whether the AI Act applies to your business and address any queries. Get in touch with us for a free consultation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.