After much negotiation and dialogue (or rather trilogue, between the European Parliament, the Council and the Commission), the European Union reached political agreement on the Artificial Intelligence ("AI") Act (the "Act") on 9 December 2023. But what has been agreed, and what do businesses need to know? While the final text of the Act is not yet a fait accompli, this article considers some of the key questions that your organisation may have, and the answers, which are becoming clearer following the publication of recent press releases by the European Parliament and the Council and of FAQs by the European Commission.

How will the Act apply to my organisation?

The Act will apply to public and private organisations, inside and outside the EU, that place an AI system on the EU market or whose use of an AI system affects people located within the EU. It concerns both providers (i.e. the developer of the system) and deployers (i.e. the entity that acquires and implements the system). It does not apply to private, non-professional uses. The Act takes a risk-based approach to determining the applicable obligations, with risk assessed by reference to the potential impact on the health, safety and fundamental rights of individuals under the EU Charter of Fundamental Rights.

What are the risk categories?

There are four risk categories into which AI systems will fall under the new rules:

  • Minimal Risk | It is expected that the vast majority of AI systems will fall within this category. Examples include AI-enabled recommender systems, spam filters and inventory management tools. These systems will not be subject to any new obligations under the Act. Rather, providers of these systems may choose voluntarily to apply the requirements for trustworthy AI and to adhere to voluntary codes of conduct.
  • High Risk | Systems that have the potential to adversely affect people's safety or fundamental rights. AI systems identified as high-risk must comply with strict rules on risk mitigation, the maintenance of high-quality data sets, documentation, user information, human oversight, accuracy, cybersecurity and other areas. Annex III to the Act contains a list of use cases considered to be high-risk, which the European Commission will keep under review as AI continues to evolve. In addition, the Act contains a methodology to help organisations assess whether a system is high-risk. In line with existing EU product safety legislation, it focuses on the function performed by the AI system and the specific purpose and modalities for which it is used. Examples of high-risk systems include certain critical infrastructures, for instance in the fields of water, gas and electricity; medical devices; systems used to filter candidates in recruitment or education admissions; certain law enforcement systems; and systems used in border control and democratic processes. Other examples include certain biometric identification / categorisation and emotion recognition systems.
  • Unacceptable Risk | These systems will be prohibited. A limited set of harmful uses of AI that contravene EU values because they violate fundamental rights will be banned. Examples include AI systems that manipulate human behaviour to circumvent users' free will (for example, toys using voice assistance to encourage children towards dangerous behaviour) and systems used for 'social scoring' and individual predictive policing. In addition, certain biometric systems will be prohibited, such as emotion recognition systems in the workplace, untargeted scraping of facial images from the internet or CCTV footage to build databases, and real-time remote biometric identification in public spaces for law enforcement purposes (with narrow exceptions).
  • Specific Transparency Risk | Systems posing a limited risk, which must comply with specific transparency obligations, for example where there is a clear risk of manipulation. The now familiar examples here are AI systems such as chatbots and deepfakes. Users must be made aware that they are interacting with a machine and must be informed if biometric categorisation or emotion recognition systems are used. Providers of these systems must design them so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (see the illustrative sketch after this list).
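By way of illustration only, the sketch below shows one way a provider might attach a machine-readable marker to synthetic content. The function, field names and JSON schema are assumptions made for the example; the Act requires that such content be detectable as artificially generated in a machine-readable format, but it does not prescribe this (or any other) particular schema, and in practice providers may rely on watermarking or established provenance standards.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content_bytes: bytes, generator_name: str) -> str:
    """Return a JSON provenance record for AI-generated content.

    Purely illustrative: this schema is an assumption, not a format
    mandated by the Act.
    """
    record = {
        "ai_generated": True,                                   # machine-readable flag
        "generator": generator_name,                             # hypothetical field identifying the model
        "sha256": hashlib.sha256(content_bytes).hexdigest(),     # ties the record to the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: produce a provenance record for a synthetic image's bytes.
print(label_synthetic_content(b"<image bytes>", "example-model-v1"))
```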

In addition, the Act recognises and introduces rules in relation to 'systemic risks' posed by AI systems that can be used for a multitude of purposes, known as 'general-purpose AI' (which includes large generative AI models). This is in recognition of the tremendous harm that such systems could cause if deliberately or accidentally misused.

Does the Act contain any provisions that foster innovation?

The provisions in support of innovation (and of evidence-based regulatory learning) have been strengthened compared with those initially proposed by the European Commission. In particular, the Act makes provision for 'regulatory sandboxes' (controlled environments provided by national regulators for developing and testing AI systems in real-world conditions, subject to specific safeguards) to facilitate compliant innovation and development.

What other new assessments are being introduced?

Providers of AI systems will have to conduct a 'conformity assessment' before placing high-risk systems on the market, the purpose of which is to demonstrate the conformity of the system with the requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). Additionally, deployers that are bodies governed by public law or private operators providing public services, and operators of high-risk systems, will be required to conduct a 'fundamental rights impact assessment', which considers the impact of the use of an AI system on fundamental rights. Deployers and operators will be obliged to notify the supervisory authority of the outcome of the assessment.

How are general-purpose AI ("GPAI") models being regulated?

Providers of GPAI models will be obliged to disclose certain information to downstream system providers. These transparency obligations are aimed at enabling a better understanding of these models. In addition, model providers will be required to have policies in place to ensure that they respect copyright law when training their models. More stringent obligations will apply to high-impact GPAI models that pose systemic risk: providers of these models will be required to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity and report on their energy efficiency. Until harmonised EU standards are published, providers of GPAI models with systemic risk may rely on codes of practice to comply with the Act.

What are the penalties for non-compliance?

The consequences for non-compliance are potentially very severe. While the European Commission has indicated that more proportionate caps may apply for SMEs and start-ups, organisations found not to have complied with the Act will be liable to fines, set in each case at the higher of a fixed amount and a percentage of worldwide turnover (a simple illustration follows the list below):

  • Up to €35 million or 7% of total worldwide annual turnover for violations of prohibited practices or non-compliance related to requirements on data;
  • Up to €15 million or 3% of total worldwide annual turnover for violations of other obligations, including the rules on GPAI models; and
  • Up to €7.5 million or 1.5% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
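For a rough sense of how the caps scale with company size, the sketch below (in Python, purely illustrative) computes the maximum exposure in each tier as the higher of the fixed amount and the percentage of total worldwide annual turnover. The tier labels and the example turnover figure are assumptions made for the illustration.

```python
# Maximum fine per tier: the higher of the fixed cap and the percentage
# of total worldwide annual turnover (illustrative only).
TIERS = {
    "prohibited practices / data requirements": (35_000_000, 0.07),
    "other obligations (incl. GPAI rules)": (15_000_000, 0.03),
    "incorrect or misleading information": (7_500_000, 0.015),
}

def max_fine(fixed_cap_eur: int, pct_of_turnover: float, turnover_eur: float) -> float:
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Example: a company with EUR 2 billion total worldwide annual turnover.
turnover = 2_000_000_000
for tier, (cap, pct) in TIERS.items():
    print(f"{tier}: EUR {max_fine(cap, pct, turnover):,.0f}")
# The first tier yields EUR 140,000,000, because 7% of turnover exceeds the EUR 35m cap.
```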

What can individuals affected by a violation of the Act do?

A natural or legal person will also have the right to lodge a complaint with a national supervisory authority about any alleged non-compliance with the Act, and the complaint will be handled in line with that authority's dedicated procedures. Separately, the proposed AI Liability Directive aims to provide individuals seeking compensation for damage caused by high-risk systems with effective means to identify potentially liable persons and to obtain relevant evidence for a damage claim.

How will the AI Act be enforced?

Member States will be required to designate one or more competent national supervisory authorities to supervise the application and implementation of the Act and to carry out market surveillance activities. A European AI Board, consisting of representatives from national supervisory authorities, will play an important role in facilitating the smooth, effective and harmonised implementation of the Act. The Board will issue recommendations and opinions to the European Commission regarding high-risk AI systems and other aspects of the new rules. An advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society and academia, will be set up to provide technical expertise to the European AI Board.

In addition, the European Commission will establish a European AI Office, which will supervise and enforce the new rules for GPAI models and contribute to fostering standards and testing practices. A scientific panel of independent experts will advise the European AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.

When will the Act come into force?

Whilst the European Parliament, Council and Commission have reached political agreement on the Act, the text of the Act has not yet been finalised. There is some time pressure to finalise the text and publish the Act in the EU's Official Journal before the European Parliament elections in June 2024. Once published in the Official Journal, the Act will enter into force 20 days later. It will be fully applicable 24 months after entry into force, with a graduated approach as follows (an illustrative date calculation appears after this list):

  • 6 months after entry into force, the rules on prohibited use will become effective;
  • 12 months after entry into force, obligations for GPAI governance become applicable;
  • 24 months after entry into force, all rules of the Act become applicable including obligations for high-risk systems defined in Annex III of the Act (list of high-risk use cases); and
  • 36 months after entry into force, obligations for high-risk systems defined in Annex II of the Act (list of Union harmonisation legislation) will apply.
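To make the graduated timeline concrete, the sketch below computes the milestone dates from a hypothetical entry-into-force date. The actual date will depend on when the Act is published in the Official Journal, which has not yet happened, so the 1 August 2024 figure used here is an assumption for illustration, not a prediction.

```python
from calendar import monthrange
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, clamping to the month's last day."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))

entry_into_force = date(2024, 8, 1)  # hypothetical date, for illustration only

milestones = {
    "Prohibitions apply": 6,
    "GPAI governance obligations apply": 12,
    "All rules apply (incl. Annex III high-risk systems)": 24,
    "Annex II high-risk obligations apply": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months).isoformat()}")
```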

What do businesses need to do now?

Once the final text of the Act is published, organisations should start taking steps to assess the impact of the Act on their business. Significant work may be required to ensure your business is ready for the relevant implementation deadlines, and understands what specific compliance processes, assessments and documentation it will need to roll out. The first step will be assessing the applicability of the Act to your business. It will then be crucial that your business establishes an appropriate AI governance programme and processes to comply with the new rules.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.