In the dynamic realm of technological advancement, the advent of artificial intelligence (AI) heralds both promise and peril, necessitating robust regulatory frameworks to navigate its complexities. The European Union (EU) has historically shown its ambition to lead in safeguarding fundamental rights and to set global precedent, as can be seen from its work in protecting personal data and the supranational effect of the GDPR. It therefore comes as little surprise that the EU has worked on a comprehensive framework aimed at regulating AI and has now voted in favour of the AI Act (AIA).

On the 13th of March 2024, the European Parliament voted in favour of the AIA, following the provisional agreement reached by Parliament and Council negotiators on the 8th of December 2023. The AIA is expected to enter into force in May or June 2024, with most of its provisions becoming applicable two years later.

In response to the evolving landscape of artificial intelligence, the EU's AI Act aims to instil trust in AI technology. While AI offers immense potential, certain applications harbour inherent risks, such as opaque decision-making processes that can lead to unfair disadvantages, and existing legislation falls short in addressing these challenges adequately. The AI Act introduces targeted regulations to mitigate risks, prohibit unacceptable practices, define high-risk applications, and establish clear criteria for AI systems. It mandates conformity assessments before deployment, implements enforcement measures after market introduction, and sets up governance frameworks at both European and national levels. These measures aim to ensure responsible AI deployment, fostering trust while mitigating potential harms.

The AIA aims to establish precise obligations and criteria for AI developers and deployers concerning particular AI applications. Legislators are also aiming to reduce administrative and financial costs for businesses incorporating AI. Margrethe Vestager, Vice-President of the European Commission, stated that the AIA focuses on what AI is used for and how it is used, rather than on the technology itself.

What is AI?

AI is a tool with seemingly limitless possibilities, hence the need for the AI Act. While the Act does not offer an explicit definition of AI itself, AI is commonly understood as the intelligence of machines or software. There is a wide spectrum of AI, ranging from autonomous systems to chatbots. With this in mind, the Act sets out a ranking system into which all AI systems are placed, breaking them down into four categories according to their risk:

Prohibited AI Practices

Prohibited AI practices are classified into four categories, namely 1) AI systems deploying subliminal techniques; 2) AI practices exploiting vulnerabilities; 3) social scoring systems; and 4) 'real-time' remote biometric identification systems. These practices present an unacceptable risk and, as such, would be prohibited.

High-Risk AI Systems

High-risk AI systems include those that could have a significant harmful impact on the health and safety of people in the EU. The classification of an AI system as high-risk depends on its intended purpose, considering the severity of the possible harm and its probability of occurrence. There are two groups of high-risk AI systems:

  • AI systems designated as safety components of products, or as standalone products, fall under the purview of specific EU harmonized legislation, encompassing 19 specified pieces of legislation. In essence, if an AI system serves as a component of a product, or is itself a product, governed by prevailing harmonized regulations at the EU level, it is already mandated to undergo a third-party conformity assessment. Following the enactment of the AIA, these assessments would need to integrate the proposed mandatory criteria for high-risk AI systems. Products subject to such assessments include machinery, toys, lifts, medical devices, motor vehicles, and agricultural and forestry vehicles, as detailed in Annex II of the draft AIA.
  • Standalone AI systems posing substantial risks to health, safety, or fundamental rights of individuals fall within the ambit of the legislation. These encompass AI systems employed in critical infrastructure management (e.g., road traffic and utility supply), education and vocational training (e.g., exam scoring), among others, delineated in Annex III of the draft AIA.

High-risk AI systems will be subject to stringent obligations before they can be put on the market. These include:

  1. Implementation of effective risk assessment and mitigation systems;
  2. Ensuring the datasets powering the system are of high quality to mitigate risks and prevent discriminatory outcomes;
  3. Logging of system activity to ensure traceability of results;
  4. Provision of detailed documentation containing all essential information about the system and its intended purpose for authorities to assess compliance;
  5. Clear and comprehensive information provided to the deployer;
  6. Incorporation of appropriate human oversight measures to minimize risks;
  7. Ensuring a high level of robustness, security, and accuracy in the system.

The AI Act proposes an ex-ante conformity assessment for high-risk AI systems, which includes those conforming to existing harmonized standards. If no such standards exist, the Commission can adopt common specifications. Conformity assessments can be conducted by an independent third party or the provider, depending on the AI system type. High-risk AI systems must undergo a new assessment if substantially modified. Exceptional circumstances may allow for market entry without assessment. Providers must report incidents breaching safety laws or fundamental rights. The Act does not apply retroactively to AI systems already on the market unless significantly modified.

AI systems which present a limited risk

Limited-risk AI is associated with systems which are intended to interact with people. To nurture trust, the AIA introduces specific transparency obligations to ensure that humans are informed, where necessary, that AI has been used. Providers will be obliged to ensure that AI-generated content, such as video or audio content, is identifiable as such.

AI systems which present minimal risk

Common AI systems such as spam filters and recommender systems would fall under this last category. Providers of such AI systems may voluntarily choose to apply the same mandatory requirements as high-risk AI systems, or they may simply adhere to voluntary codes of conduct.

Who is bound by the AIA?

The legal framework extends its jurisdiction extraterritorially to encompass both public and private entities in instances where the AI system is either introduced to the EU market or influences individuals within the EU. All entities involved, including providers, importers, distributors, and users, would be subject to the proposed obligations. Additionally, the legal framework would encompass EU institutions, offices, bodies, and agencies when they function as either providers or users of an AI system.

Providers of high-risk AI systems would be mandated to ensure compliance with specified requirements, undergo requisite conformity assessments, and register their systems in an EU-wide publicly accessible database established by the Commission before market entry. They would also be obligated to draft an EU declaration of conformity for each AI system and maintain it for ten years after the system's introduction to the market or commencement of service. Furthermore, a “CE marking of conformity” would need to be visibly and permanently affixed to high-risk AI systems. All providers must establish post-market monitoring systems.

Providers situated outside the EU would be required to designate an authorized representative through a written mandate in cases where an importer cannot be identified. Importers and distributors would be responsible for ensuring that the provider has completed the appropriate conformity assessment procedure before making the system available on the EU market. Users would be expected to operate such systems in accordance with the accompanying instructions for use.

Importers, distributors, and users may assume the role of providers under certain conditions, including marketing a high-risk AI system under their own name or trademark instead of the original provider's, altering the intended purpose of a high-risk AI system, or making substantial modifications to it.

How is the AIA going to be governed?

The governance of the AIA places significant responsibility on EU member states, which are tasked with enforcing and overseeing the regulation's application. Each member state must appoint one or more national competent authorities to supervise its implementation, with a designated national supervisory authority overseeing overall compliance. These authorities could function as notifying bodies, responsible for designating conformity assessment bodies and conducting market surveillance. Conformity assessment bodies, upon approval, would issue certificates for compliant high-risk AI systems, valid for up to five years. Access to the source code of AI systems may be granted to market surveillance authorities upon request, subject to confidentiality obligations.

At the EU level, the proposal suggests establishing a European Artificial Intelligence Board composed of representatives from member states, the European Data Protection Supervisor, and chaired by the Commission. This board would facilitate harmonized implementation, provide guidance, and promote effective cooperation among national supervisory authorities.

Regarding market surveillance and non-compliance, authorities have the authority to evaluate AI systems posing risks and request corrective measures, withdrawal, or recalls if necessary. If non-compliance affects multiple member states, the Commission and other states must be notified, allowing for objections within a specified timeframe. Member states are also responsible for establishing penalties for infringements, ensuring they are effective, proportionate, and dissuasive. The Commission has outlined specific infringements subject to administrative fines to ensure accountability and enforcement.

Key Measures

The introduction of the AIA brings with it the establishment of AI regulatory sandboxes, which allow for a controlled environment for developing, testing and validating AI systems for a limited period before they are placed on the market. These sandboxes will be under the direct oversight and instruction of the competent authorities to ensure compliance.

The new Coordinated Plan on AI introduces a series of joint initiatives between the Commission and EU member states with the objective of advancing EU global leadership in trustworthy AI. Key actions outlined in the plan include:

  • Establishing a European partnership focused on AI, data, and robotics to drive innovation, adoption, and acceptance of these technologies.
  • Cultivating strategic leadership in critical sectors such as climate and environment, health, robotics, and the public sector.
  • Accelerating private and public investments by leveraging EU funding available, such as the Digital Europe and Horizon Europe programs, and the Recovery and Resilience Facility.
  • Exploiting the potential of data through initiatives like launching a European alliance for industrial data, edge, and cloud technologies, and investing in European data spaces and the European cloud federation.
  • Promoting cross-sector collaboration and knowledge sharing to enhance the development and deployment of AI technologies across various industries.

Conclusion

In embracing the AI Act, the EU charts a course toward responsible AI governance, balancing innovation with the safeguarding of fundamental rights. Through clear obligations and criteria, the AIA offers a structured framework for AI developers and deployers, ensuring transparency, risk assessment, and adherence to quality standards. With its extraterritorial reach and comprehensive scope, the Act underscores the EU's commitment to shaping the global AI landscape while promoting trust and accountability in AI technologies.

As the AIA unfolds, stakeholders must heed its provisions, fostering a culture of compliance and accountability in the AI ecosystem. With robust governance structures and cooperative frameworks, the EU stands poised to lead the world in fostering innovation while safeguarding fundamental rights in the era of artificial intelligence.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.