ARTICLE
19 August 2024

A General Look Into EU AI Act

Gunay Erdogan Attorneys-at-Law


Gunay Erdogan Attorneys-at-Law is a full-service law firm with offices in Istanbul and Ankara. The firm was established in 2020 by partners who bring unique and extensive experience in their fields. Its team of 20 highly skilled lawyers provides legal services in English, Turkish, French and German, and is committed to using legal technology to deliver cutting-edge legal services.

Introduction

Artificial Intelligence (AI) has drastically transformed the world in the past few years with its advanced technical capabilities. AI can influence public content, capture and analyze facial data to enforce laws or personalize advertisements, and is used to diagnose and treat cancer. AI is omnipresent in our day-to-day lives, from healthcare, finance, and trade to automobiles, telecommunications and education. In other words, AI touches many parts of our lives and makes them easier.

Artificial intelligence is a technology that challenges every legal order, national and supranational alike. In view of this ever-growing industry, the law must keep pace. As matters stand, AI, its functioning, its inner workings and its implementation remain largely unregulated, and standards are urgently needed. Artificial intelligence raises numerous issues regarding individual health, fundamental rights, safety, and autonomy to which the law will have to respond. This is the subject of a new legal instrument that the European Union (EU) published weeks ago: the EU AI Act.

First, what is AI?

"Machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."1. AI has many purposes and falls into many categories; it's used in personal item recommendations on Amazon, in beautiful and fun filters on Instagram and Snapchat, in real-time public transport timetables or when trying to translate a three-page text.

AI allows us to be more efficient by executing difficult, tedious, or lengthy tasks, but let us not forget that AI also carries real risks. Here is a non-exhaustive list of the risks it presents:

  • Bias due to algorithms and data fed to systems
  • Computation and output errors
  • Cyberattacks
  • Opacity of AI which acts like a black box by hiding the internal workings of the system
  • Replacement of some jobs and its socio-economic consequences

EU AI Act

Aware that AI is a rapidly evolving family of technologies, contributing to a wide range of economic benefits as well as environmental and societal impacts across all economic sectors and social activities, the European Union decided to enact a regulation. The EU AI Act is the first binding act on artificial intelligence. The regulation is intended to apply across many sectors, harmonizing standards to avoid fragmentation by numerous national laws, stimulating the positives, reducing and managing risks, and promoting best practices on the subject. The regulatory approach is human-centered, and its strategy is to establish rules gradually, based on the risks that artificial intelligence systems or models represent. Thus, the greater the risk, the more obligations there are.

Initially proposed by the European Commission in April 2021, the EU AI Act underwent many changes before being published in the Official Journal of the European Union and entering into force on August 1st.

The Act was enacted with previously published regulations in mind. It operates alongside data protection law, competition law, consumer protection law and worker rights, all of which continue to apply to AI systems; none of these is displaced by the Act.

The EU AI Act regulates providers, deployers, importers, distributors, and manufacturers within the EU. The AI Act divides AI systems into groups based on the risk a system poses, from prohibited practices down to minimal-risk systems:

  • Unacceptable risk: AI practices whose risks are deemed unacceptable are simply prohibited.
  • High-risk AI systems2: systems that present risks to the health or safety, or to the fundamental rights, of individuals3. They must comply with strict requirements before and after being placed on the market.
  • Limited/minimal risk AI systems4: systems, particularly those designed to interact with humans or generate content, that are subject to specific information and transparency requirements.

Unacceptable risks

This concerns, for example, biometric image capture aimed at creating or expanding facial recognition databases, social scoring systems, 'real-time' remote biometric identification in public spaces for law enforcement, and the prediction of criminal offenses based on a person's personality. A social scoring system like China's, which grants opportunities to areas and individuals depending on their score, would, for instance, be prohibited under the Act.

These AI practices are simply prohibited.

High-risk

High-risk systems present risks to the health or safety, or to the fundamental rights, of individuals5. This is the most heavily regulated category because such systems can be unpredictable and dangerous. Before these products are placed on the market, the EU requires them to comply with the Act's rules. The category includes AI systems used to assess people for recruitment, to evaluate the security risk a person poses, or to examine applications for asylum or visas.

The regulation requires the provision of technical documentation, constant evaluation of the system's proper functioning and, above all, a good level of accuracy, robustness and cybersecurity. Because this category covers a wide range of applications, the AI Act specifies that compliance with European harmonized standards grants providers of high-risk AI systems a presumption of conformity, combined with post-market monitoring and corrective actions where necessary.

Limited/minimal risk

Some AI systems, particularly those designed to interact with humans or generate content, are seen as a threat because of the risk of impersonation, copyright infringement or deception. This category covers customer service chatbots, video game bots, and even Apple's Siri.

These systems must adhere to specific information and transparency requirements that the EU considers minimal: deployers must inform users when they are interacting with a chatbot and disclose when multimedia content is artificially generated or manipulated, with exceptions only in limited cases (such as crime prevention). In addition, providers of AI systems generating large amounts of synthetic content must use reliable, effective, and robust techniques to mark AI-generated or manipulated output and make it detectable as such.
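The tiered structure described above can be sketched as a simple lookup. This is purely illustrative: the tier names and the example mappings are our own shorthand for the article's examples, not terminology or classifications taken verbatim from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers of the EU AI Act, from most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity requirements before market placement, plus post-market monitoring"
    LIMITED_MINIMAL = "transparency and disclosure obligations"

# Hypothetical example systems drawn from the article, mapped to their tier.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "recruitment screening system": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED_MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

In practice, of course, classifying a real system requires a legal analysis against Articles 5-7 and Annex III, not a dictionary lookup.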



Next to these categories sit the general-purpose AI models. They may appear harmless at first, but it is important to monitor their impact, as they can qualify as high risk depending on the circumstances.

What is a General-Purpose AI (GPAI)?

The Act defines GPAI systems as "systems based on a GPAI model, which have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems"6. In practice, such a model is trained on a large amount of data using self-supervision and can be integrated into a variety of downstream systems or applications, ChatGPT being the best-known example.

When requested, providers of these models must present up-to-date technical documentation, demonstrate compliance with Union copyright law, and be transparent about training data, including publicly releasing a detailed summary of the training data used for their models.

The AI Act establishes the European AI Office and calls for a code of conduct to be drawn up in parallel. Further obligations for those it covers are still pending.

However, some exemptions exist for:

  • AI systems developed or used exclusively for military purposes;
  • AI systems used by public authorities or international organizations in non-Union countries for law enforcement or judicial cooperation with the EU;
  • AI systems developed and used for the sole purpose of scientific research and discovery;
  • AI systems in the research, testing, and development phase, before being placed on the market or put into service;
  • personal, non-professional use of AI.

Free and open-source AI components may also benefit from exemptions under the Act's provisions.

What's Next?

After being signed by the presidents of the European Parliament and of the Council, the legislative act was published in the EU's Official Journal on July 12, then:

  • 20 days later: enters into force (August 1st)
  • In 6 months: the prohibitions apply, and prohibited systems must be withdrawn
  • In 12 months: the provisions concerning GPAI and penalties will apply
  • In 24 months: obligations for high-risk AI systems take effect, covering critical areas such as AI in recruiting, biometrics, and critical infrastructure.
  • In 36 months: All AI Act provisions come into effect across all risk categories, requiring full compliance with the regulation.
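The staggered timeline above can be approximated in a few lines of Python. This is naive month arithmetic from the entry-into-force date, offered only as an illustration; the Act itself fixes the exact application dates, which may differ by a day or two from this calculation.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # twenty days after Official Journal publication

def add_months(start: date, months: int) -> date:
    """Return the same day-of-month `months` months after `start`."""
    month_index = start.month - 1 + months
    year = start.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, start.day)

# Milestones paraphrased from the article, keyed by months after entry into force.
MILESTONES = {
    "prohibitions apply": 6,
    "GPAI and penalty provisions apply": 12,
    "high-risk obligations apply": 24,
    "full application of the Act": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: ~{add_months(ENTRY_INTO_FORCE, months)}")
```

Running this puts the prohibitions at roughly February 2025 and full application at roughly August 2027, consistent with the list above.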

Companies must assemble their compliance teams to meet these requirements. This will require strong collaboration and, for some, disclosure of AI components previously kept confidential. There is still time for them to make operational adjustments, carry out risk assessments, and train their staff. The European Union's official artificial intelligence website offers a compliance checker that lets companies see which points of the AI Act concern them and gives insight into which policies to adopt.

Concerning fines, the bar is high. Penalties range from €35 million or 7% of the company's total worldwide annual turnover for the use of prohibited AI, to €15 million or 3% of total turnover for non-compliance with most other obligations, including those concerning GPAI. Businesses that give incorrect, incomplete or misleading information face fines of up to €7.5 million or 1% of annual turnover7.
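For large companies these caps generally work on a "whichever is higher" basis, so the turnover percentage dominates once turnover is large enough (for SMEs and start-ups, the lower of the two applies instead). A minimal sketch of that arithmetic, with a function name of our own choosing:

```python
def penalty_cap(fixed_cap_eur: float, turnover_pct: float,
                annual_turnover_eur: float) -> float:
    """Upper bound of a fine for a large company: the higher of the fixed cap
    and the given percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with €2 billion annual turnover using a prohibited AI practice:
cap = penalty_cap(35_000_000, 0.07, 2_000_000_000)
print(f"Maximum fine: €{cap:,.0f}")  # the 7% branch (€140,000,000) dominates
```

For a smaller firm with, say, €100 million in turnover, the fixed €35 million cap would be the binding figure under this "higher of" rule.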

However, the AI Act weighs less heavily on start-ups and small and medium-sized enterprises (SMEs). It aims to give start-ups and SMEs the opportunity to develop and train AI models before releasing them to the public. To that end, national authorities must provide regulatory sandboxes (controlled testing environments) that simulate conditions close to the real world, so that innovation is still promoted8.

Critics

The EU AI Act still raises criticism and concerns. Firstly, human rights associations criticized the EU's decision not to ban public mass surveillance outright, considering it dangerous. Tech companies pointed to overly broad concepts and definitions in the Act, such as the definition of AI itself or of "human vulnerability", which will be left to the courts to interpret. Finally, some companies, including Meta, complain that the EU is curbing advancement and will push them toward the US, where regulation is more lenient. The risk is that the AI Act over-regulates the sector and soon finds itself out of date.

Conclusion

To sum up, AI is a revolutionary tool that drastically improves our day-to-day lives, but the sector needed a framework because the technology was largely unregulated. The EU has delivered one. This act represents a key moment for the technology industry. By setting such standards, European law aspires, through the AI Act, to guide this technology for the States it oversees. Policymakers found a way to keep pace with innovation while respecting fundamental rights, the market and democracy. The EU AI Act can shape the future of AI not just in the EU but globally, as it may set a precedent for other countries to follow. Regardless, whether applied to a provider, developer or deployer of AI technology, the AI Act has the potential to significantly change the way companies operate within the EU and beyond. Companies must now align.

Technology, AI, its techniques and its consequences might continue to evolve in the future, but the European Union seems ready to face it and to make sure AI is safe for everyone.

Footnotes

1. Article 3, 1) AI Act.

2. Articles 6 & 7 AI Act.

3. Article 79, §1 AI Act.

4. Article 52 AI Act.

5. Article 79, §1 AI Act.

6. Article 3, 63) AI Act.

7. Article 99 AI Act.

8. Article 57 AI Act.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
