The European Artificial Intelligence Act in 2026
In June 2024, the European Parliament and the Council of the European Union adopted Regulation (EU) 2024/1689 (the Artificial Intelligence Act), which will become fully applicable on 2 August 2026.
ChatGPT and similar generative AI systems will not be classified as high-risk; however, they will have to comply with transparency requirements and EU copyright law.
Introduction
In light of rapid technological developments and growing dependence on tools such as artificial intelligence, there has been a constant need to lay down rules that guarantee safety and transparency on the one hand and impose compliance measures on the other. The resulting legal framework aims to ensure that AI is safe, ethical and non-discriminatory while leaving room for innovation.
Classification of Risks
The Act classifies AI systems according to risk:
- Unacceptable Risk (Prohibited): systems causing harm, such as behaviourally manipulative AI or indiscriminate biometric scraping.
- High Risk (Regulated): subject to strict assessments and transparency requirements (e.g. healthcare, education, law enforcement).
- Limited Risk (Lighter Transparency): transparency requirements only, such as the obligation on developers and deployers to disclose that content is AI-generated and that the end user is interacting with AI.
- Minimal Risk (Unregulated): covering most AI applications, which may be used freely.
What is prohibited under the Act:
The following types of AI system are banned:
- deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
- exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
- biometric categorisation systems inferring sensitive attributes, except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
- social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits.
- assessing the risk of an individual committing criminal offences solely on the basis of profiling or personality traits.
- compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
- inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
- real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when: a) searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited; b) preventing substantial and imminent threat to life, or foreseeable terrorist attack; c) identifying suspects in serious crimes.
What is considered as 'High risk' under the AI Act:
The following AI systems are classified as high-risk, divided into two categories:
- AI systems used in products governed by EU product safety legislation (e.g. toys, aviation, cars, medical devices)
- AI systems falling into specific areas that will have to be registered in an EU database:
- Management and operation of critical infrastructure
- Law enforcement
- Assistance in interpretation and application of the law
- Migration, asylum and border control management
- Employment, worker management and access to self-employment
- Access to essential private services and public services and benefits
- Education and vocational training
It is noted that high-risk AI providers must implement a risk management system and apply effective data governance to ensure that their datasets are relevant and that errors are minimised. In addition, providers must maintain technical documentation and be able to demonstrate compliance with the regulatory framework in a timely manner.
Any high-risk AI system must be assessed before being placed on the market, and assessment must continue throughout its lifecycle.
Lastly, as regards the most popular generative AI system, ChatGPT: ChatGPT and similar generative AI systems will not be classified as high-risk but will have to comply with transparency requirements and EU copyright law. In particular, they must:
- Disclose that content was AI-generated
- Ensure that the system will not generate illegal content
- Publish summaries of copyrighted data used for training
AGPLAW | A.G. Paphitis & Co. LLC
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.