The objective of the draft Artificial Intelligence Act (“AI Act”) is to ensure that artificial intelligence systems placed on the EU market and used there are safe and respect fundamental rights. The AI Act also aims to support the development of the internal market and to promote competitiveness. Despite these good intentions, the end result may be quite the opposite. The so-called Brussels effect came true for the General Data Protection Regulation (GDPR), whose requirements became a de facto global standard; whether the AI Act will repeat that feat is an open question.

Most developed countries can agree with the EU's view on the prohibited uses of artificial intelligence, but can the EU produce AI regulation that would serve as a model for the whole world? Or will heavy AI regulation lead to a situation where Europe is left behind in AI development and eventually has to adapt to standards formed in other markets? Furthermore, is the new regulation actually necessary? For example, the frequently discussed issue of facial recognition surveillance in public places has already been opposed by the European Data Protection Board under the current regulatory framework.

Unlike earlier EU legislation, the AI Act concerns a specific technology. For this reason, the proposal raises a fundamental question: is the Act aimed at regulating the technology itself, or the purposes for which the technology is used?

Much critique has been raised, suggesting that existing regulation, such as the GDPR and the product liability and product safety rules, could already offer sufficient protection when a specific technology is applied. The GDPR already regulates automated decision-making based on personal data, while the AI Act proposal requires that the data used by high-risk systems be relevant, representative, accurate and complete. The completeness requirement in particular has long been debated: can any operator ensure the completeness and accuracy of data in all circumstances?
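To illustrate why completeness is so hard to guarantee, consider a minimal sketch in Python. All field names, records and the check itself are invented for illustration; neither the GDPR nor the AI Act prescribes any particular test.

```python
# Hypothetical illustration: a simple completeness check on training records.
# Field names and records are invented for this sketch.

REQUIRED_FIELDS = {"age", "income", "postal_code", "outcome"}

records = [
    {"age": 34, "income": 52000, "postal_code": "00100", "outcome": 1},
    {"age": 41, "income": None, "postal_code": "33100", "outcome": 0},
    {"age": 29, "income": 61000, "postal_code": None, "outcome": 1},
]

def missing_fields(record: dict) -> set[str]:
    """Return required fields that are absent or empty in a record."""
    return {f for f in REQUIRED_FIELDS if record.get(f) is None}

for i, record in enumerate(records):
    gaps = missing_fields(record)
    if gaps:
        print(f"record {i} is missing: {sorted(gaps)}")

# The limit of such checks is the core of the debate: they reveal which
# *known* fields are empty, but they can never prove that the data set
# covers every relevant case that exists in the real world.
```

A check like this can detect gaps in the data an operator already holds, but it cannot demonstrate completeness in the sense the debate is concerned with.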

The key concepts of the proposal have also been criticised as too ambiguous, too narrow or overlapping with other regulations. In addition, the proposal imposes broad but vaguely defined obligations on high-risk applications, such as extensive human oversight requirements. Requiring oversight, however, does not guarantee that the persons conducting it can actually detect and address deviations, as testing of self-driving cars has shown. The obligations will be expensive for parties that want to do the right thing, while it may prove difficult to hold dishonest operators to account.
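As a thought experiment only, human oversight is often implemented in practice as a human-in-the-loop gate that refers low-confidence decisions to a reviewer. The sketch below uses an invented confidence threshold and invented functions (the Act requires oversight but mandates no design), and it also shows why such a gate is no guarantee: a confidently wrong decision bypasses the human entirely.

```python
# Hypothetical human-in-the-loop gate. The threshold and all functions are
# invented for illustration; the AI Act does not mandate this design.

CONFIDENCE_THRESHOLD = 0.85  # assumed value, not taken from the Act

def model_predict(application: dict) -> tuple[str, float]:
    """Stand-in for a real model: returns (decision, confidence)."""
    score = 0.9 if application.get("income", 0) > 40000 else 0.6
    return ("approve" if score > 0.5 else "reject", score)

def ask_reviewer(application: dict, decision: str, confidence: float) -> str:
    """Stand-in for routing a case to a human reviewer."""
    print(f"review needed: {application} -> {decision} ({confidence:.2f})")
    return decision  # a real reviewer could overturn this

def decide(application: dict) -> str:
    decision, confidence = model_predict(application)
    if confidence < CONFIDENCE_THRESHOLD:
        # Only low-confidence cases reach the human. A confidently *wrong*
        # decision is never seen by the reviewer, which is the weakness
        # raised by the critics above.
        return ask_reviewer(application, decision, confidence)
    return decision

print(decide({"income": 30000}))  # routed to the reviewer
print(decide({"income": 80000}))  # decided automatically
```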

Despite the criticism, the AI Act is on its way, so it is advisable to start preparing for its entry into force. The most important step at the moment is to assess which of the three AI risk categories your application falls into: unacceptable, high-risk or unregulated. If an application under development is categorised as high-risk, make sure that it fulfils the requirements of the Act, such as establishing a risk management system, drafting technical documentation, automatically recording events and ensuring human oversight.
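For the automatic recording of events, one practical starting point is structured logging of every decision the system makes. The following minimal sketch uses only the Python standard library; the event fields, system identifier and example values are assumptions chosen for illustration, not a schema taken from the Act.

```python
# Minimal sketch of automatic event recording for a hypothetical high-risk
# AI application, using only the standard library. Field names are
# illustrative assumptions; the AI Act does not prescribe a log schema.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_events.log", level=logging.INFO,
                    format="%(message)s")

def record_event(system_id: str, input_data: dict,
                 output: str, model_version: str) -> None:
    """Append one decision event as a JSON line, machine-readable for audits."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input": input_data,
        "output": output,
    }
    logging.info(json.dumps(event))

# Example: record a single (invented) decision of a credit-scoring system.
record_event(
    system_id="credit-scoring-demo",  # hypothetical identifier
    input_data={"income": 52000, "age": 34},
    output="approve",
    model_version="1.4.2",
)
```

Writing each event as one JSON line keeps the log both human-readable and easy to query later, which is helpful when documentation or evidence of system behaviour is requested.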

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.