The Member States of the European Union have accepted the content of the provisions regulating AI, thus taking an important step towards the introduction of a comprehensive legal act governing the functioning of artificial intelligence in the EU. It is therefore worth examining the content of the AI Act in its accepted form and asking whether the proposed EU regulations will impair Europe's innovative capacity and leave it on the losing side of the technological race.

In the business sector, AI contributes to increased efficiency, the automation of numerous processes, improved customer service and product innovation. Given these prospective opportunities and benefits, it is not only entrepreneurs but also consumers who stand to gain. The future of Europe lies in the strength of its innovation, and the AI Act is to become the basis for regulating AI throughout the EU. It is therefore no surprise that the discussions surrounding it are being conducted with such intensity.

The AI Act aims to harmonise the provisions on AI, i.e. to create a robust legal framework striking a balance between innovation and the protection of EU citizens' rights from the risks arising from the use of artificial intelligence. To that end, the AI Act introduces rules for the ethical and safe use of artificial intelligence, while ensuring that AI serves the public good (i.e. it is intended to protect against the potential adverse effects of its use, regardless of whether the AI system providers are established inside or outside the Union). Importantly, the AI Act is meant to ensure that the freedom of science is respected and that research and development activity is not undermined. In order to maintain the balance referred to above, a proportionate and effective set of binding rules is to be introduced, based on risk and on its intensity and scope. Unfortunately, achieving this involves imposing a number of obligations on AI system operators and providers.

High-risk systems

The content of the AI Act, as jointly negotiated, accounts for the possible threats posed by artificial intelligence, dividing AI applications into four risk categories (unacceptable, high, limited and minimal risk), with specific obligations imposed depending on the classification. Significant regulations apply to systems classified as high-risk, the use of which is to be authorised under strict control. Such systems will include, in particular, the following (a simplified illustration follows the list):

- remote biometric identification systems, systems for biometric categorisation based on sensitive or protected characteristics, and systems for emotion recognition;

- AI systems used as safety components in the management and operation of critical digital infrastructure;

- AI systems used to decide on the access or admission of people to education and training institutions at all levels, to evaluate learning outcomes, and to monitor and detect prohibited student behaviour during tests;

- AI systems used to recruit or select natural persons, make decisions affecting working conditions, promotion and the termination of employment contracts, allocate tasks and monitor performance;

- AI systems used by public authorities to assess the eligibility of individuals for significant public benefits, assess the risk of a person becoming a victim of a crime, serve as polygraphs or similar tools, assess the reliability of evidence in investigations and prosecutions, assess the risk of a person committing a crime, or assess criminal personality traits and behaviour;

- AI systems used in the management of migration, asylum and border control.
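
For readers who prefer to think in terms of data structures, the Act's tiered approach can be pictured as a simple classification lookup. The Python sketch below is purely illustrative: the tier labels follow the Act's four-category structure described above, but the example systems and their mappings are hypothetical, and any real classification requires a case-by-case legal analysis against the Act's detailed criteria and annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four-tier risk taxonomy (simplified labels)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: strict obligations, conformity assessment"
    LIMITED = "limited risk: transparency duties"
    MINIMAL = "minimal risk: largely unregulated"


# Hypothetical example mappings, for illustration only; actual
# classification must follow the AI Act's annexes and criteria.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system} -> {tier.value}")
```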

Obligations concerning high-risk systems

The AI Act imposes a number of obligations on high-risk system providers and deployers which concern how a system is designed, used and monitored. Before putting an AI system into use, high-risk system providers must, among other things, subject it to an appropriate conformity assessment procedure. In addition, AI providers will need to introduce a quality management system which ensures compliance with the AI Act and is documented in a systematic and structured manner in the form of written policies, procedures and instructions. That system should be proportionate to the size of the provider's organisation and cover, among other things, the strategy for regulatory compliance, the techniques and procedures used in the design, development, quality control and quality assurance of a high-risk AI system, as well as risk management, post-market monitoring, the procedures for serious incident reporting and data management.

A barrier to GPAI systems?

Special regulations also apply to GPAI (general-purpose AI) models, such as ChatGPT or Bard, due to the demonstrated or potential range of their possible applications, both intended and unintended. It is the regulations on GPAI that raise many concerns among business groups, which fear a potential inhibition of AI development on the European market. Not surprisingly, therefore, heated discussions on the possibility of introducing less stringent regulations took place behind the scenes of the negotiations, with the continued development of European innovation put forward as their justification. The information required for GPAI models includes, among other things, a general description of the model, the tasks it is to perform, the type and nature of the systems into which it can be integrated, the acceptable use policy, and its interactions with software or hardware. The key aspect here is intended to be transparency and the availability of technical documentation to recipients.

However, it is worth realising that the definition of artificial intelligence in the agreed version of the AI Act still leaves many doubts and ambiguities, as well as concerns that the understanding of AI's scope could open up potential avenues for circumventing the law. According to the AI Act, an AI system is a machine-based system designed to operate with varying degrees of autonomy and capable of exhibiting adaptability after deployment. Such a system, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments, and its ability to infer extends beyond basic data processing, enabling it to learn, reason or model.

Is the definition, as proposed, "tight" enough to ensure the proudly promoted principles of ethical and safe use of artificial intelligence and to ensure that AI does serve the public good? In such a dynamically changing reality, especially with AI undergoing such rapid technological development, it is worth asking whether, when it comes to defining AI, a better solution might be to adopt an exclusionary approach rather than to deliberate on what constitutes such inference.

A vision of the upcoming AGI (artificial general intelligence) creating new mathematical models is looming on the horizon. And this is what drives the direction of AI development, i.e., the development of AI capable of performing a wide range of intellectual tasks, just like the human brain does. Current models lack many skills deemed crucial to human intelligence, such as common-sense reasoning. AGI may very soon be able to perform any intellectual task at a level matching or exceeding human capabilities (see, e.g., Q*, which, according to media reports, can already solve some mathematical problems). The AI Act does not impede the development of AI systems and models created solely for research and development purposes. It is also worth realising that AGI means the development of models aiming not only to enhance our comprehension of mathematics but also to alter the approach to business and science, with a high likelihood that the results of this work will be adopted in other areas of our lives.

Exemptions for law enforcement agencies

Another area which has aroused strong emotions from the very outset is the ban on the use of remote "real-time" biometric identification systems in publicly accessible spaces for law enforcement purposes without prior judicial authorisation. The AI Act does, however, introduce certain exceptions for law enforcement agencies. Exceptions to the general rules are allowed in particularly justified emergency situations, where the need to use these systems is so urgent that authorisation cannot objectively and effectively be obtained before their use commences. The use of real-time biometric identification systems would be limited in time and place, serving the purpose of searching for victims, preventing a specific and present terrorist threat, or locating or identifying a person suspected of having committed one of the specific crimes listed in the regulation (such as terrorism, human trafficking, illicit trafficking in drugs, psychotropic substances, human organs and tissues, weapons, ammunition and explosives, armed robbery, sabotage, etc., as well as participation in a criminal organisation involved in one or more of the aforementioned crimes).

Sanctions and support for innovation

Under the AI Act, the EU Member States will be required to establish rules on penalties and other enforcement measures for violations of the AI Act. The penalties provided for will have to be effective, proportionate and dissuasive, taking into account the interests of SMEs, including start-ups, and their economic capacity. Entrepreneurs should therefore not stand idly by as the upcoming changes draw nearer, simply waiting out the statutory vacatio legis before the AI Act takes effect. Violating the prohibitions on the AI practices referred to in the AI Act may be subject to administrative fines of up to EUR 35,000,000 or, where the offender is an enterprise, up to 7% of its total annual worldwide turnover for the preceding financial year, whichever is higher, as the sketch below illustrates.
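
To make the "whichever is higher" mechanism concrete, the short Python sketch below computes the upper limit of such a fine for a hypothetical enterprise; the turnover figure is invented purely for illustration.

```python
def max_fine_cap_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper limit of the administrative fine for prohibited AI practices
    under the AI Act: EUR 35 million or 7% of the total annual worldwide
    turnover for the preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)


# Hypothetical enterprise with EUR 2 billion annual turnover:
# 7% of 2,000,000,000 = EUR 140,000,000, which exceeds EUR 35,000,000,
# so the higher figure forms the cap.
print(max_fine_cap_eur(2_000_000_000))  # 140000000.0
```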

The EU regulations give rise to concerns about the potential inhibition of AI development, especially in the context of the stringent requirements for technical stability and the AI providers' liability for possible damages. The regulations, together with the structures introduced to support innovation, are nevertheless intended to form the foundation for the application of AI in the EU, with the AI Act aiming to maintain security and stability while enabling the development of artificial intelligence and innovation around it. Everyone should be aware, however, that our future will become ever more inextricably linked to what AI has in store for us. Taking steps towards understanding and bearing responsibility for technology may be the key to making AI civilised, and the very near future will show whether the AI Act proves effective and helpful in carrying out this task.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.