Practical Aspects Of Application Of EU Regulation On AI


To understand the implications of the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts ({SEC(2021) 167 Final} - {SWD(2021) 84 Final} - {SWD(2021) 85 Final}) (the AI Act), companies should first assess whether they have artificial intelligence systems in use or under development, or intend to acquire such systems from third-party vendors, and list the identified systems in a model repository. Many financial services organizations can leverage existing model repositories and the surrounding model management processes, adding artificial intelligence as a further topic.

Organizations that have not needed a model repository to date should start by evaluating the status quo to understand their (potential) exposure. Even if artificial intelligence is not currently in use, it is very likely that this will change in the coming years. Initial identification can start with an existing software catalog or, if one is not available, with surveys sent to various business units.
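The inventory step described above can be sketched as a minimal model-repository record. The field names and example entries below are illustrative assumptions, not anything prescribed by the AI Act:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One entry in an AI model repository (illustrative fields only)."""
    name: str
    owner_unit: str                           # business unit responsible for the system
    status: str                               # "in use", "under development", "planned purchase"
    third_party_vendor: Optional[str] = None  # set if the system is acquired externally
    notes: List[str] = field(default_factory=list)

# A minimal repository seeded from a software catalog or business-unit surveys
repository = [
    AISystemRecord("CV screening model", "HR", "in use"),
    AISystemRecord("Customer chat assistant", "Service", "planned purchase",
                   third_party_vendor="ExampleVendor"),
]
```

Even a flat list like this gives an organization a starting point for the risk classification that follows.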

Artificial intelligence systems are classified by risk. The AI Act distinguishes between four categories of risk: unacceptable, high, limited and minimal.

The law specifies examples of systems posing unacceptable risks. Systems in this category are prohibited. Examples include real-time remote biometric identification in public spaces or social scoring systems, as well as the use of subliminal influence techniques that exploit the vulnerabilities of specific groups.

High-risk systems are allowed but must meet multiple requirements and pass a conformity assessment. This assessment must be completed before the system is placed on the market, and the systems must also be registered in an EU database to be set up for this purpose. Operating high-risk AI systems requires an adequate risk management system, record-keeping capabilities and human oversight. Adequate governance of the data used for training, testing and validation must be ensured, as well as controls guaranteeing the cybersecurity, robustness and integrity of the system.

Examples of high-risk systems include those related to the operation of critical infrastructure, systems used in hiring processes or employee evaluations, credit scoring systems, automated insurance claims processing, and the determination of risk premiums for customers.

Other systems are considered limited or minimal risk. For these, transparency is required: the user must be informed that what they are interacting with is generated by artificial intelligence. Examples include chatbots and deepfakes, which are not considered high risk but whose users need to know that artificial intelligence is behind them.
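The four risk tiers described above can be represented as a simple enumeration. The example mapping and the one-line obligation summaries are illustrative simplifications; classifying a real system requires legal analysis of its intended use:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed after a conformity assessment
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping of the article's examples to tiers
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "credit scoring system": RiskTier.HIGH,
    "CV screening in hiring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def obligations(tier: RiskTier) -> str:
    """Very rough summary of what each tier implies (not legal advice)."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment + EU database registration",
        RiskTier.LIMITED: "inform users they are interacting with AI",
        RiskTier.MINIMAL: "voluntary codes of conduct",
    }[tier]
```

Keeping the tier and its obligations next to each inventory entry makes the later compliance assessment mechanical rather than ad hoc.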

All operators of artificial intelligence systems are advised to implement a code of conduct for ethical artificial intelligence. Separately, general-purpose AI (GPAI) models, including foundation models and generative AI systems, are subject to their own classification framework: the AI Act takes a tiered approach to compliance obligations, distinguishing between high-impact GPAI models posing systemic risk and other GPAI models.

If you are a provider, deployer, importer, distributor, or a person affected by artificial intelligence systems, you need to make sure your artificial intelligence practices comply with the new regulation. To assess full compliance with the AI Act, you should take the following steps: (1) assess the risks associated with your artificial intelligence systems, (2) raise awareness, (3) design ethical systems, (4) assign responsibility, (5) stay current, and (6) establish formal governance. By taking proactive steps now, you can avoid potentially significant sanctions for your organization once the regulation takes effect.

Penalties for non-compliance with the AI Act are significant and can seriously affect a provider's or deployer's business. They range from €7.5 million to €35 million, or from 1% to 7% of global annual turnover, depending on the severity of the violation. It is therefore important for stakeholders to make sure they fully understand the AI Act and comply with its provisions.
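The dual caps above interact as follows: for most operators, the applicable maximum is the higher of the fixed amount and the turnover-based amount for the relevant violation tier (the Act treats SMEs differently, applying the lower of the two). A minimal sketch of that calculation, with illustrative figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of a fine under the AI Act's dual-cap scheme.

    For most operators the applicable maximum is the HIGHER of the fixed
    cap and the turnover-based cap for the violation tier in question
    (e.g. EUR 35m / 7% at the top end, EUR 7.5m / 1% at the bottom).
    Sketch only; SMEs are subject to the lower of the two caps instead.
    """
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

# A firm with EUR 2 billion global annual turnover, most severe tier:
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0, i.e. 7% of turnover
```

For large firms the percentage cap dominates, which is why turnover, not the headline euro figure, drives real exposure.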

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
