ARTICLE
5 September 2024

Is Your Annex III AI System High-Risk? Definitely Maybe.

William Fry

Contributor

William Fry is a leading full-service Irish law firm with over 310 legal and tax professionals and 460 staff. The firm's client-focused service combines technical excellence with commercial awareness and a practical, constructive approach to business issues. The firm advises leading domestic and international corporations, financial institutions and government organisations. It regularly acts on complex, multi-jurisdictional transactions and commercial disputes.

The European Union's Artificial Intelligence Act (AI Act) is shaping up to be one of the most significant regulatory frameworks for AI, potentially setting a global benchmark.

While the Act provides a clear classification of what constitutes a “high-risk” AI system, particularly those listed in Annex III, the inclusion of certain derogations adds a layer of complexity. These derogations allow organisations to self-assess and determine whether their AI systems, despite being listed in Annex III, actually warrant the high-risk label.

However, this flexibility also introduces a significant challenge: a lack of clarity. For many organisations, the process of self-assessment may be fraught with uncertainty. The derogations, while offering a route to avoid the heavy burdens of high-risk classification, can also make it more difficult to decisively categorise an AI system. The ambiguity surrounding whether a system is high-risk or not may lead to confusion, potentially increasing the risk of non-compliance.

Moreover, the responsibility of making this judgement falls squarely on the shoulders of the organisations, raising the stakes considerably. Misjudging the risk level could result in severe regulatory consequences, not to mention reputational damage. This article delves into the intricacies of these derogations, offering insights to help organisations navigate this uncertainty and make informed decisions about their AI systems.

Navigating the AI Act's requirements is not merely about ticking boxes; it's about understanding the nuanced risks and ensuring that your AI deployment is both innovative and compliant. With the right approach, organisations can leverage these derogations to their advantage, but only if they fully grasp the implications and responsibilities that come with them.

The Core of Annex III: High-Risk AI Systems

Annex III of the AI Act lists specific AI systems that are presumed to be high-risk due to their potential to impact health, safety, and fundamental rights. These systems are involved in areas such as critical infrastructure, education, employment, law enforcement, and more. For organisations, this classification means significant compliance requirements, including conformity assessments that can be both time-consuming and costly and that, for certain systems such as biometrics, may require the involvement of a third party.

The rules on Annex III high-risk AI systems do not come into force until 2 August 2026; however, we are already advising organisations on compliance. The AI products being released and developed now will need to comply, and most organisations are finding AI Act compliance to be a complex and time-consuming task.

The key insight here is that the AI Act recognises that not all AI systems within these categories are created equal. The risk associated with an AI system can vary significantly depending on how it is used, the scope of its application, and the extent to which it influences outcomes. This is where the self-assessment derogations become particularly valuable, allowing organisations to apply a more nuanced, context-specific analysis to their AI systems.

The Derogations: Is Your AI System Truly High-Risk?

Despite an AI system being listed in Annex III, it may not necessarily be high-risk if it meets specific criteria outlined in Article 6 of the AI Act. Organisations should view these derogations as a tool for refining their compliance strategy, allowing them to avoid unnecessary burdens while still adhering to the Act's overarching goals of safety and fundamental rights protection.

While the derogations are specifically directed at providers, users of AI systems must also be aware of these classifications, as the deployment and use of high-risk AI systems come with their own set of compliance obligations. Users must ensure that the AI systems they deploy have been properly assessed and documented by the providers, particularly when dealing with systems that might fall under the high-risk category. Each derogation is set out below, followed by a simplified sketch of how the conditions interact.

  1. Narrow Procedural Task (Article 6(3)(a)):
    • If your AI system is designed to perform a narrow, procedural task within a broader process, it may not be classified as high-risk. For example, an AI that automates a specific administrative task without influencing the overall decision-making process could fall under this exemption. This highlights the importance of clearly defining the scope of your AI system's function. If your system's role is tightly focused and does not extend to critical decision-making, you may have a case for exemption. However, it is crucial to document this scope precisely to support your claim.
  2. Improving a Completed Human Activity (Article 6(3)(b)):
    • AI systems that enhance the results of a task already completed by a human might also be exempt from the high-risk classification. For instance, an AI that refines data after a human has already made the decision could be considered non-high-risk. Organisations should consider how their AI systems integrate into existing workflows. If your AI is augmentative, meaning it supports or enhances human judgement rather than replacing it, this could be a strong argument for non-high-risk classification. This insight is particularly valuable for sectors like finance or healthcare, where AI is often used to support expert analysis.
  3. Detection of Decision-Making Patterns (Article 6(3)(c)):
    • AI systems designed to detect patterns in decision-making, but not to replace or heavily influence human decisions, may be exempt. The requirement for proper human review is crucial here. For organisations, this derogation offers a pathway to compliance if they can demonstrate that their AI system's outputs are always subject to human oversight and do not autonomously influence critical decisions. This could involve implementing robust review processes and ensuring that human operators are fully trained to interpret and act on AI-generated insights. This approach not only supports compliance but also enhances trust in AI-driven processes.
  4. Preparatory Tasks (Article 6(3)(d)):
    • AI systems performing preparatory tasks for broader assessments could also avoid the high-risk label. For example, an AI that gathers and organises data to assist in decision-making, without making the decision itself, could be considered non-high-risk. Organisations should carefully assess the role of AI in their processes—if the AI's role is purely preparatory and the final decision rests with a human, this could significantly reduce the system's risk profile. This insight is particularly relevant for industries like law and public administration, where AI is often used to support decision-making rather than execute decisions autonomously.
  5. Profiling Exception:
    • It is important to note that AI systems involved in profiling individuals will always be classified as high-risk. This is non-negotiable. For organisations, this means that any AI system designed to categorise or profile individuals—especially in ways that could affect their rights or opportunities—must be treated with the utmost caution and subjected to the full rigour of the high-risk compliance framework. This insight serves as a reminder that while derogations offer flexibility, they do not dilute the Act's commitment to protecting individuals from potentially harmful uses of AI.
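
To make these conditions concrete, the following is a minimal, illustrative sketch (in Python) of the classification logic described above. It is not a compliance tool: the field names and structure are our own assumptions, the Act prescribes no such schema, and a real assessment turns on legal analysis of the system in its context of use.

    from dataclasses import dataclass

    # Illustrative only: encodes the Article 6(3) logic described above.
    # Field names are our own; the AI Act prescribes no such schema.
    @dataclass
    class AnnexIIISystem:
        performs_profiling: bool                  # profiling of natural persons
        narrow_procedural_task: bool              # Article 6(3)(a)
        improves_completed_human_activity: bool   # Article 6(3)(b)
        detects_patterns_with_human_review: bool  # Article 6(3)(c)
        preparatory_task_only: bool               # Article 6(3)(d)

    def is_high_risk(system: AnnexIIISystem) -> bool:
        # The profiling carve-out overrides every derogation.
        if system.performs_profiling:
            return True
        derogations = (
            system.narrow_procedural_task,
            system.improves_completed_human_activity,
            system.detects_patterns_with_human_review,
            system.preparatory_task_only,
        )
        # Default presumption: an Annex III system is high-risk
        # unless at least one derogation condition is met.
        return not any(derogations)

    # Example: a purely preparatory system relying on Article 6(3)(d)
    triage = AnnexIIISystem(False, False, False, False, True)
    assert not is_high_risk(triage)

The point of the sketch is the ordering: the profiling carve-out is checked first and cannot be displaced by any derogation, mirroring the structure of Article 6(3).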

Documenting the Self-Assessment

Article 6(4) of the AI Act provides that a provider who considers that an AI system referred to in Annex III is not high-risk must document its assessment before that system is placed on the market or put into service. That provider is also subject to the registration obligation in Article 49(2), which requires it, before placing the system on the market or putting it into service, to register itself and the system in the new EU database for high-risk AI systems.

This means that even where a provider concludes that its system is not high-risk, it must still register the system and remains subject to certain regulatory obligations.

Upon request of the national competent authorities, the provider must produce the documentation of that assessment.

Therefore, if your organisation determines that an AI system does not meet the high-risk threshold under these derogations, the assessment process needs to be carefully recorded and documented. This documentation should include a detailed explanation of why the system qualifies for the derogation, supported by evidence and analysis. This is not just a formality; it is a critical safeguard against regulatory scrutiny. In the event of an audit or investigation, robust documentation can protect your organisation from penalties and demonstrate your commitment to responsible AI deployment.
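
As a practical aid, a self-assessment record might capture the elements described above in a structured form. The sketch below is purely illustrative and continues the Python convention used earlier; the AI Act does not prescribe any format, and every field shown is our own assumption about what a competent authority might expect to see.

    from dataclasses import dataclass, field

    # Hypothetical record of an Article 6(4) self-assessment.
    # The AI Act mandates documentation, not this structure.
    @dataclass
    class DerogationAssessment:
        system_name: str
        annex_iii_category: str        # e.g. "employment" or "education"
        derogation_relied_on: str      # e.g. "Article 6(3)(d)"
        reasoning: str                 # why the condition is met
        supporting_evidence: list[str] = field(default_factory=list)
        documented_before_placement: bool = False  # Article 6(4) timing
        registered_in_eu_database: bool = False    # Article 49(2)

        def ready_for_market(self) -> bool:
            # Documentation and registration must both precede placing the
            # system on the market or putting it into service.
            return (self.documented_before_placement
                    and self.registered_in_eu_database)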

Moreover, the European Commission will provide further guidelines and practical examples within 18 months of the Act's entry into force, that is, by 2 February 2026. Staying informed about these guidelines will be crucial for organisations seeking to navigate compliance effectively. This proactive approach to compliance, anticipating regulatory updates and adapting your strategies accordingly, will be essential in maintaining a compliant and competitive position in the market.

The Role of the European Commission

The Commission's ability to amend the conditions for derogations through delegated acts provides the regulatory framework with necessary flexibility. For organisations, this means that the criteria for high-risk classification can evolve as new risks emerge or as AI technology advances. Keeping an eye on these developments and being ready to adapt your AI systems and compliance strategies will be crucial. This dynamic regulatory environment requires organisations to be both vigilant and agile, ensuring that their AI systems remain compliant even as the rules shift.

Conclusion

The AI Act places a significant burden on organisations to proactively assess whether their AI systems fall under the high-risk category. It is crucial for organisations to take action to evaluate their AI systems against the criteria in Annex III and the associated derogations. Conducting a thorough self-assessment is not just a regulatory requirement; it is a strategic necessity to avoid costly mistakes.

Whether your AI system is ultimately classified as high-risk or not, you will still face registration and compliance obligations under the AI Act. This includes rigorous documentation and, where the high-risk classification applies, conformity assessment. Remember, profiling AI systems will always be classified as high-risk, without exception, so if your system engages in profiling, you must be prepared to meet the highest standards of compliance.

Given the complexity and potential consequences of misclassification, it is imperative to seek legal advice sooner rather than later. Understanding the nuances of the AI Act and how it applies to your specific AI systems requires expert guidance. Engaging with legal professionals who specialise in AI regulation will help ensure that your organisation not only complies with the law but also remains competitive in this rapidly evolving landscape.

In short, do not delay. Assess your AI systems now, understand your obligations, and seek the legal expertise needed to navigate this challenging regulatory environment. The sooner you act, the better positioned you will be to manage the risks and seize the opportunities that AI offers.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
