25 October 2024

Measures In Support Of Innovation In The European Union's AI Act – AI Regulatory Sandboxes

WilmerHale

Contributor

WilmerHale provides legal representation across a comprehensive range of practice areas critical to the success of its clients. With a staunch commitment to public service, the firm is a leader in pro bono representation. WilmerHale is 1,000 lawyers strong with 12 offices in the United States, Europe and Asia.

This blog post discusses the EU Artificial Intelligence Act's ("AI Act") measures in support of innovation. These measures will be particularly relevant for companies engaged in research and development activities.

In previous blog posts, we discussed the risk-based approach of the AI Act and provided details about prohibited and limited-risk AI systems. We also discussed the requirements and stakeholders' obligations associated with high-risk AI systems, what companies need to know if they just want to use AI, and how the AI Act regulates generative AI.

Regulatory Sandboxes

A regulatory sandbox is a tool that allows businesses to explore and experiment with new and innovative products, services, or business models under a regulator's supervision. It gives innovators an incentive to test their innovations in a controlled environment, allows regulators to better understand the technology, and aims to foster consumer choice in the long run.

In recent years, the sandbox approach has gained considerable traction across the EU as a means of helping regulators address the development and use of emerging technologies in a wide range of sectors, including fintech, transport, energy, telecommunications, and health.

Regulatory Sandboxes in the AI Act (Chapter VI)

The AI Act requires each EU Member State to have at least one operational regulatory AI sandbox (or joint sandboxes with other EU Member States) by August 2, 2026 (see our blog post about the road to full applicability of the AI Act).

Sandboxes should provide a controlled environment for innovation, supporting the development, training, testing, and validation of AI systems under regulatory supervision for a limited period before their placement on the market or entry into service. This should be done according to a sandbox plan agreed on by the prospective providers and the competent authority. Sandboxes may also include testing under real-world conditions within the sandbox environment.

  • Authorities' Role. The competent authorities must provide guidance, supervision, and support within the AI regulatory sandbox to identify risks. They must provide written proof of the activities successfully carried out in the sandbox and an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers (i.e., companies developing AI systems) may use such documentation to demonstrate their compliance with the AI Act as part of the conformity assessment process (and accelerate such process) or relevant market surveillance activities. If appropriate, the competent data protection authorities must be associated with the operation of the sandbox.
  • Risk Mitigation. Any significant risks to health, safety, and fundamental rights identified during the development and testing of AI systems must be adequately mitigated. The national competent authorities can temporarily or permanently suspend the testing process or participation in the sandbox if no effective mitigation is possible.
  • Liability. Providers and prospective providers participating in the AI regulatory sandbox remain liable for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox.
  • No Fines. No administrative fine should be imposed for infringements of the AI Act during this process so long as the prospective providers observe the sandbox plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority. The same applies regarding infringements of other laws, provided the authorities responsible for such laws are involved in the supervision of the AI system in the sandbox and have provided guidance for compliance.
  • Implementing Act. The European Commission ("Commission") will adopt implementing acts to specify detailed arrangements for creating, developing, implementing, operating, and supervising AI regulatory sandboxes. These implementing acts will establish terms and conditions applicable to participants; common principles on the eligibility and selection criteria for participation; and procedures for application, participation, monitoring, exiting, and termination.

AI Regulatory Sandboxes and Personal Data

As a general principle, EU data protection law, including the EU General Data Protection Regulation ("GDPR"), remains unaffected by the provisions of the AI Act and will also apply to AI regulatory sandboxes. As an exception to that principle, Article 59 of the AI Act provides that personal data lawfully collected for other purposes may be processed in an AI regulatory sandbox solely for the purpose of developing, training, and testing certain AI systems in the sandbox.

This approach is very restrictive, however, as it only applies when 10 cumulative conditions are met.

  • One such condition is that AI systems must be developed to safeguard substantial public interests in areas such as public safety and health, environmental protection, energy sustainability, transport systems and mobility, critical infrastructures and networks, and public services.
  • Another condition is that the personal data processed must be necessary for complying with the requirements for high-risk AI systems where those requirements cannot effectively be fulfilled by processing anonymized, synthetic, or other nonpersonal data.
  • A particularly challenging condition is that any processing of personal data in the context of the sandbox may not lead to measures or decisions affecting the data subjects and may not affect the application of their data protection rights.
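
Because the Article 59 conditions are cumulative, organizations assessing whether sandbox processing of personal data is even open to them sometimes treat the test as an all-or-nothing checklist. The Python sketch below is purely illustrative: the class, the field names, and the three conditions shown are our paraphrase of the examples above, not a structure defined by the AI Act, and a real assessment would need to cover all ten conditions with legal review.

```python
from dataclasses import dataclass, fields

@dataclass
class Article59Checklist:
    # Hypothetical field names paraphrasing three of the ten cumulative
    # conditions; the AI Act itself prescribes no such data structure.
    safeguards_substantial_public_interest: bool  # e.g., public safety, health, environment
    nonpersonal_data_insufficient: bool           # anonymized/synthetic data would not suffice
    no_effect_on_data_subjects: bool              # no measures or decisions affecting them

def sandbox_processing_available(checklist: Article59Checklist) -> bool:
    # The conditions are cumulative: every one must hold; a single
    # False closes the door. There is no balancing test.
    return all(getattr(checklist, f.name) for f in fields(checklist))
```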

Testing of High-Risk AI Systems in Real-World Conditions Outside AI Regulatory Sandboxes

Providers or prospective providers of specific high-risk AI systems listed in Annex III of the AI Act may test such systems in real-world conditions, outside AI regulatory sandboxes, subject to the conditions outlined below (Article 60). This covers AI systems in areas such as biometrics, critical infrastructure, education and vocational training, employment, worker management, and access to self-employment.

  • Conditions. Testing in real-world conditions can only take place where all the following conditions are met (a simplified, illustrative checklist appears after this list). The Commission will specify the detailed elements of the real-world testing plan in implementing acts.
    • The (prospective) provider has drawn up a real-world testing plan and submitted it to the market surveillance authority where the testing is to be conducted;
    • The competent market surveillance authority has approved the testing. Such approval may be considered granted in the absence of any response within 30 days, unless otherwise specified by national law;
    • The (prospective) provider has registered the testing in the nonpublic part of the EU database maintained by the Commission – this does not apply to critical infrastructures;
    • The (prospective) provider conducting the testing is established in the EU or has appointed a legal representative in the EU;
    • The data collected and processed for the purpose of the testing is not transferred to third countries, unless appropriate and applicable safeguards under EU law are implemented;
    • The testing lasts no longer than necessary to achieve its objectives and in any case no longer than six months, which may be extended for an additional period of six months;
    • Subjects of the testing who are vulnerable persons due to their age or physical or mental disability are appropriately protected;
    • When (prospective) providers and deployers collaborate on testing, deployers must be informed of all aspects of the testing that are relevant to their decision to participate and given relevant instructions for use. The (prospective) provider and deployer must agree on their roles and responsibilities to meet testing requirements;
    • The subjects of the testing have given their free and informed consent, which requires, among other things, that they have been given information about their rights and the nature and objectives of the testing; any possible inconvenience the testing may cause; and the conditions under which the testing will be conducted;
    • The testing is overseen by the (prospective) providers and deployers through persons who are suitably qualified and have the necessary capacity, training, and authority to perform their tasks; and
    • The predictions, recommendations, or decisions of the AI system can be effectively reversed and disregarded.
  • Withdrawal of Consent. Any subjects of the testing may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent does not affect the lawfulness or validity of activities already carried out.
  • Authorities' Checks. Market surveillance authorities can require (prospective) providers to supply information, carry out unannounced remote or on-site inspections, and perform checks on the development of the testing and the related products to ensure the safe development of testing.
  • Incident Reporting. (Prospective) providers must report to the competent national market surveillance authority any serious incident identified during the testing and adopt immediate mitigation measures or, failing that, suspend the testing until such mitigation takes place, or otherwise terminate it. The (prospective) provider must have a procedure for the prompt recall of the AI system upon such termination of the testing. Authorities must be notified accordingly.
  • Liability. The (prospective) provider remains liable for any damage caused during the testing.
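
Providers that run several real-world tests sometimes encode the mechanically checkable Article 60 conditions in an internal compliance tool. The following Python sketch is one hypothetical way to do so; the TestingPlan class, its field names, and the selection of conditions checked are assumptions made for illustration, not terminology from the AI Act, and the sketch deliberately omits the judgment-based conditions (protection of vulnerable subjects, reversibility of outputs, oversight by qualified persons), which require human and legal review.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# "No longer than six months"; approximated here as 183 days for simplicity.
MAX_DURATION = timedelta(days=183)

@dataclass
class TestingPlan:
    # All field names are hypothetical; the AI Act defines no such schema.
    start: date
    end: date
    extended: bool                      # one further six-month period is allowed
    approval_received: bool
    days_since_submission: int          # for the 30-day tacit-approval rule
    registered_in_eu_database: bool     # n/a for critical-infrastructure systems
    established_in_eu_or_has_rep: bool
    informed_consent_obtained: bool

def failing_conditions(plan: TestingPlan) -> list[str]:
    """Return the modeled Article 60 conditions that the plan fails."""
    issues = []
    allowed = MAX_DURATION * 2 if plan.extended else MAX_DURATION
    if plan.end - plan.start > allowed:
        issues.append("duration exceeds six months (plus one six-month extension)")
    # Approval may be tacit: treated as granted if the authority has not
    # responded within 30 days (national law may provide otherwise).
    if not (plan.approval_received or plan.days_since_submission > 30):
        issues.append("no express or tacit market surveillance approval")
    if not plan.registered_in_eu_database:
        issues.append("testing not registered in the EU database")
    if not plan.established_in_eu_or_has_rep:
        issues.append("no EU establishment or appointed legal representative")
    if not plan.informed_consent_obtained:
        issues.append("subjects' free and informed consent not documented")
    return issues
```

An empty return value from such a checker would indicate only that the mechanically verifiable items are in order; it could never substitute for the substantive assessments the Article requires.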

Measures in Support of SMEs and Start-Ups

The AI Act requires EU Member States to adopt four key measures to support SMEs and start-ups:

  • Provide SMEs and start-ups that have a registered office or a branch in the EU with priority access to the AI regulatory sandboxes;
  • Organize specific awareness-raising and training activities on the application of the AI Act tailored to their needs;
  • Use dedicated channels for communication with them to provide advice and respond to queries about the implementation of the AI Act; and
  • Facilitate their participation in the standardization development process.

Derogations for Specific Operators

Companies that employ fewer than 10 persons and whose annual turnover does not exceed €2 million may comply with certain elements of the quality management system in a simplified manner, subject to additional requirements regarding their size (see our blog post here for more detail on quality management systems). The Commission will develop guidelines on this.
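
Purely by way of illustration, the size thresholds above lend themselves to a trivial eligibility check. The function below is a hypothetical sketch and simplifies the full EU microenterprise definition (Recommendation 2003/361/EC), which also considers the annual balance sheet total and partner and linked enterprises.

```python
def may_use_simplified_qms(headcount: int, annual_turnover_eur: float) -> bool:
    """Illustrative only: checks just the two figures quoted in the text,
    not the full EU microenterprise test."""
    return headcount < 10 and annual_turnover_eur <= 2_000_000
```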

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
