In our last update, we relayed reports of the final roadblocks to adoption of the EU's AI Act. The AI Act has been in development in the EU since 2017 when the European Council called for a "sense of urgency to address emerging trends including artificial intelligence..., while at the same time ensuring a high level of data protection, digital rights and ethical standards." The EU sought to gain buy-in at the "Trialogue" level – the European Parliament, the EU Commission, and the Member States – before 2024 to avoid any entanglement with the 2024 EU elections.

These concerns have apparently been resolved following extensive negotiations between the Trialogue stakeholders. The specifics of the compromise reached can be found in the Council of the European Union's memo. While we will have much more to analyze and report on for our clients over the next few weeks, this article will simply hit on a few of the highlights of the compromise.

First, some MEPs sought to ban outright, rather than merely restrict, the use of real-time facial recognition technology by law enforcement and national security apparatuses. Many feared that such technology, unless banned, would lead to a creeping Chinese-style social credit security state. The compromise clearly bans social-credit applications of real-time biometrics. On the law enforcement front, the many exceptions under which law enforcement may use real-time biometrics have been clarified and tightened. Real-time biometrics may be used to find specific victims of a crime, to prevent harm to critical infrastructure, and for the "localization or identification of a natural person for the purposes of conducting a criminal investigation," which now requires prior judicial approval.

Second, other MEPs wanted to increase the penalties for violations of the AI Act and address other governance issues. The compromise resolves these concerns by strengthening the role and authority of the EU AI Board and adjusting penalties. Specifically, the Board now comprises one representative per Member State, each serving a three-year term, with the EU Data Protection Supervisor participating as an observer. The Board will, among other things, collect and share data, facilitate harmonization across the Member States, and issue recommendations and draft regulations for the Commission. Penalties have been increased, but under a new two-tiered structure. For example, non-compliance with Article 5 (prohibited practices) subjects a company to a fine of EUR 30 million or 6% of annual turnover (think revenue), but for small market entities and startups, that percentage is cut in half to 3% of annual turnover. This structure is consistent across the different categories of violations. This two-tiered structure likely permitted the final significant compromise relating to general purpose AI tools.
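The arithmetic of the two tiers can be sketched as follows. This is an illustrative sketch only, not legal advice: it assumes a GDPR-style "whichever is higher" relationship between the fixed sum and the turnover percentage, and assumes the halving applies to the percentage cap while the EUR 30 million figure is unchanged for small entities – both assumptions, not confirmed details of the compromise text.

```python
def max_fine_eur(annual_turnover_eur: float, is_small_entity: bool) -> float:
    """Illustrative fine ceiling for an Article 5 (prohibited practices) violation.

    Figures from the compromise as described above: EUR 30M or 6% of annual
    turnover, with the percentage halved to 3% for small market entities and
    startups. The max() ("whichever is higher") reading is an assumption.
    """
    rate = 0.03 if is_small_entity else 0.06
    return max(30_000_000.0, rate * annual_turnover_eur)

# A firm with EUR 2 billion in annual turnover:
print(max_fine_eur(2_000_000_000, is_small_entity=False))  # 120000000.0 (6%)
print(max_fine_eur(2_000_000_000, is_small_entity=True))   # 60000000.0 (3%)

# For smaller turnovers, the EUR 30M fixed sum dominates under this reading:
print(max_fine_eur(100_000_000, is_small_entity=False))    # 30000000.0
```

Under this reading, the halved percentage matters most for large-turnover firms, where the percentage cap exceeds the fixed sum.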

Finally, three large and influential Member States – France, Germany and Italy – had concerns over proposals to selectively regulate so-called general purpose AI systems. The idea was that such tools provided by large firms (mostly based in the US) would be more strictly regulated, thus making room for tools provided by smaller firms (mostly based in the EU). France, Germany and Italy argued that this approach could backfire, as consumers of AI services may place more trust in more heavily regulated AI systems. Interestingly, the AI Act initially lacked specific regulations for general purpose AI systems, because such systems did not exist in 2017 when work on the AI Act began. As a result, the EU's initial approach was to regulate task-specific AIs – for example, an AI that pilots a vehicle, runs a power plant or operates a pacemaker – deemed "High Risk." This has now changed: Title IA has been added to regulate general purpose AI systems. Essentially, general purpose AI tools will have to comply with the regulations governing High-Risk systems when used as a component of such a system. Further, updated Article 4b(5) sets forth new information-provision requirements for general purpose AI systems, consistent with the AI Act's broader design as an informational disclosure and collection regime.

At first glance, the EU's compromises look reasonable, and the AI Act, which has extensive extra-territorial application, is likely to be at least as influential as the EU's GDPR has become. Many, if not most, providers of AI systems operate in the EU, have outputs that are used in the EU, or are themselves based in the EU. As the AI Act will apply broadly to them, and as the market for AI grows, these firms will naturally mold themselves, their products and their systems to the EU's policy preferences rather than the US's – especially since the US's AI efforts now lag the EU's by at least several years from a regulatory point of view.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.