On December 8, 2023, European Union policymakers brokered a deal on a broad law to regulate the development and use of artificial intelligence (AI) across the bloc.

The legislation, known as the Artificial Intelligence Act, or AI Act, must still be voted on by the European Parliament; the vote is expected by early 2024, with the law taking effect two years after passage. Penalties for organizations that violate the AI Act could be severe, with fines of up to 7 percent of global sales.

Although the text may be subject to additional modification as it is finalized, once passed, the AI Act will be the first comprehensive law in the world regulating AI. It takes what some view as a commonsense "risk-based approach" to regulating AI: the higher a tool's risk, the stricter the rules governing it.

Quick Hits

  • Under the proposed AI Act, AI systems presenting only limited risk would be subject to light transparency obligations, such as disclosing that content is AI-generated.
  • High-risk AI systems would be permitted, but subject to requirements, including a "fundamental rights impact assessment" before deployment of the tool in the EU market. High-risk AI systems include applications related to transport, education, employment, and welfare.
  • AI systems deemed to pose an unacceptable risk would be banned from the EU. Such systems include those used for cognitive behavioral manipulation and emotion recognition in the workplace.

Earlier in the legislative process, the EU Parliament's Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection adopted a draft negotiating mandate on May 11, 2023.

Currently, many employers use or are considering using AI systems in the workplace in ways that, with some exceptions, would be deemed high-risk under the proposed law. For example, the AI Act would identify as "high-risk" AI systems intended to be used:

  • "for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates"; and
  • "to make decisions on promotion and termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics and to monitor and evaluate performance and behavior of persons in such relationships."

The policymakers drafting the EU AI Act have been explicit that they expect and hope the act will serve as a framework for additional regulation throughout the world. It comes on the heels of a recent executive order issued by the Biden administration seeking agency action on AI in the United States; a recent New York City law setting forth auditing and disclosure requirements for AI tools used in employment; draft rules under consideration by the California Privacy Protection Agency; and the formation of legislative study groups across the United States. It remains to be seen if and how the EU AI Act will impact these and other U.S. AI regulatory developments.

Employers using AI tools, and particularly those with cross-border operations, may wish to continue to follow these developments and evaluate proactive compliance efforts as they deploy these systems.