As artificial intelligence (AI) continues to reshape industries and becomes integrated into everyday life, the question of how to effectively govern the risks associated with AI technologies has become an urgent legal issue. AI is increasingly embedded in products and services that consumers interact with – ranging from autonomous vehicles to medical devices to smart home technologies – raising significant concerns about the potential for harm. As AI systems become more sophisticated in the quest to achieve artificial general intelligence, they rely on multi-layered neural networks to process unstructured data, uncover hidden patterns, and engage in unsupervised learning.
AI systems' autonomy and capacity to learn, combined with the complexity of the underlying models, make their decision-making processes opaque and difficult to trace. This complexity, together with the lack of human supervision over the decision-making process and the processing of enormous volumes of data, increases the risk that AI-driven decisions may cause personal injury, property damage, or financial losses; yet these same factors make it more challenging to pinpoint the exact cause of harm and hold any party accountable. Given that AI systems evolve autonomously and may learn from vast datasets in ways that are difficult to predict, traditional product liability frameworks will need to adapt to this new reality.
Product liability laws are designed to determine responsibility when a product causes harm, but they were not originally crafted with AI in mind, and AI presents unforeseen challenges to manufacturers and regulators alike. This has led to growing concern among regulators worldwide, including in the European Union (EU), the United States, and Canada, about whether existing legal frameworks can still adequately address this emerging technology, and whether new regulations should be created to meet the specific challenges AI presents.
The emerging consensus in many jurisdictions is that organizations should be held liable for damages caused by their AI systems. However, several complex questions remain: How should liability be attributed when an AI system is autonomous and capable of evolving its decision-making over time? How can causation be traced when the outputs of AI systems may be unpredictable? What level of responsibility should be placed on AI developers and deployers to mitigate risks without stifling innovation? These questions underscore the need for legal frameworks that balance consumer protection with technological advancement.
Understanding the EU's Proposed Directives on Artificial Intelligence
The EU has taken a significant step toward addressing these challenges with two key legal proposals introduced in September 2022. The first is a reform of the 1985 Product Liability Directive, which expands the scope of regulated products to include AI systems, software, and digital products. Under this reform, a strict liability regime would apply, meaning that victims need only prove that the AI product was defective, that they suffered damage (such as injury, property damage, or data corruption), and that the defect directly caused that damage. Notably, the directive will have extraterritorial application, meaning that victims harmed by AI systems developed outside the EU can still seek compensation within the EU. Another key aspect of this reform is the imposition of ongoing responsibilities on developers to monitor and maintain AI systems after deployment, ensuring their safety and continued functionality as they evolve and learn.
The second proposal is the AI Liability Directive, which focuses on fault-based liability and introduces measures designed to simplify the legal process for victims seeking compensation for AI-induced harm. One of its most significant provisions is the rebuttable presumption of causality, which allows courts to presume a causal link between noncompliance with an applicable law and the harm caused by an AI system, shifting the burden of proof onto the defendant. For example, if an organization fails to comply with the provisions of the EU Artificial Intelligence Act (discussed below), courts could presume that this noncompliance caused the resulting harm, and the defendant would need to rebut that presumption. Additionally, the directive empowers courts to compel the disclosure of technical information about high-risk AI systems, including development data, compliance documentation, and testing results, which could provide crucial evidence in legal proceedings.
These two proposals, currently under negotiation, aim to create a more transparent and accountable legal framework for AI, providing potential victims of AI-related damage with clear pathways to redress. By operating in parallel, the two directives offer complementary routes for addressing AI risks under the traditional strict liability and fault-based regimes.
EU AI Act: A Risk-Based Approach to Governance
As for substantive law regulating AI (which can serve as the basis for the causality presumption under the proposed AI Liability Directive), the European Union's Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or in a third country.
The EU AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with a risk-based classification system and a tiered approach to governance, two of the Act's most innovative features. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are banned outright. Examples include social scoring by governments, biometric categorization used to infer sensitive attributes about individuals, and the use of internet or CCTV footage for facial recognition purposes.
High-risk AI systems, which include systems used in areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols. The transparency requirement means that providers must clearly communicate how their AI operates, including its purpose, decision-making processes, and data sources. Furthermore, users must be informed when they are interacting with an AI system. The goal is to ensure accountability, particularly for applications that significantly impact people's lives, such as AI-driven hiring tools or autonomous decision-making systems in public services.
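For organizations taking stock of their AI portfolios, this tiered structure can be approximated as a simple triage mapping. The Python sketch below is purely illustrative: the use-case labels and obligation summaries are assumptions made for the example, and the Act itself, not any internal mapping, determines the applicable tier and duties.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict transparency, data governance, and safety duties
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # no new obligations

# Hypothetical triage map an organization might use when screening its AI
# portfolio against the Act's categories (illustrative labels, not legal advice).
EXAMPLE_TRIAGE = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "facial recognition built from scraped CCTV footage": RiskTier.UNACCEPTABLE,
    "AI-driven hiring tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very coarse summary of the compliance posture for each tier."""
    return {
        RiskTier.UNACCEPTABLE: "Do not deploy in the EU.",
        RiskTier.HIGH: "Document purpose, data sources, and decision logic; monitor after deployment.",
        RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
        RiskTier.MINIMAL: "No AI Act-specific obligations.",
    }[tier]

for use_case, tier in EXAMPLE_TRIAGE.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```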
One of the most significant aspects of the new framework is its emphasis on ethical AI use. Developers and businesses must ensure that their AI systems respect fundamental rights, adhere to nondiscrimination principles, and protect personal data. The EU is prioritizing the concept of human-centric AI, meaning systems should support and enhance human capabilities rather than replace or undermine them.
Regulation of General Purpose AI Systems
General purpose AI systems, or GPAI, are designed to perform a wide variety of tasks: they can multitask, scale to address more complex or more specific challenges, transfer learning across domains, and automate a range of tasks that traditionally required human input. An example of such a system is OpenAI's GPT series. GPAI is contrasted with narrow artificial intelligence, which addresses a single, narrowly defined task, such as a voice assistant or an obstacle avoidance system.
The AI Act imposes transparency obligations and certain restrictions on the use of GPAI models. For example, systems intended to directly interact with humans must be clearly marked as such, unless this is obvious under the circumstances.
Providers of all GPAI models will be required to:
- Maintain technical documentation of the model, including its training and testing process and evaluation results
- Draw up instructions for third-party use, i.e., information and documentation to supply to downstream providers that intend to integrate the model into their own AI systems
- Establish policies to comply with EU copyright law, including text and data mining opt-outs
- Provide the AI Office with a detailed summary of the content used to train the GPAI model
All providers of GPAI models that present a systemic risk (whether open or closed source) must conduct model evaluations, perform adversarial testing, track and report serious incidents, and ensure cybersecurity protections. GPAI models present systemic risks when they have "high impact capabilities," i.e., when the cumulative amount of compute used for their training is greater than 10²⁵ floating point operations (FLOPs).
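To make the compute threshold concrete, the short Python sketch below estimates training compute with the common "6 × parameters × training tokens" heuristic for dense models (an approximation assumed here for illustration, not a formula taken from the AI Act) and compares the result against the 10²⁵ FLOP line.

```python
# Back-of-the-envelope check against the AI Act's systemic-risk threshold of
# 10**25 training FLOPs. The "6 * parameters * tokens" estimate is a common
# approximation for dense transformer training compute, not a figure from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute: roughly 6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the 10**25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens
# lands at roughly 6e25 FLOPs, above the threshold.
print(presumed_systemic_risk(1e12, 1e13))  # True
```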
Providers of free and open license GPAI models need only comply with copyright law and publish the training data summary, unless their models present a systemic risk.
Until European harmonized standards are published, all GPAI model providers may demonstrate compliance with their obligations by voluntarily adhering to a code of practice; compliance with those standards, once available, will give rise to a presumption of conformity. Providers that do not adhere to a code of practice must demonstrate alternative adequate means of compliance, subject to European Commission approval.
Applicability Timelines
Organizations will have approximately two years to adjust to these new regulations, with some provisions taking effect earlier: 6 months for the prohibitions; 12 months for the governance rules and the obligations for general-purpose AI models; and 36 months for the rules for AI systems embedded into regulated products. In the summer of 2024, the European Commission also launched a consultation on a Code of Practice for providers of GPAI models that will address the requirements for transparency, copyright-related rules, and risk management. The Code of Practice is expected to be finalized by April 2025. Additionally, in early 2024, the European Commission established the new AI Office, endowed with exclusive jurisdiction to enforce the AI Act's provisions related to GPAI and the power to request technical documentation to assess compliance with the law. The AI Office also oversees the AI Act's enforcement and implementation in coordination with the member states.
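As a rough planning aid, the sketch below derives approximate application dates by adding the staggered offsets described above to the August 1, 2024 entry-into-force date. The Act itself fixes the exact application dates, so the computed values are indicative only.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party package: python-dateutil

# Approximate application dates derived from the entry-into-force date and the
# offsets described above; treat these as indicative, not authoritative.
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_IN_MONTHS = {
    "Prohibitions on unacceptable-risk systems": 6,
    "Governance rules and GPAI obligations": 12,
    "Rules for AI embedded in regulated products": 36,
}

for milestone, months in MILESTONES_IN_MONTHS.items():
    print(f"{milestone}: around {ENTRY_INTO_FORCE + relativedelta(months=months):%B %Y}")
```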
Impact on U.S. Businesses
The extraterritorial application of the AI Act, the proposed AI Liability Directive, and the reform of the 1985 Product Liability Directive will have widespread implications for American businesses operating in Europe. Because these laws reach beyond the EU's borders to businesses such as American firms that sell or use AI-enabled products in Europe, compliance will require significant operational and legal adjustments for U.S. companies. Those adjustments will touch on several key areas, including product development, data management, corporate governance, and transparency, with the goal of reducing risk, ensuring compliance, and protecting both consumers and organizations from potential liabilities.
While the new regulations are strict, regulators emphasize that they are not designed to stifle innovation. The EU has introduced several initiatives to support research and development in the AI space, including regulatory "sandboxes" that provide companies with a controlled environment to test new AI technologies before full-scale deployment while ensuring compliance with EU regulations.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.