ARTICLE
28 October 2025

The 'Great Reversal' In European AI Governance: The Withdrawal Of The AILD

GZ
George Z. Georgiou & Associates LLC


As artificial intelligence rapidly evolves, governments worldwide must balance fostering innovation with managing the associated risks. The European Union has emerged as a frontrunner in this regulatory landscape, pledging to adopt a 'human-centric' approach to AI and placing particular emphasis on one of AI's most complex legal challenges: civil liability. The question of who bears responsibility for AI-induced damage has become urgent, as AI systems grow increasingly complex and opaque, making it exceptionally difficult for victims to prove liability.

Recognizing this critical gap in legal protection, the EU sought to develop a comprehensive regulatory response. With the EU AI Act in force from 1 August 2024, attention turned to the adoption of an AI Liability Directive (AILD). The AILD proposal was meant to create a unified liability framework for AI technologies in the EU. Unlike the AI Act, however, the AILD met a very different fate, culminating in its recent withdrawal.

THE AILD

In July 2025, the European Parliament's Policy Department for Justice, Civil Liberties and Institutional Affairs published a study critically analyzing civil liability for AI systems. The study set out to explore the many challenges individuals face in identifying the liable entity, while critically analyzing the EU's evolving approach to regulating civil liability for AI. It fundamentally critiqued the AILD's effectiveness as a legal framework, characterizing the proposal as procedurally complex and substantively vague, risking legal uncertainty and undermining the directive's practical implementation.

As the study notes, the AILD proposal recognized that emerging technologies like AI challenge traditional fault-based liability law, which struggles to accommodate AI's autonomy, opacity, and unpredictability. The proposal introduced procedural mechanisms, such as presumptions of causation and enhanced evidence rules, to support claimants, and would have empowered national courts to order disclosure of evidence regarding AI systems suspected of causing harm. It did not, however, seek to harmonise substantive liability rules: it would not have determined which AI systems carry a high-risk profile and therefore fall within the scope of strict liability.

The AILD was meant to work alongside the Product Liability Directive (PLD), which establishes a strict liability regime for defective products, eliminating the need for claimants to prove fault or negligence on the part of the producer or manufacturer. The AILD, by contrast, was limited to non-contractual fault-based liability claims for AI-induced damage (i.e. claims in negligence). Consequently, a victim seeking compensation under strict liability or contractual liability mechanisms could not benefit from the alleviated burden of proof the AILD offered. This limited scope delivered little substantive harmonisation and fell short of resolving the central issue of attributing legal responsibility in AI-related incidents: without identifying a legally responsible party, the AILD offered limited practical solutions for the very problem it sought to address.

CRITICISM

The withdrawal drew mixed responses in the EU. Certain stakeholders welcomed the decision, labelling the proposal "unnecessary and premature", while others continued to support the harmonisation of AI liability, arguing that dedicated liability rules would prevent fragmentation and strengthen trust in AI. Civil society organizations, for instance, expressed concern over the anticipated withdrawal and advocated for a stronger AI liability framework. Others countered that existing EU legislation sufficiently protects users and ensures the safety of AI on the market. The Commission ultimately explained its decision to withdraw the AILD, in the Annex, as being due to "no foreseeable agreement" among Member States.

MOVING FORWARD

Nevertheless, the absence of dedicated rules does not necessarily leave victims unprotected: Member States already have well-established fault-based liability systems which can, in theory, respond to novel harms such as those arising from AI. These systems, while imperfect, can adapt to new challenges, and national courts will be called upon to interpret and apply liability laws in innovative ways.

This, however, comes at the cost of efficiency, predictability, and harmonisation. Establishing fault under traditional tort law can be highly complex and can lead to costly litigation, especially in the AI context. The fragmentation of tort law across Member States exacerbates these challenges, creating substantial disparities in compensation claims depending on national evidentiary rules, definitions of essential elements, and the applicable liability regimes.

Ultimately, the goal should be to strike a balance between offering victims fair compensation and fostering innovation. Achieving this balance requires a careful, context-sensitive approach that recognizes the complexity of liability law while addressing the unique challenges posed by AI.

Although the withdrawal of the AILD marks a setback in regulatory progress, the development of a suitable AI liability framework remains an evolving process. For now, however, the Commission's priority appears to be ensuring that the EU is not left behind in the pursuit of AI innovation.

CYPRUS

Cyprus transposed the original EU Product Liability Directive (85/374/EEC) into national law through the Defective Products (Civil Liability) Law of 1995 (Law 105(I)/1995), most recently amended in 2002. Following the withdrawal of the AILD, the revised PLD now represents the primary regulatory mechanism through which Member States can address AI-related harm. The directive's expanded definition of "product" explicitly encompasses software and AI systems, thereby providing a strict liability framework for victims of AI-induced damage. Member States are required to transpose the new Product Liability Directive (Directive (EU) 2024/2853) into national law by 9 December 2026; however, there is currently no available information on Cyprus's legislative intentions, preparatory measures, or timeline for implementing these reforms. It remains to be seen how Cyprus will address this legislative gap in the coming years.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
