EU – New Technology
The European Commission has published its proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the "AI Liability Directive"). The Commission's press release stated that the purpose of the AI Liability Directive is to "improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems".
- 19 February 2020 – The European Commission published a White Paper on AI, in which it identified the specific challenges posed by AI to existing liability rules.
- 18 October 2021 – The European Commission launched a public consultation on adapting liability rules to the digital age and AI.
- 10 January 2022 – The European Commission closed the consultation and published a summary report (see our earlier post for further information).
- 28 September 2022 – The European Commission adopted two proposals to adapt liability rules to the digital age. Firstly, a proposal for the AI Liability Directive (the "Proposal"), and secondly, a proposal to modernise the existing Product Liability Directive to better address digital products and the circular economy.
- The European Commission's Proposal introduces targeted harmonisation of national liability rules for AI across Member States, with the aim of helping victims of AI-related damage obtain fair compensation.
- This follows the European Commission's White Paper on AI, which focused on challenges posed by AI to existing liability rules. The European Commission is attempting to tackle these challenges by both updating the existing Product Liability Directive and introducing the new AI Liability Directive.
- The AI Liability Directive will cover claims outside the scope of the existing Product Liability Directive, for example, cases in which damage is caused by wrongful behaviour, such as breaches of privacy or damage caused by safety issues. The European Commission gives the example that the new Directive will make it easier to obtain compensation for someone who has been discriminated against in a recruitment process involving AI technology.
- There is no concrete timeline for the implementation of this directive at this stage. The next steps will involve the Proposal being adopted by the European Parliament and Council.
What it hopes to achieve
- The European Commission states that the purpose of the AI Liability Directive is to "lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems, establishing broader protection for victims, and fostering the AI sector by increasing guarantees".
- It is proposed that the AI Liability Directive will address specific difficulties of proof linked with AI. For example, the proposal requires Member States to ensure national courts are empowered to order disclosure of relevant evidence about specific high-risk AI systems.
Who does it impact?
- The AI Liability Directive will impact both users and developers of AI systems:
- For developers of services or products based on AI, there is currently uncertainty as to whether they may be held accountable in the event of a failure of an AI system.
- For individuals or businesses who suffer damage caused by AI systems, the AI Liability Directive will facilitate the recovery of compensation and streamline the legal processes in this space.
Presumption of causality
- The proposed AI Liability Directive simplifies the legal process for victims when proving that a certain fault led to damage by alleviating the existing burden of proof.
- The AI Liability Directive will provide that where victims can show that someone was at fault for not complying with an obligation relevant to the harm caused and a causal link to the AI performance seems "reasonably likely", national courts can presume that the non-compliance caused the damage. This allows victims to benefit from the 'presumption of causality'.
- This does not preclude the liable person from rebutting the presumption, for example, by asserting that the harm was caused by another factor.
Access to relevant evidence
- The AI Liability Directive will help victims to access relevant evidence that they could not obtain under the existing liability regime.
- Victims will be able to ask national courts to order disclosure of information about high-risk AI systems, enabling them to identify the correct entity to hold liable.
- It is proposed that the disclosure will be subject to safeguards to protect sensitive information, such as trade secrets.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.