Introduction

Adopting new technologies requires societal trust. Victims must therefore be able to claim effective compensation if a new technology inflicts damage on them despite all the preventative safety rules implemented to avoid such damage. Without that trust, uptake and innovation are hampered. The features of digital technologies, and of Artificial Intelligence ("AI") in particular, challenge the application of the traditional liability regime, governed by the 1985 Product Liability Directive ("PLD") and national liability rules, and may create legal uncertainty for businesses and consumers. Liability regimes in the internal market must be adapted to the challenges that arise in the age of ubiquitous interconnectivity and AI. They should take into account the impact of global value chains and the transition to a circular economy, which is necessary to reach environmental goals and in which it will increasingly be possible to extend the life of materials and to upgrade and repair products and their components. In light of these objectives, the Commission has adopted two Proposals: one revising the Product Liability Directive ("the Revised PLD") and another introducing an extra-contractual civil liability regime for AI systems ("the AI Liability Directive").

Both proposals aim to ameliorate the position of the injured person. They strike a balance between, on the one hand, the protection of consumers and, on the other hand, innovation: rebuttable presumptions make it easier to obtain compensation, while guarantees are laid down for new technologies, especially AI. While the AI Liability Directive creates a non-contractual, fault-based civil law claim for compensation of damage inflicted on natural persons and legal entities by (the failure to produce) an output of an AI system, the Revised PLD creates a claim for material losses sustained by natural persons in a number of circumstances as a result of new and traditional products.

This contribution is divided into three main sections; it serves as an introduction to the proposals and identifies, in a non-exhaustive manner, several key issues. Firstly, it examines the challenges which corrode the fault-based pillar of the current liability system, a system built on the assumption of static products whose use is predictable for their owner or user. The distinctive features of AI-related products and services, such as autonomy, lack of transparency, data dependency and self-learning abilities, challenge the application of existing liability rules and requirements. Secondly, AI is a game-changing technology that provides economic and societal benefits across the entire spectrum of industries and society, but may also pose certain risks to its users and affected individuals. The sector-specific liability rules that the AI Liability Directive intends to implement will be analysed in light of these risks. Finally, victims who suffer damage to their health or property because of new technologies should have access to the same compensation as victims of traditional products. This contribution will examine whether the Revised PLD succeeds in its goal of equalising compensation regardless of the technology behind the product.

Although the Revised PLD is broad in scope, encompassing both traditional and non-traditional products and services, this contribution focuses on AI systems, software, and data, in view of the more limited scope of the AI Liability Directive. The contribution relies on Belgian law, since extra-contractual liability is to a great extent still regulated under national law.

Revising the Product Liability Directive for the Digital Age and the Circular Economy

As AI and Internet-of-Things (IoT) products become more and more autonomous, their behaviour becomes difficult to predict. A product could change during its life cycle as a consequence of corrupt data sets feeding the algorithm or of software updates, obscuring the traceability and explainability of the decision-making process. Software updates may introduce bugs or loopholes, leaving the system exposed to hacking. Apart from data sets and software updates, cybersecurity risks often stem from interactions with other products and services, which directly or indirectly compromise the safety of the product.

The autonomous nature of IoT devices is not the sole cause of the problem. We may understand what IoT does, but not how it does it. The technological complexity of the devices makes it difficult to assess potential safety and liability issues. IoT is fundamentally data-driven, and complexities can arise at the stages of initial data collection, processing, and actuation. This complexity is intensified by the interdependency of the devices. Billions of eyes, ears and hands will execute actions computers have decided upon and will then again be seen, heard, and measured by other computers, leading to further computed actions. Interaction with other devices, software and data streams makes it difficult to determine where a defect has occurred, even after harm has been clearly established.

The features of IoT and AI can make it difficult not only to identify the potentially liable person but also to prove that person's fault or the defect of a product. IoT and AI make it harder to link potentially problematic decisions to the involvement of such systems and to allocate responsibilities between the different economic operators in the supply chain of AI systems. The involvement of different actors in the value chain - such as the developers of the software/algorithm, the producer of the hardware, the owners/keepers of the AI product, the suppliers of data, public authorities, and the users of the product - makes it difficult for victims to understand where things may have gone wrong in the AI supply chain. This informational asymmetry obstructs them from gathering the evidence necessary to build a case, from determining who will eventually be held liable and from deciding whom to target when damage is inflicted.

Existing causative liability models work well when machine functions can be traced back to human design, programming, and knowledge. Proving that some hardware defect caused damage to a person is difficult enough, let alone establishing that the underlying cause was a flawed algorithm. It is even more difficult if the algorithm suspected of causing harm has been developed or modified by the AI system itself using machine learning and/or deep learning techniques, while being fuelled by external data it has collected since the start of its operation. The effects of these ‘black box' algorithms can be tested, but the user cannot understand their inner workings. Furthermore, not all faults will be intentional in nature; some may consist of an omission rather than an action. For example, users may fail to intervene in time in the autonomous operation of an AI system, and developers may fail to provide safety precautions, updates, and adequate monitoring. It is, therefore, becoming increasingly difficult for victims to identify such technologies as even a possible source of harm, and to determine why and how they have caused it.

The AI Liability Directive

The AI Liability Directive lays down uniform requirements for certain aspects of non-contractual civil liability for damage caused by an AI system. The proposal targets a specific aspect of a liability claim, i.e. causality, through provisions on the disclosure of evidence, and gives claimants the benefit of a rebuttable presumption of non-compliance and a rebuttable presumption of a causal link in the case of fault.

Disclosure of evidence and rebuttable presumption of non-compliance

A court may, upon the request of a (potential) claimant, order the disclosure of relevant evidence about a specific high-risk AI system that is suspected of having caused damage. Such requests for evidence may be addressed to the provider of an AI system, a person subject to the provider's obligations or its user. Where a defendant fails to disclose or preserve evidence at its disposal as ordered by a national court in a claim for damages, non-compliance with a relevant duty of care is rebuttably presumed.

The AI Liability Directive uses the concept of ‘duty of care' on several occasions and defines it as ‘a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognised at national or Union law level, including life, physical integrity, property and the protection of fundamental rights'. It sets the baseline of how a reasonable person should act in a specific situation, which also ensures the safe operation of AI systems to prevent damage to recognised legal interests. Although the notion of duty of care still relies upon national law to determine the additional modalities under which a violation could arise, EU law seemingly intends to elevate this principle to the cornerstone of cybersecurity. The 2nd EU Cybersecurity Strategy, the NIS Directive and its proposed revision, the proposal for a Directive on the resilience of critical entities and the proposal for a new Cyber Resilience Act (CRA) all contain minimum technical and organisational measures which would have to be considered in the assessment of the duty of care.

A rebuttable presumption of non-compliance with the duty of care to disclose or preserve evidence is in line with the 2019 report of the New Technologies Formation Expert Group, which advocates that victims should be entitled to a facilitation of proof where a particular technology increases the difficulty of proving the existence of an element of liability beyond what can reasonably be expected. This enables claimants to substantiate a non-contractual fault-based civil law claim for damages allegedly caused by high-risk AI systems. Nonetheless, the position of the claimant remains precarious, since he is left facing the most problematic aspect of AI: data. The provider of the information could meet his obligation by producing enormous amounts of information which the victim then has to interpret and analyse.

Rebuttable presumption of a causal link in the case of fault

The (potential) claimant is provided with an effective basis for claiming compensation where the fault consists of a lack of compliance with a duty of care under Union or national law. This counteracts the difficulties victims face in identifying technologies, including black-box algorithms, as a possible source of harm, and in determining why and how they have caused it. A rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system, or the failure of the AI system to produce an output, applies if three conditions are met: (i) proof of the defendant's fault (for example, consisting in non-compliance with a duty of care), (ii) a reasonable likelihood that the fault has influenced the output produced by the AI system (or the failure to produce one) and (iii) the incurrence of damage. A further distinction is made for claims brought against providers, persons subject to the latter's obligations, or users of high-risk AI systems.

Regarding claims against a provider of high-risk AI systems or a person subject to the latter's obligations, the requirement of fault is met when those actors have failed to comply with several requirements laid down in the proposed regulation for an AI Act ("the AI Act"). A provider, for example, only commits a fault when the AI system was not designed and developed in a way that: (i) its training models are developed with relevant, representative, free of errors and complete datasets (Art. 10.2-10.4 AI Act), (ii) it meets the transparency requirements (Art. 13 AI Act), (iii) it allows for effective oversight by natural persons during the period of use (Art. 14 AI Act), or (iv) it achieves an appropriate level of accuracy, robustness and cybersecurity (Art. 15 and 16(a) AI Act). The requirement of fault is notably met when the claimant proves that the user of a high-risk AI system: (i) did not comply with its obligation to use or monitor the AI system in accordance with the accompanying instructions of use or, where appropriate, to suspend or interrupt its use (Art. 29 AI Act), or (ii) exposed the AI system to input data under its control which is not relevant in view of the system's intended purpose (Art. 29.3 AI Act).

However, the AI Liability Directive is unclear on the relationship between the notions of ‘fault' and ‘duty of care', especially in Article 4, which makes a distinction based on the qualification of the defendant. We may reasonably assume that the requirements of the AI Act for a provider of high-risk AI systems, a person subject to the latter's obligations or a user shall be interpreted as obligations of means. In that case, a specific duty of care following from these provisions would first have to be determined, followed by an evaluation of whether it has been breached.

In addition to the general right for the defendant to rebut the presumption of causality, the applicability of the presumption of causality hinges upon the risk level of the AI system. Where the defendant demonstrates that sufficient evidence and expertise are reasonably accessible – through logging requirements and documents – for the claimant to prove the causal link, the presumption of causality for high-risk AI systems does not apply. Of course, national courts will have to determine what constitutes reasonable access for the claimant. A case is currently pending before the CJEU (C-579/21) to determine whether log data constitute personal data and whether data subjects have a right of access to log data kept by the controller. If answered in the affirmative, this would imply that the right of access under Art. 15 GDPR would first have to be exercised by the (potential) claimant in order to enjoy the presumption of causality.
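
By way of illustration, the sketch below shows one way a provider could keep per-decision records so that evidence about a specific output is ‘reasonably accessible'. It is a minimal, hypothetical Python sketch: the predict() stub, field names and file path are the author's assumptions and do not reflect the AI Act's actual logging or technical documentation specifications.

    # Illustrative sketch only: a hypothetical provider-side audit log for a
    # high-risk AI system. The predict() stub, fields and file path are made up.
    import json, hashlib, datetime

    LOG_PATH = "ai_decision_log.jsonl"  # hypothetical append-only log file

    def predict(features):
        # placeholder for the actual model call of the high-risk AI system
        return {"decision": "reject", "score": 0.42}

    def logged_predict(features, model_version):
        output = predict(features)
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            # hash the input rather than storing raw (possibly personal) data
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            "output": output,
        }
        with open(LOG_PATH, "a") as f:  # append one JSON record per decision
            f.write(json.dumps(record) + "\n")
        return output

    logged_predict({"age": 47, "income": 32000}, model_version="1.3.0")

Such records would not by themselves settle the legal questions above, but they indicate the kind of material a court could order to be disclosed or preserved.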

In the context of low-risk AI systems, the presumption of causality only applies where the national court considers it excessively difficult for the claimant to prove the causal link. The claimant should neither be required to explain the characteristics of the AI system concerned, such as the autonomy and opacity of (black-box) algorithms, nor how these characteristics make it harder to establish the causal link. Both the exception and the limitation to the presumption of causality should encourage defendants to fight black-box algorithms by complying with the documentation and record-keeping requirements set out in the AI Act.

Safeguards and remaining challenges

To avoid creating a claims culture and to maintain a balance between the parties involved, the AI Liability Directive contains several mitigating safeguards. A court will only order the disclosure of evidence if a potential claimant has first asked a provider, a person subject to the latter's obligations or a user to disclose relevant evidence at its disposal about a specific high-risk AI system that is suspected of having caused damage. The potential claimant can only benefit from the disclosure of evidence when that request has been refused. Moreover, the potential claimant's request should be supported by facts and evidence sufficient to establish the plausibility of the contemplated claim for damages.

A claimant, by contrast, should first have undertaken all proportionate attempts to gather the relevant evidence from the defendant. The disclosure and preservation of evidence is also limited to what is necessary and proportionate to support a (potential) claim for damages. This aims to ensure proportionality and prevent blanket requests. In addition, the proposal emphasises the protection of trade secrets and other confidential information, and requires procedural remedies to be in place in respect of such orders. For example, the disclosure of information relating to an AI system that produced a wrongful health assessment should take account of the doctor-patient relationship and medical confidentiality.

Some challenges remain, most notably regarding the lack of clarity of certain terms, the scope of application and the liable parties, which affects legal certainty and causes fragmentation. While the CJEU should aid national courts in interpreting these concepts, the AI Liability Directive should take a clear stance on its relationship with the newly Revised PLD. For example, whereas both a provider and a manufacturer develop the AI system and eventually place it on the market or put it into service, it is unclear how the two notions differ (the latter under the Revised PLD). It is regrettable that the EU Commission has not yet explicitly taken a stance on the inclusion of the ‘(back- and front-end) operators', which are among the most important actors in the AI supply chain and perform functions similar to those of manufacturers and providers.

The Revised PLD

The Revised PLD aims to address some of the implementation challenges raised by new technologies and the increased role that software plays in products and services. The proposal lays down common rules on the liability of economic operators for damage suffered by natural persons caused by defective products. The broadened scope and the legal presumptions offered to victims of a defective product counteract the difficulties in proving the product's defectiveness and the causal link.

Defective products and components of economic operators causing damage

The features of digital technology push the boundaries of what is to be considered a ‘product' or a ‘service'. The PLD of 1985 has always suffered from its unclear stance on the inclusion of software, because software can be integrated into a product but may also be supplied separately as a service. Software has now been explicitly deemed to be a product, as have digital manufacturing files. The latter is important in the automation industry, as such a file contains the functional information necessary to produce a tangible item by enabling the automated control of machinery or tools, such as drills, mills and 3D printers. For example, liability can now be attributed not only to the seller of the 3D printer or the seller of the printing material but also to the designer of the Computer-Aided Design (CAD) file. This is a much-needed addition in the heavily automated industry, certainly in the absence of a manufacturer acting as an intermediary between the creator of the digital manufacturing file and the final consumer.

While the assessment of a ‘defect' still relies on the safety which the public at large is entitled to expect, the non-exhaustive list of circumstances to be considered in this assessment has been expanded to cater for non-traditional products, listing, for example, self-learning capabilities and product safety and cybersecurity requirements. Interoperability defects are also included, for example where an AI system (e.g. a self-driving vehicle) that works fine on its own turns out to be unable to communicate with other (AI) systems (e.g. smart traffic lights) that also function properly on their own. Furthermore, the presentation of the product may provide an opportunity to reduce liability through appropriate warnings and user information. It remains to be seen how technically detailed or accessible such information must be to exclude a finding of a ‘defect'.

The notion of ‘producer' has also been revised in the Revised PLD to include, amongst others, manufacturers of components and providers of a related service. These economic operators must secure not only the physical product but also the digital layer surrounding it, in particular for cybersecurity reasons. IoT products are now covered, since components include ‘digital services integrated or interconnected with a product in such a way that the absence of the service would prevent the product from performing its functions'. This will make IoT manufacturers more prone to liability, as they often produce low-cost products with high security risks. Providers of related services, such as data streams feeding the algorithm of an IoT product, are also included. It remains to be seen, however, which digital services will fall within the scope of ‘related services'. In addition to the manufacturer of the product, the claimant can now hold the ‘manufacturer' of the data and online platform providers - under the Digital Services Act (DSA) - liable for the resulting damage. While the list of liable parties is broader than in the PLD, it is still based on the same layered approach.

The Revised PLD also expands the notion of compensable damage of natural persons to include the loss or corruption of data. For example, if an AI system such as an automated curator of a government database or an advanced fintech platform were to delete, contaminate, modify, encrypt, or leak data, injured persons with a factual interest in the data must be compensated. Privacy and information security are not the only challenges posed by new technologies. IoT can interact with objects in the physical world through physical devices or actuators. Their actions and the consequences thereof are not necessarily limited to a digital environment. IoT products can have a physical impact, potentially causing material or physical harm. For example, what if a digital pill wrongfully analyses the health status of a subject by neglecting important biochemical and physiological information? What if smart fire or carbon dioxide alarms do not function in an emergency? The Revised PLD finally answers these questions by including death or personal injury, including medically recognised harm to psychological health. Harm to, or destruction of, any property is also compensated, except where it concerns (i) the defective product itself, (ii) a product damaged by a defective component of that product or (iii) property used exclusively for professional purposes. Consequently, damage caused by an IoT device that fails to detect the deterioration of an industrial machine, leading to downtime of the production line, will not be compensated.

As a result of the transition from a linear to a circular economy, sustainable ways of production that prolong the functionality of products and components allow products to be modified. When a product is substantially modified outside the control of the original manufacturer, the injured person is now able to hold the person that made the substantial modification liable in the same manner as the manufacturer of the modified product. Whether or not a modification is substantial is determined according to criteria set out in relevant EU and national safety legislation, such as modifications that change the original intended functions or affect the product's compliance with applicable safety requirements. It remains to be seen what can be considered a substantial modification of data and how this relates to the GDPR. Must a principle relating to the processing of personal data be violated, or does this also cover the use of data for a purpose other than that for which it was initially collected, in accordance with the requirements for further processing?

Control of the manufacturer is central to the Revised PLD. It not only determines whether a product is defective, but also determines when a party can be held liable and whether it can rely on an exemption from liability. The moment of placing on the market or putting into service is normally the moment at which a product leaves the control of the manufacturer, while for distributors it is the moment when they make the product available on the market. However, since digital technologies allow manufacturers to exercise control beyond that moment, manufacturers should remain liable for defectiveness that comes into being after that moment as a result of software or related services within their control. Such software or related services should be considered within the manufacturer's control where they are supplied by that manufacturer or where that manufacturer authorises them or otherwise influences their supply by a third party. This includes machine-learning algorithms and (a lack of) upgrades or updates to address cybersecurity vulnerabilities and maintain the product's safety. Nevertheless, ‘control of the manufacturer' remains an unproven and contested concept. Its relationship with data streams subject to substantial modification, for example, is still unclear. In such cases, the economic operator's control may be limited and non-exclusive where the product's operation requires data provided by third parties or collected from the environment, and depends on self-learning processes and personalising settings chosen by the user. Furthermore, the combined contributions of separate parties complicate the assessment of control.

Disclosure of evidence and rebuttable legal presumptions

Proving that an AI system caused damage is extremely difficult. It is even harder to prove that a defect caused the damage. The limited predictability, coupled with the lack of transparency stemming from the algorithms' autonomous learning capabilities, may hinder the establishment of a causal link with the damage. The Revised PLD contains a provision on the disclosure of relevant evidence and a presumption of causality, comparable to the AI Liability Directive. Failure by the defendant to disclose requested evidence, when presented with sufficient evidence to support the plausibility of the claim for compensation, will, unlike under the AI Liability Directive, not result in a presumption of non-compliance with the duty of care but in a presumption of defectiveness. Defectiveness will also be presumed when the claimant provides evidence that the product does not comply with mandatory safety requirements set in EU or national law, or when the claimant establishes that the damage was caused by an ‘obvious malfunction' of the product during normal use or under ordinary circumstances. The presumption of causality applies if the product is defective and the damage caused is of a kind typically consistent with the defect in question.

General presumptions of defectiveness and/or causation apply when the claimant faces excessive difficulties in providing evidence due to its technical or scientific complexity. The claimant must still demonstrate that the product contributed to the damage and that defectiveness and/or causation is likely. Given the presumably high threshold of such technical or scientific complexity, national courts will likely apply these general presumptions in the context of AI systems.

Defences available to economic operators

Existing causative liability models work well when machine functions can be traced back to humans. In machine-learning systems this is usually very difficult to achieve, especially since computer scientists are often unable to determine how or why (on a humanly explicable basis) a machine-learning system has made a particular decision. The current state of the art does not yet provide for systems that self-report their decisions. Economic operators could be forced to build explanation systems into their AI solutions where decisions are likely to have a significant regulatory or human impact, such as in healthcare or financial services. When such practices become part of the objective state of scientific and technical knowledge, explanation systems will become a precondition to the state-of-the-art defence of Article 10 Revised PLD.
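
To make the idea of an ‘explanation system' concrete, the sketch below applies a common post-hoc technique, permutation importance, to an otherwise opaque model in order to indicate which inputs most influence its decisions. It is a minimal illustration in Python using scikit-learn, with synthetic data and hypothetical feature names; it is not suggested that this technique alone would meet any regulatory or evidentiary standard.

    # Illustrative sketch only: post-hoc explanation of an opaque model via
    # permutation importance. Data and feature names are synthetic/hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["income", "age", "debt_ratio", "tenure"]  # hypothetical labels

    model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

    # Shuffle each feature in turn and measure the drop in accuracy: the larger
    # the drop, the more the model's decisions depend on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, importance in sorted(zip(feature_names, result.importances_mean),
                                   key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {importance:.3f}")

Such feature-level summaries only approximate a model's reasoning, which is precisely why the adequacy of explanation systems is likely to be assessed against the evolving state of scientific and technical knowledge.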

Defences allowing economic operators to escape liability have been widened in parallel with the scope of the Revised PLD. The notion of putting products into circulation now extends to the period during which the product remains within the control of the manufacturer, both for the state-of-the-art defence and for the later-defect defence. Substantial modifications of a product outside the control of the original manufacturer are solely attributable to the ‘modifier', to the extent that the defectiveness that caused the damage relates to a part of the product affected by the modification. Manufacturers of defective components are exempt from liability where the defectiveness is attributable to the design of the product into which the component was integrated or to the instructions given by the manufacturer of that product. In this respect, it must be noted that two or more economic operators liable for the same damage can be held liable jointly and severally.

Conclusion

Both the AI Liability Directive and the Revised PLD aim to ameliorate the position of the injured person, striking a balance between the protection of consumers and innovation: rebuttable presumptions facilitate compensation, while guarantees are laid down for new technologies, especially AI. The AI Liability Directive creates a non-contractual, fault-based civil law claim for compensation of damage inflicted on natural persons and legal entities by (the failure to produce) an output of an AI system, whereas the Revised PLD creates a claim for material losses sustained by natural persons in a number of circumstances as a result of new and traditional products.

A number of questions that have not been addressed in this contribution remain open, such as whether unforeseen deviations in the decision-making process of AI systems with self-learning capabilities can be characterised as defects or as authorised modifications of the product by the manufacturer.

Companies active in the field of new technologies, ubiquitous interconnectivity and AI applications will need to transform their business for the digital age and the circular economy with full knowledge of these risks and potential liability issues. Navigating the parallel liability regimes may prove difficult. They should not await the implementation of the proposed legislation and would do well, amongst other things, to perform due diligence on their supply chain.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.