Legal Warranties of Quality

Pursuant to the Civil Code of Québec (CCQ)1, the seller is bound to warrant the buyer that the property and its accessories are, at the time of the sale, free of latent defects which render it unfit for the use for which it was intended or which so diminish its usefulness that the buyer would not have bought it or paid so high a price if he/she had been aware of them. In a sale by a professional seller, a defect is presumed to have existed at the time of the sale if the property malfunctions or deteriorates prematurely in comparison with identical property or property of the same type.2

The presumptions set out in the CCQ often lighten the purchaser's burden of proof, and the terms defining the scope of the warranty against latent defects are broad. The application of these principles to incidents involving AI systems is therefore likely to raise a number of contentious issues, particularly with respect to causation, the nature of the uses for which the AI system was intended, and what may be considered improper use by the purchaser, which may exonerate the seller from liability in whole or in part.

While these issues are not unique to AI systems, the specific characteristics of AI systems, including their ability to learn and act autonomously and sometimes unpredictably, are likely to raise new evidentiary and legal issues. For instance, purchasers claiming damages for a latent defect related to the malfunction of an AI system who wish to invoke the presumptions provided for by the CCQ will have to prove that the incident was caused by a malfunction of the AI system and that the defect manifested itself prematurely. In all likelihood, meeting these criteria in relation to complex AI systems will require specific and detailed expert evidence. This will be especially true if the courts start applying product liability principles to software with built-in AI.

The legal warranty of quality is evaluated in relation to the use for which the property was intended. Unless the seller is aware of the buyer's particular intended use, the courts consider the "normal" use of the property. This will surely raise questions for goods incorporating complex AI systems that enable them to perform various tasks based on users' requests and the data provided to them. For example, what would be the "normal" use of a software application like ChatGPT? In general, it will be necessary to review the contractual documents, including the terms and conditions of purchase or service, to understand the applicable representations and limitations of the AI system's features, including its level of autonomy, terms of use, and appropriate maintenance requirements.

Presumed Knowledge of Latent Defects and Development Risks

While sellers of AI systems will surely seek to limit their liability by including limitation of liability clauses and making the user responsible for supervising the system's actions and correcting its errors, the legal value of such limitations will likely be challenged. Indeed, it must be recalled that under Québec law, in no case can a professional seller limit its liability for defects of which it was aware or could not have been unaware.3

In accordance with the principles established by case law in the wake of ABB v. Domtar,4 professional sellers are presumed to be aware of the defects in their products, and a seller's lack of knowledge generally constitutes a fault in itself.5 The manufacturer may rebut the presumption only by showing that it was unaware of the defect and that its ignorance was justified.6 In the case of products sold to consumers, merchants and manufacturers simply cannot claim ignorance of the defect as a defence in a latent defect action.7

To date, manufacturers and specialized professional sellers have very rarely succeeded in asserting ignorance of a defect. For AI systems used in a commercial context that are not intended for consumers, AI developers could potentially claim ignorance of a defect that is discovered after their system is released. However, since manufacturers are presumed to verify the quality of the products they put on the market, the courts will likely not easily side with manufacturers who invoke this defence.8 A very specific exception is made for a development risk that no one could have known about when the product was put on the market. In view of the novelty and complexity of AI systems, this exception could plausibly apply, but it is safe to assume that the courts will consider the steps taken by the defendant to test the system before and after its release to market before exonerating an AI developer from liability for damages arising from product or system defects. This is particularly true since AI entrepreneurs have already expressed concern about deploying AI systems that have not been sufficiently tested to ensure that the risks associated with their use have been identified and can be controlled,9 even though there is still no specific regulatory framework for AI system development and marketing activities.

Liability for the Autonomous Act of a Thing

We have discussed the obligations of developers and sellers of AI systems, but what of the liability of users? In the context of a claim against the seller of a property with a built-in AI system, the user's negligence in using or supervising the AI system may constitute a valid defence or a contributory fault that may result in shared liability.

In addition, the operator may also be liable for damages caused by a property with a built-in AI system if he/she acts as "custodian" of the property in question. Pursuant to the CCQ,10 the custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he/she proves that he/she is not at fault.

Liability for the autonomous act of a thing is subject to two specific conditions: the absence of direct human intervention in causing the injury, and the mobility or dynamism of the thing that caused the injury. Although almost all of the relevant case law concerns the autonomous act of physical things, the concept of "thing" covered by this article is broad and includes all movable, immovable, tangible and intangible property.11 As with legal warranties of quality, it is therefore possible, if the reasoning of the court in the Fortnite decision cited previously were followed, that the liability regime for the autonomous act of a thing could apply not only to AI systems embedded in physical goods but also to AI systems embedded in software.

The concept of "custodian" of a thing that contains an AI system is also likely to raise interesting questions. According to the case law, the custodian is the person who, at the time the damage was caused, had a power of supervision, direction, command and control.12 Custody of a thing is different from mere physical possession: the holder of a thing is not necessarily its custodian if he/she can exercise only limited control over it.13 In the case of a physical thing that contains an AI system, such as a self-driving car, the custodian will more obviously be the user if he/she has some control over the thing and the features of the AI system. However, where the autonomous act of software is at issue, determining the identity of the custodian of the "thing" could prove far more complex and require an assessment of the totality of the circumstances, including the level of supervision, direction, command and control held by the various actors.

We note that the Autonomous Bus and Minibus Pilot Project14 requires that the driver of an autonomous bus in motion "remain continuously attentive to events likely to affect road safety in order to be ready to intervene rapidly at any time in taking over control of the vehicle's automated system, immediately taking over the driving of the vehicle or adapting driving to the circumstances." This may suggest that users who fail to exercise due diligence or to adequately monitor an AI system's autonomous activities will generally not be able to escape their obligations by claiming that the system malfunctioned, particularly for AI systems used in risky and already highly regulated environments such as self-driving cars.

Conclusion

For the time being, at least, it appears that the courts will have to rule on civil actions involving AI systems using the existing legal framework for civil liability in Québec. Indeed, despite the expected enactment of the federal Artificial Intelligence and Data Act (AIDA), until Québec's legislature passes new laws or amends existing ones to provide specific rules regarding the civil liability of developers, sellers, operators and users of AI systems, courts will be called upon to apply and adapt the current civil liability regime to claims involving AI systems.

While existing principles of product liability set out in the CCQ and the Consumer Protection Act (CPA) should apply to disputes involving physical products with built-in AI systems without too much difficulty, the courts will likely be called upon to decide novel issues resulting from the unique characteristics of AI systems, including their ability to perform various tasks autonomously. Moreover, such disputes are likely to raise complex factual issues, including the causal link between the operation of an AI system and the damages claimed, as well as the level of care exercised by the user.

It is far more difficult to predict the legal framework that will apply in civil litigation involving AI systems embedded in software, particularly if such software were to be considered a "property" following the reasoning adopted by the Superior Court in the Fortnite decision. Until now, disputes involving software have usually been governed by the general principles of Québec civil law in matters of contractual liability. However, the Fortnite decision is a sign that courts may be open to applying the principles of product liability, including legal warranties against latent defects, the manufacturer's liability for safety defects, and the custodian's liability for the autonomous act of a property, in ruling on such disputes. As those principles were generally developed for disputes involving physical goods, such an eventuality would raise a host of new legal issues that would have to be clarified by the courts or the legislature.

Government initiatives regarding the responsibilities of AI system developers will continue to be closely monitored, particularly for uses of AI systems that involve high risks due to their possible impact on the fundamental rights of third parties (privacy, discrimination, etc.) or the safety or health of users and the public. Close attention should also be paid to the developing case law regarding the characterization of software, including AI systems used in software, as "property" within the meaning of the CCQ. In this regard, we note that the defendants' application for leave to appeal the Fortnite decision was recently denied, which means that the class action will proceed to the merits. This decision could very well inspire similar litigation involving other video games or other types of popular digital services in the years to come. In fact, an application for permission to institute a class action was filed on January 24, 2023, against Meta, Facebook and Instagram, alleging that they failed to warn Facebook and Instagram users of the risk of developing an addiction to those services.15

In the meantime, AI system developers and sellers can seek to minimize the risk of claims and litigation by clearly disclosing in their contractual documents the system's features and limitations, instructions for its use, monitoring and maintenance, and the risks associated with its use and how to guard against them, and by including appropriate limitation of liability and indemnity clauses in those documents.

Footnotes

1. Article 1726 CCQ.

2. Article 1729 CCQ.

3. Article 1728 CCQ.

4. 2007 SCC 50.

5. Deguise v. Montminy, 2014 QCCS 2672 (CanLII), paras. 1114-1116.

6. ABB v. Domtar, 2007 SCC 50, para. 69; CCI Thermal Technologies Inc. v. AXA XL (XL Catlin), 2023 QCCA 231, para. 44.

7. CPA, s. 53.

8. Imperial Tobacco Canada ltd v. Conseil québécois sur le tabac et la santé, 2019 QCCA 358 (CanLII), para. 295.

9. https://www.lapresse.ca/affaires/techno/2023-03-29/intelligence-artificielle/yoshua-bengio-et-un-millier-de-personnalites-demandent-une-pause-de-six-mois.php

10. Article 1465 CCQ.

11. Québec (Ville de) v. Équipements Emu ltée, (C.A., 2015-08-17), 2015 QCCA 1344.

12. Société d'assurances générales Northbridge v. 9180-2271 Québec inc. (Restaurant Pizzicato), 2014 QCCS 1304.

13. Pétroles Cadeko inc. v. 9166-0357 Québec inc., (C.S., 2021-08-30), 2021 QCCS 3774.

14. Autonomous Bus and Minibus Pilot Project, CQLR, c. C-24.2, r. 37.01, s. 13.1.

15. See the application for permission to institute a class action filed by Alexia Robert in the Superior Court of Québec (Court file number 500-06-01217-237).
