Legal Liability for AI Decision-Making

Güleryüz Partners

With recent technological developments, Artificial Intelligence ["AI"] and related technologies are being deployed by governments and businesses alike across a wide spectrum of sectors.1 As applications of AI multiply in nearly every aspect of society, they carry an accompanying risk that is difficult to measure. In this article, we focus on the possible legal ramifications and liability risks associated with AI decision-making.

While there is no widespread agreement on a set definition of AI, it can most simply be defined as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".2 Today, AI programs are utilised in a plethora of areas to varying degrees. Given enough data and a well-designed initial program, an AI can be used to identify faces, predict future criminality, or carry out seemingly more innocuous tasks such as driving a car or providing the results of an internet search.3

Risks Associated with AI Usage and Liability

Given its widespread usage in nearly every aspect of our society, AI, especially data-driven decision-making and profiling algorithms, poses a real and serious risk to the exercise of legally recognised rights. Hostile and malicious applications of AI technologies raise concerns regarding freedom of expression, the right to privacy, and the prohibition of discrimination,4 while unintentional errors in everyday AI decisions may cause serious bodily or monetary harm in contexts such as autonomous vehicles and medical devices.5

Taking into account that AI decisions are not made in an informational vacuum, but exist in relation to many other real-world and technological phenomena, the question of legal liability for AI actions becomes increasingly difficult to answer.6 In the world of AI and big data, there is an endless stream of data exchange and an immensely complex chain of relationships between machines and humans, which in turn makes it that much harder to identify the party responsible for any given risk that materialises. While specific regulation of AI usage is certain to come in the near future, the essential ethical and legal questions about how we allocate liability should be asked and answered before taking legislative action.7

As scholar Peter Cane noted:

"Responsibility is not just a function of the quality of will manifested in conduct, nor the quality of that conduct. It is also concerned with the interest we all share in security of person and property, and with the way resources and risks are distributed in society. Responsibility is a relational phenomenon."8

Therefore, it is impossible to answer the question of liability allocation in a straightforward and standardised way, as the answer will inevitably be tied to the particular case at hand.9 With this in mind, as is true of other, more conventional conduct that society penalises, it would be a mistake to expect a single method of liability allocation to apply to each and every risk associated with the usage of AI and related technologies.10

Can AI Systems be Held Liable for Their Own Decisions?

The first question that comes to mind for many of us regarding responsibility arising from the use of artificial intelligence technologies is whether these systems, which appear to make decisions on their own without any human influence, should be held accountable for the consequences of their own behaviour.

Persons that can be held legally responsible, "legal subjects", primarily consist of natural persons, that is, human beings. However, as a result of legal developments since the Middle Ages, some structures that are not actual people can also be recognised as legal subjects. According to some scholars, the recognition of public and private institutions as "legal entities" is one of the cornerstones of the development of European law.11

Today, all around the world, legal entities as well as natural persons are recognised as legal subjects. However, these legal entities are in fact the sum of people who act consciously for a certain purpose and can explain the purposes of their actions. By contrast, the artificial intelligence systems in use today generally fall under the category of "narrow artificial intelligence"; that is, they are systems equipped with the ability to make decisions in a very specific area. Since it cannot be said that these systems act consciously and independently, they can only be legal objects, not legal subjects.12 While the discussion may arise anew if and when a conscious, general artificial intelligence system is produced, today it does not seem possible to hold artificial intelligence systems responsible for their own decisions.

What is the Legal Nature of an AI?

Now that we know that artificial intelligence systems cannot be held responsible for their own actions and decisions and can only be qualified as legal objects, the question of who will be held liable for possible damages arises. An important aspect of the question lies in the fact that artificial intelligence systems pass through the hands of many people, such as software developers, producers, and data providers, before they come into use.

In seeking an answer to the question of which of these parties will be held liable for possible damages, the legal nature of artificial intelligence as a legal object should first be addressed; responsibility can then be assigned according to the circumstances in which the damage occurs.

Today, while the legal nature of artificial intelligence is a point of disagreement in European law, the dominant view is that artificial intelligence systems in general use are in fact a "product" offered to end users and other manufacturers.13 Although there are contrary opinions, artificial intelligence systems can be regarded as products within the meaning of Article 2 of Product Liability Directive 85/374, adopted by the Council of the European Communities [the "Directive"], and can therefore be assessed under the "product liability" regime of the Directive.14

Amid these discussions on the applicability of the Directive to artificial intelligence, the European Commission published a Regulation Proposal in April 2021 to establish harmonised rules on artificial intelligence. The Proposal addresses many problems related to artificial intelligence but does not contain any provision on liability.15 A further instrument on the question of liability arising from artificial intelligence and related technologies is expected at the end of 2021 or the beginning of 2022.16

In Turkish law, the concepts of "products" and "producers" are not fully regulated under the Turkish Code of Obligations. "Intangible goods", which arguably include AI technologies, were first included as "goods" through a 2003 amendment to the Consumer Protection Act No. 4077 and the Regulation on Liability for Damages Caused by Defective Goods. After the abolition of that law and a long legislative gap, the Product Safety and Technical Regulations Act No. 7223 [the "PLTRA"] entered into force on 12 March 2021. Under this act, intangible goods, and by extension AI systems, are classified as "products".17

Who Can be Held Liable for AI Decisions?

As explained above, AI is classified as a product under both the Directive and the newly adopted PLTRA in Turkish law. Liability for possible damages should therefore be assessed in accordance with this classification.

The Directive and the PLTRA are compatible in many respects. Most importantly, both instruments stipulate that the manufacturer and the importer, together with the distributor as a secondarily responsible party, are jointly liable for any damage caused by a defect in the product, whether suffered by the user or by non-user third parties.18 Another important feature of both regulations is that manufacturers, importers, and distributors are held liable on the basis of strict liability. For strict liability to arise, the manufacturer does not need to be at fault and can be held liable regardless of their conscious decisions or their awareness of the defect.19

Since the Directive and the PLTRA adopt the principle of strict liability, for the manufacturer to be held responsible it is sufficient to prove that the product is defective, that damage occurred, and that there is a causal link between the defect in the product and the damage. In Turkish law, it is argued that the burden of proof regarding the defectiveness of the product should be interpreted as narrowly as possible. Even in decisions adopted before the PLTRA, the Court of Cassation ruled that, "Due to the complex nature of manufacturing, it will be impossible for the injured person to prove some issues, therefore the presumption of defect should be accepted as proof."20 In other words, the damage caused by the product was in itself treated as proof that the product was defective. Since artificial intelligence systems in particular consist of very complex software, algorithms, and databases, it may be almost impossible for the injured person to prove that the artificial intelligence that caused the damage is defective. While the PLTRA has not yet been applied broadly due to its recency, this line of reasoning would serve its application well.

How Can Manufacturers Avoid Liability?

While manufacturers are, as a rule, strictly liable for damages caused by artificial intelligence systems, there are also situations in which this liability is eliminated under both the PLTRA and the Directive. The two regulations, largely compatible as to the establishment of liability, differ significantly in their exceptions to the rule. According to the PLTRA, the manufacturer is relieved of liability where they prove that they did not put the product on the market, that the defect was caused by the distributor's or the user's intervention, or that the defect resulted from manufacturing the product in compliance with technical regulations or other mandatory requirements. In addition, even where the defect itself is not caused by the user, if the damage is partly attributable to the user, the manufacturer's liability may be partially or completely removed.21

According to the Directive, in addition to grounds similar to those listed in the PLTRA, the manufacturer is also relieved of liability if "the defect could not have been discovered given the state of scientific and technical knowledge at the time the product was put into circulation". This provision, the most commonly invoked ground for relief from liability, is frequently criticised. Since this defence is particularly available for products using new digital technologies such as artificial intelligence, many believe that a special regulation for such products is necessary.22

Can AI Decisions Result in Criminal Responsibility?

Civil liability for material and moral damages caused by artificial intelligence decisions can largely be resolved through interpretation by applying product liability law, as explained above. However, it is almost inevitable that a criminal investigation will accompany the damages, especially if the decision causes death or injury.

The concept of "intent" forms the basis of criminal law and focuses on the voluntariness of the action. Intention-based liability arises only when (i) the agent of the action made a free and voluntary decision to act in the way that generates liability, and (ii) the agent had knowledge of their action and its harmful consequences. In Turkish law, as is frequently the case in other jurisdictions, criminal liability can, as a rule, arise only from intention. However, some crimes, such as manslaughter and reckless injury, can be committed without intent.23

Without a doubt, the actions of human beings who develop and deploy an AI program with the intent of inflicting monetary or bodily harm on others would constitute intent, in turn making criminal prosecution, as well as civil claims for damages, possible.24 However, crimes of negligence are the ones commonly observed in AI-related fields. An unfortunate event in 2018 proved an instructive case study when an AI-powered self-driving car struck and fatally injured a woman in the USA. In the aftermath, it was the human operator in the vehicle, not the manufacturer, who was prosecuted for negligent homicide.25 Therefore, in contrast to civil liability, criminal liability does not automatically attach to the manufacturer, and every case should be examined separately.

Conclusion: What Should Manufacturers and Users be Aware of?

In light of the foregoing, there are not yet concrete and specific regulations on the determination of civil and criminal liability arising from artificial intelligence decisions. However, existing legal institutions can provide guidance in this regard and offer solutions to possible problems. Manufacturers, who are the primary beneficiaries of artificial intelligence technologies, are also the primary bearers of liability. They should ensure that the artificial intelligence-based software and systems they put on the market will not make decisions that may cause damage, as they will be primarily responsible for the damages caused by an erroneous artificial intelligence decision even if they are not at fault. Users should likewise take the utmost care and avoid actions that may be interpreted as user fault in a possible accident, especially when using advanced systems, such as self-driving cars, where an error may cause great damage.

Footnotes

1. Forbes, How Is AI Used In Healthcare - 5 Powerful Real-World Examples That Show The Latest Advances, https://www.forbes.com/sites/bernardmarr/2018/07/27/how-is-ai-used-in-healthcare-5-powerful-real-world-examples-that-show-the-latest-advances/?sh=532205675dfb

2. Britannica, Artificial Intelligence, https://www.britannica.com/technology/artificial-intelligence

3. GIUFFRIDA, Iria, Liability for AI Decision-Making: Some Legal and Ethical Considerations, Fordham Law Review 88 (2019) p. 441.

4. YEUNG, Karen, Responsibility and AI, Council of Europe Study DGI(2019)05, pp. 28-43.

5. DEMPSEY, James X., Artificial Intelligence: An Introduction to the Legal, Policy and Ethical Issues, Berkeley Center for Law & Technology, 2020, pp. 10-14.

6. GIUFFRIDA, p. 442.

7. GIUFFRIDA, p. 456.

8. CANE, Peter, Responsibility in Law and Morality, 2002, p. 109.

9. YEUNG, p. 56.

10. YEUNG, p. 56.

11. HILDEBRANDT, Mireille, Human Law and Computer Law: Comparative Perspectives, 2013, p. 37.

12. HILDEBRANDT, p. 37.

13. SARI, Onur, Yapay Zekânın Sebep Olduğu Zararlardan Doğan Sorumluluk [Liability Arising from Damages Caused by Artificial Intelligence], TBB Journal 2020 (147).

14. Directive 85/374/EEC, 25 July 1985.

15. Commission Proposal 2021/0106 (COD), 22 April 2021.

16. European Commission, A European approach to Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, last accessed on 10 May 2021.

17. KANIŞLI, Erhan, Ürün Güvenliği ve Teknik Düzenlemeler Kanunu (ÜGTDK) Uyarınca Üreticinin Sorumluluğu [Liability of the Producer under the Product Safety and Technical Regulations Law (ÜGTDK)], İstanbul Law Journal 78/3 (December 2020), p. 1432.

18. KANIŞLI, p. 1422.

19. CANE, p. 82.

20. Court of Cassation 4th Legal Division, 1994/6256 E., 1995/2596 K., 27.03.1995 T.

21. PLTRA, Art. 21.

22. KANIŞLI, pp. 1423-1425.

23. Turkish Criminal Code, Art. 22.

24. Turkish Criminal Code, Art. 22.

25. BBC, Uber's self-driving operator charged over fatal crash, https://www.bbc.com/news/technology-54175359

Originally published 09 September 2022

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
