ARTICLE
29 May 2023

Legal Liability Of Artificial Intelligence

Kilinc Law & Consulting

Contributor

Kilinç Law & Consulting, established by Levent Lezgin Kilinç, currently operates in Istanbul, Izmir and London. Our firm provides services to clients in a wide range of complex matters, including Project Finance, Corporate Law, M&A, Energy Law, Dispute Resolution, Maritime Law, IP Law and International Transactions, as well as litigation of disputes.

INTRODUCTION

Alongside artificial intelligence products that have gained popularity in recent years, such as driverless cars, robot vacuums and Siri, the risks arising from the introduction of applications defined as generative artificial intelligence, such as ChatGPT, have raised questions about the legal nature and legal liability of artificial intelligence, the use of which has become part of daily life.

Artificial intelligence, often characterized as autonomous, has generated great interest and excitement in recent years, and its rapid progress and widespread use have raised a number of important questions about its legal nature. A clear understanding of the legal status, responsibilities and rights of AI systems is essential to grasping the ethical, social and legal dimensions of this technology.

The increasing complexity and capability of AI technology raises important questions about legal liability. When AI systems make mistakes, take wrong decisions or produce adverse outcomes, the questions of who is responsible and how such harm should be legally addressed give rise to challenging debates on determining the legal liability of AI. This article assesses these issues within the framework of the current debates on the legal liability of artificial intelligence.

LEGAL NATURE OF ARTIFICIAL INTELLIGENCE

Although artificial intelligence systems do not have human-like mental abilities, the fact that they can make decisions, take actions and produce results in some cases has multiplied the questions surrounding their legal liability. It is clear that, in order to evaluate the legal liability of artificial intelligence, its legal nature must first be determined.

Although many opinions on the legal nature of artificial intelligence exist in the doctrine, the predominant ones can be grouped under five main views: (i) artificial intelligence is property, (ii) artificial intelligence is a natural person, (iii) artificial intelligence is a legal person, (iv) artificial intelligence is an electronic person and (v) artificial intelligence is a work (in the copyright sense).

Considering these opinions, together with the ever-evolving capabilities of artificial intelligence and the results those capabilities can produce, the need to attribute some form of legal personality to artificial intelligence in order to determine its legal liability is undeniably important. In our view, however, existing definitions such as person, thing or work will not be sufficient. For this reason, and as stated in the European Parliament's Report dated 27.01.2017, we agree with the view that a new electronic person status, distinct from both natural and legal persons, should be created for artificial intelligence. Although it is difficult to delineate the characteristics of a type of personality that the legal world is encountering for the first time, we believe that, given the innovations brought by artificial intelligence technology, creating a legal basis through a new definition, rather than relying on definitions already in the legislation, is a necessary step toward more effective solutions to the legal problems that arise.

LEGAL LIABILITY OF ARTIFICIAL INTELLIGENCE

As noted above, it would not be practical to assess the legal liability of a technology whose legal nature has not yet been settled in the doctrine. However, recent developments, such as some countries blocking access to artificial intelligence tools out of concern that they pose a "danger" of infringing rights, have intensified the debate on how, and by whom, damages caused by artificial intelligence technologies should be compensated.

As is well known, legal responsibility for a result can only be attributed to a natural or legal person; liability cannot be attached to the conduct of entities that lack legal personality. The rapid development of artificial intelligence has therefore created a need to determine a personality for this technology, and for legal regulation under which that personality can be held liable. In addition, differences in the knowledge and skills of artificial intelligence systems change the nature of the products or services they create, which makes it even more difficult to establish a single legal norm applicable to all of them.

Although there is as yet no common view in national or international law on who should bear liability for the acts of artificial intelligence, on what basis and to what extent, this article presents the predominant views discussed in the doctrine.

a. Manufacturer's Liability Opinion

As stated above, one opinion holds that if artificial intelligence is characterized as a "product", the damage caused by it may be evaluated within the scope of the producer's liability. Under Article 2 of the Council Directive on liability for defective products ("Directive"), artificial intelligence technology can be qualified as a "product", and the same conclusion can be reached under the Product Safety and Technical Regulation Law No. 7223 ("Law Numbered 7223"), which treats intangible goods as products. In this respect, there is a clear parallel between the Directive and the Law Numbered 7223. Although the predominant opinion is that the manufacturer may be held liable for damage caused by artificial intelligence, there is for the time being no specific regulation on such liability. Since the Directive and the Law Numbered 7223 generally adopt strict liability, a causal link between the defect in the product and the damage incurred is essential to holding the manufacturer liable. Given the characteristics of the technology, it is not easy for a user to identify a defect in an artificial intelligence system; to eliminate this uncertainty, the legal nature of artificial intelligence should first be characterized as a "product", and clarity should then be provided through special provisions on liability.

b. Fault Liability Opinion

The Directive and the Law Numbered 7223 are united in the concept of strict liability, and as stated above, what matters under strict liability is that a causal link can be established between the damage and the defect. We would emphasize, however, that artificial intelligence technologies differ in character: the application of strict liability differs between high-risk artificial intelligence systems, which are characterized as autonomous, and non-high-risk systems, which are characterized as automatic.

The differences between systems, which arise from the level of sophistication of artificial intelligence, are also reflected in the draft regulation published by the European Commission in 2020 ("Regulation Proposal 2020") and the regulation draft published in 2021 ("Regulation Draft 2021"). Regulation Draft 2021 evaluates artificial intelligence systems in four categories, depending on the sector involved and the sophistication of the system: (i) unacceptable risk, (ii) high risk, (iii) limited risk and (iv) minimal risk. The risk-based approach adopted in the Artificial Intelligence Regulation Proposal dated 21.04.2021 ("Proposal") is appropriate given the growing role of artificial intelligence in daily life, but the Proposal does not clarify who will be responsible for each type of risk.

It is believed that, under the Regulation Proposal 2020, the strict liability discussed in both national and international doctrine will apply to high-risk artificial intelligence systems, while fault liability will apply to systems that are not high-risk. However, there is as yet no significant guidance on how fault liability would be integrated with the Turkish Code of Obligations No. 6098 or with the legislation of other countries. For this reason, we believe that precedent decisions rendered around the world on the legal nature and liability of artificial intelligence systems, together with the entry into force of the Proposal, will guide how the new regulations on this issue are shaped.

CONCLUSION

Artificial intelligence is rapidly developing as a technology that can display human-like abilities through advanced algorithms and big data, and this rapid development raises many legal questions.

Because legal regulation lags behind the pace at which artificial intelligence systems are growing, the questions of where artificial intelligence fits within the legal framework and how, and against whom, recourse may be sought for legal liability have become increasingly important. In the absence of a common definition and legal regulation in national and international legislation, the legal nature and liability of artificial intelligence must for now be addressed through doctrinal debate. While uncertainty remains as to how the Directive's producer's liability approach or the European Parliament's strict liability approach will be integrated into domestic law, it is clear that artificial intelligence is developing faster than expected and that the need for a legislative framework grows as its use by the public spreads.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
