1. Introduction
Legal systems have historically been built upon a human-centric conception of liability grounded in fault, causation, and intent. By contrast, artificial intelligence models operating at varying levels of autonomy—from decision-support systems categorized as soft AI, to hard AI approaches capable of optimizing their own objectives without human intervention, to generative models that can independently produce content—complicate the question of whether harmful outcomes should be attributed solely to human conduct or to the independent functioning of these algorithmic structures. These differing categories introduce new debates, both technical and legal, regarding the allocation and determination of liability.
Different legal systems attempt to formulate different answers to this question. The European Union, with its risk-based approach, distributes liability among the actors who develop, supply, and deploy AI models, whereas in the United States, supervisory and transparency obligations are placed at the forefront. What both systems have in common is the view that artificial intelligence is not an independent legal subject, but rather a technical extension of human decision-making.
Under Turkish law, existing rules are likewise based on principles of intent and fault, making it impossible to attribute liability directly to an AI system. However, the rapid evolution of technology increasingly necessitates a reinterpretation of existing legal institutions and requires new assessments in the fields of tort and criminal liability within contemporary legal practice.
2. Legal Liability of Artificial Intelligence
2.1 General Principles under Turkish Law
Under Article 49 of the Turkish Code of Obligations, any person who unlawfully and culpably causes damage to another through a wrongful act is obliged to compensate for that damage. Within this framework, liability is premised on the voluntary conduct of the actor. Therefore, artificial intelligence models, which, under current definitions, lack intent and legal personality, cannot be considered direct tortfeasors under the existing legal framework.
From a legal standpoint, artificial intelligence currently functions solely as an instrument; responsibility for any harm it causes rests with the person who controls, operates, or develops it.1 At present, Article 66 on employer liability and Article 71 on risk liability under the Turkish Code of Obligations emerge as the most applicable provisions for assessing AI-related damages. In particular, where autonomous systems operate as part of an enterprise's activities, any resulting harm must be attributed to the owner of the enterprise.2
Although there is no direct precedent regarding artificial intelligence under Turkish law, the assessment of liability is likely to be carried out by analogy within the framework of these provisions. In the context of Article 66 of the Turkish Code of Obligations, an AI system may function as an auxiliary person employed within the operation of an enterprise; accordingly, the system's actions may be attributed to the person who bears the duty of supervision and oversight within that organizational structure. As for risk liability, the concept of a significantly hazardous enterprise activity regulated under Article 71 can, by analogy, be applied to the high-risk operational characteristics of autonomous systems. In such a scenario, damages arising from the operation of artificial intelligence would be imposed on the owner of the enterprise, irrespective of fault.3
Within this framework, liability under Turkish law is not attributed directly to artificial intelligence itself; rather, it is based on the harm arising from activities carried out under human control and supervision. Although there is not yet a specific statutory regulation or judicial precedent, the existing provisions indicate that liability related to artificial intelligence is assessed indirectly and within a human-centered evaluative structure.
2.2 Debates on Legal Personality
Under Turkish law, legal personality is a status reserved exclusively for natural persons and for legal entities recognized by statute. Legal capacity is linked to the notion of being capable of holding rights and obligations. The capacity to act, however, requires abilities such as expressing intent, directing one's own conduct, and foreseeing the consequences of one's actions—inherently human competencies. For this reason, artificial intelligence models, which lack the capacity to determine their own actions on the basis of intent or consciousness, cannot be classified as natural or legal persons within the current legal framework.
Artificial intelligence models are, within the current technical and normative framework, regarded solely as instruments in the eyes of the law. These models are not capable of holding rights or obligations on their own and can produce legal effects only through the individuals who program, operate, or utilize them.4 Owing to the fundamental structure of the person–property distinction in the Turkish legal system, granting legal personality to artificial intelligence would disrupt this foundational separation and could lead to unpredictable consequences in areas such as asset ownership, representation, liability, and sanctions.5
In Roman law, slaves were classified legally as "res" despite being capable of engaging in voluntary conduct in practice; nevertheless, liability was always directed toward the patrimony of the master. This structure bears a conceptual resemblance to the contemporary treatment of artificial intelligence as an instrument rather than an autonomous actor.6 For this reason, the prevailing view in the doctrine is that artificial intelligence should not be regarded as a legal subject, but rather as a tool operating under human supervision and control.
The concept of "electronic personality," raised in the 2017 Report with Recommendations to the Commission on Civil Law Rules on Robotics of the European Parliament,7 was later not adopted in the 2020 assessment published by the European Commission. It was emphasized that such a status would be incompatible with the existing legal structure and could complicate the determination of liability relationships. In Turkish scholarship, a parallel view is maintained: the recognition of legal personality for artificial intelligence is considered premature, and even impossible, from the standpoint of legal technique; moreover, even if such a status were deemed theoretically conceivable, it would undermine the human-centered system of liability that forms the core of the legal order.
3. Criminal Liability of Artificial Intelligence
Under criminal law, a perpetrator is, by definition, a human being possessing freedom of will. The status of the perpetrator cannot be attributed to autonomous systems such as artificial intelligence within the framework of the Turkish Penal Code. Accordingly, it is not legally possible under the current system to impose direct criminal liability on an AI system for harm it causes.
That said, negligence or recklessness liability may arise for individuals who fail to take necessary safety measures despite being aware of the foreseeable risks of an artificial intelligence model, who do not remedy algorithmic defects, or who neglect their duty of supervision. In this context, criminal responsibility is grounded in the breach of the obligation to assess the system's foreseeable risks and to implement the precautions required to prevent harm.
In the doctrine, it has been suggested that the concept of objective attribution may serve a functional role in determining criminal liability in such cases. Objective attribution examines whether the harmful outcome can be objectively imputed to the actor's conduct and thereby assists in defining the boundaries within which the consequences produced by artificial intelligence models may be attributed to a human actor.
This approach is particularly employed in cases assessed under negligence and in forms of liability arising from omission, as it serves to evaluate whether the software developer, manufacturer, or user has taken adequate precautions and to delineate the boundaries of foreseeable risks. In situations where the technical autonomy of artificial intelligence weakens the causal link, this framework enables a discussion of whether the outcome can still be normatively attributed to a human actor.
In conclusion, artificial intelligence is not yet recognized as a perpetrator under criminal law; however, in cases where these systems cause harm, liability is assessed based on human duties of supervision, prevention, and oversight.
4. Liability of Artificial Intelligence in Comparative Law
4.1 Regulations of the European Union
The European Union approaches the liability regime for artificial intelligence within a holistic structure that differentiates responsibility according to (i) the level of autonomy of the system, (ii) the applicable risk category, and (iii) the degree of human intervention involved.
The 2019 "Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies" issued by the European Commission8 emphasized that traditional fault-based principles alone are insufficient to ensure fair compensation for AI-related harm. It underscored the need for a combined application of fault-based and strict liability principles. The report also stated that granting legal personality to artificial intelligence is unnecessary and that liability should be allocated among those who control the relevant risks and derive economic benefit from the operation of such systems.
The Artificial Intelligence Act, proposed in 2021, classifies AI systems into "unacceptable," "high," "limited," and "minimal" risk categories and imposes obligations on high-risk systems, including human oversight, record-keeping, data quality requirements, algorithmic transparency, and safety monitoring.9 This regulatory framework places explicit accountability on both the providers that place AI systems on the market and the enterprises that deploy them.
The 2025 European Commission proposal for a Digital Omnibus on AI does not establish a new liability regime, but it reinforces the actor-based framework already embedded in the AI Act. Under this structure, responsibility is functionally distributed among developers, providers, distributors, and users, creating a practical "chain of responsibility."10 The EU maintains its position that AI systems have no independent legal personality; accountability remains with the human or corporate actors who design, operate, or deploy such systems.
4.2 United States
In the United States, liability for artificial intelligence is regulated differently across the states. The first legislative step was taken with the autonomous vehicle law enacted in 2011 in Nevada. This framework recognizes manufacturer liability only for defects arising during the production phase and excludes damages resulting from modifications made by third parties.11
The proposed Algorithmic Accountability Act would impose obligations on organizations that use high-risk automated decision-making systems, requiring them to conduct impact assessments and undergo independent audits.12 The bill does not grant individuals a direct right of action; instead, violations of these obligations are treated as "deceptive practices" subject to oversight by the Federal Trade Commission.13
The U.S. approach, unlike the public-law-oriented liability framework in Europe, is structured around market supervision. Liability is constrained by criteria such as the technical conformity of the system, data integrity, and institutional transparency; within this framework, responsibility is directed not toward artificial intelligence itself but toward the enterprises that develop or operate the system.14
4.3 Other Regulatory Approaches and Practices
In South Korea, the 2020 Autonomous (Driverless) Vehicles Act15 provides that damages caused by autonomous vehicles are assessed under the strict liability of the vehicle owner. This framework is based on the principle that risk and benefit should be allocated among human actors, even when no direct human fault is present in accidents occurring in driverless mode.
Similar to the risk liability approach under Article 71 of the Turkish Code of Obligations, liability for harm is imposed on the operator and the manufacturer, irrespective of fault.16
In Singapore, the national AI governance framework updated in 2022 imposes duties of care and oversight on developers and operators with respect to harm that artificial intelligence systems may cause, while maintaining the requirement of human supervision in high-risk domains.17
In China, the regulations on algorithmic decision-making models that entered into force in 2022 assign liability for harm arising from automated decision processes to the organization operating the models. These rules impose obligations on platforms to ensure transparency, implement appropriate security measures, and maintain the ability to intervene in and control algorithmic outputs.18
These examples demonstrate that, across different legal systems, artificial intelligence is still assessed within the framework of human-centered liability principles, and that effective legal protection can be provided without granting AI an independent legal personality.
5. Conclusion and Evaluations
Although artificial intelligence technologies have the potential to transform established legal concepts, the prevailing trend in current regulatory frameworks is to preserve a human-centered liability approach. In general, foreign legal systems have not recognized artificial intelligence as an independent legal person; instead, liability is assigned to the human actors who develop, operate, or benefit from such systems.
In Turkish law, similarly, artificial intelligence is not recognized as a direct tortfeasor, and liability is assessed on the basis of human supervision and control obligations within the framework of tort and criminal law principles. Although there is no explicit statutory regulation, the general provisions—particularly Articles 66 and 71 of the Turkish Code of Obligations—currently provide a sufficiently workable basis for evaluating such liability.
Therefore, discussions regarding the recognition of artificial intelligence as a legal person have yet to find a concrete response in either international or national law. The European Parliament's 2017 proposal on "electronic personality" was quickly abandoned, reflecting the view that artificial intelligence, given its current technical capacity, does not approach the status of a natural or legal person and can only be considered an extension of human will. The person–property distinction in the Turkish legal system likewise indicates that granting personality to artificial intelligence would create unforeseeable consequences in the areas of representation, property, liability, and criminal law.
In conclusion, as in international regulatory frameworks, artificial intelligence is regarded in Turkish law not as an independent legal subject but as a tool operating within the sphere of human activity. For the purposes of legal and criminal liability, the decisive criterion is not the system's level of autonomy, but the degree of control exercised by the person who operates or benefits from it. This approach demonstrates that human intent remains central in the assessment of AI-driven acts and shows that existing liability mechanisms, when adapted to contemporary technological contexts, may provide effective legal protection.
However, given the increasing autonomy of artificial intelligence and the complexity of its decision-making processes, the absence of a fully established liability framework specific to these technologies points to a significant gap. In terms of both foreseeability and legal certainty, it has therefore become imperative to re-examine the liability regime for artificial intelligence in a manner that is clear, consistent, and technology-compatible.
Footnotes
1 Berk Kapancı, "Özel Hukuk Perspektifinden Bir Değerlendirme: Yapay Zekâ ve Haksız Fiil Sorumluluğu," in Yapay Zekâ ve Hukuk, 2021, p. 158.
2 Osman Gazi Güçlütürk and Tuğçe Kadıoğlu, "Yapay Zekânın Hukukî ve Cezaî Sorumluluğu," in Yapay Zekâ ve Hukuk, 2021, pp. 53–54.
3 Berk Kapancı, "Özel Hukuk Perspektifinden Bir Değerlendirme: Yapay Zekâ ve Haksız Fiil Sorumluluğu," in Yapay Zekâ ve Hukuk, 2021, pp. 158–170.
4 Serkan Seyhan, Yapay Zekâ Teknolojileri Kapsamında İdarenin Sorumluluğu, 2023, pp. 84–90.
5 Yiğitcan Çankaya, Yapay Zekânın İş İlişkisine Etkileri, 2024, pp. 77–80.
6 Talya Deibel, "Back to (for) the Future: AI and the Dualism of Persona and Res in Roman Law," European Journal of Law and Technology 12, no. 2 (2021).
7 European Parliament, Committee on Legal Affairs, Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (A8-0005/2017, 27 January 2017).
8 European Commission, Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies (Brussels, 2019), 14–18.
9 European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final (Brussels, 2021), arts. 5–10.
10 European Commission, Proposal for a Digital Omnibus on AI (Brussels, 2025).
11 Nevada Revised Statutes §482A.030 (Autonomous Vehicles Law, enacted June 17, 2011); Bryant Walker Smith, "Automated Driving and Product Liability," Michigan State Law Review 1 (2017): 15–20.
12 Algorithmic Accountability Act of 2022, H.R. 6580, 117th Congress (2022).
13 Federal Trade Commission, "FTC Policy Statement on Enforcement Related to Automated Decision Systems," Washington, D.C., April 2021.
14 Ryan Calo, "Artificial Intelligence Policy: A Primer and Roadmap," UC Davis Law Review 51, no. 2 (2017): 405–412; Andrew D. Selbst, "An Institutional View of Algorithmic Impact Assessments," Harvard Journal of Law & Technology 35, no. 2 (2022): 563–570.
15 Autonomous Vehicles Act, Act No. 17313, Republic of Korea, July 28, 2020.
16 Jeong-Ho Kim, "Legal Liability for Autonomous Vehicles in Korea," Korean Journal of Law and Technology 15, no. 1 (2021): 92–95; OECD, Responsible Innovation in Autonomous Mobility: Global Policy Perspectives (Paris: OECD, 2022), 52–55.
17 Personal Data Protection Commission (Singapore), Model AI Governance Framework, 2nd ed., 2022.
18 Cyberspace Administration of China, Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, effective 1 March 2022.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.