15 April 2026

Turkish DPA Publishes New Guidance On “Agentic AI”

Herguner Bilgen Ucer Attorney Partnership

The Turkish Personal Data Protection Authority (the “DPA”) has published a document titled “Agentic Artificial Intelligence (Agentic AI)” (the “Guidance”) on its website, setting out its assessment of the technical features of agentic AI systems, their potential use cases, and the key risks these systems may raise from a personal data protection perspective.

The Guidance is noteworthy as it provides insight into how the DPA approaches AI systems with higher levels of autonomy from a data protection perspective.

Agentic AI Systems: Goal-Oriented and Autonomous Architectures

The Guidance describes agentic AI systems as integrated architectures capable of assessing environmental conditions, adapting to changing circumstances, and initiating actions with varying degrees of autonomy in order to achieve specific objectives. In this context, the DPA emphasizes that such systems should not be viewed merely as tools that respond to user prompts or generate content; rather, they should be understood as systems capable of executing multi-step processes, coordinating different data sources and tools, and expanding the scope of data processing throughout their operation.

Examples of such systems include AI agents that can conduct online research, analyze sources, and prepare reports based on a user-defined objective, as well as digital assistants capable of analyzing customer requests and automatically initiating actions within relevant systems.

Increased Risks Related to Inferred and Derived Data

One of the key issues highlighted in the Guidance is the capacity of agentic AI systems to generate new assessments, profiles, or predictions about individuals by combining data from multiple sources. The DPA notes that data elements that may initially appear limited in scope can, when linked with other datasets, lead to more comprehensive and, in some cases, more sensitive outcomes. This approach suggests that, in assessing agentic AI systems, the relevant question is not only whether personal data is directly input into the system, but also whether the system is capable of generating new layers of personal data by correlating different datasets.

Transparency, Explainability and Accountability

Another issue highlighted by the DPA concerns the increasing complexity of data processing activities and decision-making mechanisms in systems where multiple AI agents operate in coordination. The Guidance notes that it may not always be possible to clearly determine which data is accessed at which stage, which component generated a particular decision, or at what point human intervention may be possible. In this context, it is also emphasized that the allocation of roles and responsibilities among developers, providers, and organizations using the system may become blurred. Accordingly, the Guidance suggests that agentic AI systems should be assessed not only in terms of their technical performance but also within a more robust governance and accountability framework.

Data Accuracy and Output Reliability

The Guidance also notes that, in systems incorporating generative AI components, the possibility of producing inaccurate, incomplete, or misleading outputs may pose risks with respect to the accuracy of personal data. In particular, where AI-generated outputs are integrated into internal systems, used in assessments relating to individuals, or relied upon in decision-making processes, the generation of erroneous data may have direct data protection implications. In this regard, the DPA approaches data accuracy not merely as a technical quality issue, but as a matter directly linked to the lawfulness of data processing activities.

Risk-Based Governance, Data Security and Human Oversight

The Guidance also highlights that the ability of agentic AI systems to operate through integration with multiple data sources, tools and digital environments may render traditional data security risks more complex. In this context, it is emphasized that the risk landscape is not limited to unauthorized access or data breaches; the potential manipulation of system behavior, the unintended disclosure of sensitive information, and the consequences arising from autonomous operational capabilities should also be carefully assessed. In parallel, the DPA identifies a human-centric approach and meaningful human oversight as key elements of governance in the context of agentic AI. In particular, it stresses the importance of clearly defining in advance which data such systems may access, what types of actions they may perform, and at which stages human intervention should be possible. The DPA also underlines that a risk-based approach, together with the principles of privacy by design and privacy by default, should be incorporated into the management of agentic AI systems.
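The Guidance's call for defining in advance which data an agent may access, which actions it may perform, and where human intervention must be possible can be illustrated with a minimal sketch. The names and structure below (`AgentPolicy`, `check`, the example sources and actions) are purely hypothetical illustrations of this kind of pre-defined guardrail, not anything prescribed by the DPA or drawn from a real framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical pre-defined scope for an agentic AI system."""
    allowed_sources: set[str] = field(default_factory=set)     # data the agent may access
    allowed_actions: set[str] = field(default_factory=set)     # action types it may perform
    needs_human_review: set[str] = field(default_factory=set)  # steps gated by human sign-off

    def check(self, source: str, action: str) -> str:
        """Return 'deny', 'review', or 'allow' for a proposed agent step."""
        if source not in self.allowed_sources or action not in self.allowed_actions:
            return "deny"    # outside the pre-defined scope: block by default
        if action in self.needs_human_review:
            return "review"  # a meaningful human-oversight point
        return "allow"

# Example: a customer-service agent that may read CRM and ticketing data
# and draft replies autonomously, but may not send anything unreviewed.
policy = AgentPolicy(
    allowed_sources={"crm", "ticketing"},
    allowed_actions={"read", "draft_reply", "send_reply"},
    needs_human_review={"send_reply"},
)

print(policy.check("crm", "read"))        # allow
print(policy.check("crm", "send_reply"))  # review
print(policy.check("web", "read"))        # deny
```

The deny-by-default check mirrors the privacy-by-design and privacy-by-default posture the DPA describes: anything not expressly declared in advance is refused, and sensitive outbound actions pause for human review rather than executing autonomously.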

Implementation and Compliance Perspective

The framework outlined by the DPA suggests that companies using or preparing to deploy agentic AI solutions should regard such systems not merely as tools for innovation and efficiency, but as a distinct compliance area, one requiring attention to data inventories, legal basis assessments, access controls, logging and monitoring mechanisms, vendor relationships, human oversight, and internal governance processes. The Guidance is therefore an important reference, particularly for use cases involving multiple data sources or systems capable of supporting decision-making or performing autonomous actions, and it indicates that existing data protection frameworks may need to be reassessed throughout the lifecycle of such systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
