Four major data protection authorities have published guidance on Agentic AI systems: KVKK (12 March 2026 - Guidance), ICO (8 January 2026 - Report), EDPS (November 2025 - Report) and AEPD (18 February 2026 - Guidance). This information note consolidates these documents within a comprehensive framework, presenting each authority's assessment elements individually and offering integrated recommendations for companies.
A. KVKK Assessment (Türkiye)
KVKK's "Agentic AI" guidance defines Agentic AI as an approach characterised by more pronounced goal-orientation, autonomy, and environmental interaction compared to traditional artificial intelligence. System operations are carried out through AI Agents, and Multi-agent Systems may be activated when the scope of tasks expands.
Principal Risks Identified by KVKK
- Purpose Limitation and Data Minimisation: The inclusion of initially unforeseen datasets, the re-use of existing data across different tasks, and the variability of data processing scope over time constitute significant risks.
- Legal Basis Assessment: In multi-step processing structures, the need may arise for continuous reassessment of the validity of the initially determined legal basis.
- Inferred and Derived Data: There exists a risk that sensitive information may be indirectly revealed through the correlation of data obtained from different sources, and that anonymised data may become re-identifiable.
- Transparency and Explainability: Tracing decision chains across multi-component structures may prove difficult, and the "black box" problem may take a more complex form.
- Accountability: The determination of role and responsibility allocation among developers, deployers, and other actors may become more challenging.
- Data Accuracy: Erroneous information originating from hallucinations may propagate in a cascading manner, and errors at early stages may be carried forward to subsequent steps.
- Security and System Resilience: Risks relating to the expansion of the attack surface and input manipulation may emerge.
Principal Measures Recommended by KVKK
- Human Oversight: It is recommended that meaningful and adequate human participation be ensured across the development, deployment, and post-deployment phases, and that an appropriate balance between autonomy levels and human oversight be established.
- Privacy by Design: The adoption of privacy by design and privacy by default approaches, together with the integration of privacy-enhancing technologies (PETs), is recommended.
- Risk Assessment: The systematic assessment of risks that may arise throughout the lifecycle of Agentic AI systems, and the conduct of Data Protection Impact Assessments (DPIAs) where appropriate, are recommended.
- Governance and Training: The updating of existing data protection and governance mechanisms, and the conduct of awareness-raising and training activities for relevant personnel, are considered important.
B. ICO Assessment (United Kingdom)
The ICO's 68-page "Tech Futures: Agentic AI" report (see also the ICO announcement page), while not constituting formal guidance, clearly sets out the regulator's priority areas. The ICO defines agentic AI as systems where large language models are integrated with tools, memory, and adaptive decision-making to complete open-ended tasks with limited human direction. The ICO continues to monitor agentic AI through its AI and Data Protection Guidance (March 2023, under review) and AI and Biometrics Strategy (June 2025).
Key Issues Highlighted by the ICO
- Controllership Chain: Determining controller and processor roles across the agentic AI supply chain, particularly in multi-vendor ecosystems, may become more challenging. The ICO emphasises that AI agents lack legal personality and that organisations remain responsible for compliance.
- Scaled Automation and ADM: The rapid automation of complex tasks may trigger the automated decision-making obligations under Article 22 of the UK GDPR.
- Special Category Data Inference: There exists a risk that AI agents may incidentally process sensitive data through inference, a situation which requires both an Article 6 lawful basis and supplementary Article 9 conditions.
- Scenario Analysis: The ICO assesses potential development paths and risk profiles of agentic AI through four different future scenarios.
- Innovation Opportunities: Positive use cases, including data protection compliant agents, privacy management agents, and information governance agents, are also addressed in the report.
ICO Action Points for Organisations
- Controller and processor roles should be contractually determined at every layer of the agentic chain.
- DPIA templates should be updated to cover increased autonomy levels and special category data inference.
- Information verification mechanisms should be established at critical decision points.
- Technical measures should be implemented to prevent the incidental processing of special category data.
C. EDPS Assessment (European Union)
In its TechSonar 2025-2026 report, the EDPS identifies agentic AI as one of six key emerging technology trends, noting that humans will increasingly assume the role of "shepherds of AI agents."
Key Issues Highlighted by the EDPS
- Preservation of Human Autonomy: The EDPS emphasises that the increase in AI autonomy must not diminish individuals' capacity to make independent choices, exercise control over their actions, and remain accountable for their decisions.
- Persistent Memory Risk: The memory structure that persists across and beyond tasks increases the risk of unexpected retention and use of personal data.
- EU AI Act Connection: In its capacity as competent authority under the EU AI Act, the EDPS assesses agentic AI from both data protection and AI regulation perspectives.
- Flexible Governance Framework: In light of the rapid development of agentic AI and the difficulty of predicting future capabilities, the establishment of flexible and adaptable governance frameworks is considered necessary.
D. AEPD Assessment (Spain)
The AEPD's 71-page "Agentic AI from the Perspective of Data Protection" guidance stands out as one of the most comprehensive documents published by a data protection authority on the subject of agentic AI. The guidance explains through concrete scenarios how existing GDPR obligations apply to agentic AI deployments.
Key Issues Highlighted by the AEPD
- Four Vulnerability Categories: The AEPD classifies vulnerabilities specific to agentic AI as follows: (i) Environmental interaction - access to internal and external data sources; (ii) Service integration - multi-service connectivity; (iii) Memory structure - working memory and management memory; (iv) Autonomy - non-repeatable behaviour.
- BYOAgentic Risk: The creation of agents by users without adequate oversight mechanisms is assessed as the agentic version of shadow AI use.
- "Rule of 2" Risk Analysis: The AEPD proposes a three-element analysis for automated decision-making risk: uncontrolled information processing, access to sensitive data, and authority to act externally. Where two of these elements are present simultaneously, a high-risk assessment is to be conducted.
- Threat Classification: Threats are addressed in two main groups: those arising from authorised processing (objective misalignment, shadow-leak, automation bias, and user profiling) and those arising from unauthorised processing (prompt injection, memory poisoning, zero-click attacks, and data exfiltration).
- Illusion of Reliability: The reliance on agents that appear efficient and consistent but whose behaviour is insufficiently understood is assessed as a significant risk.
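The AEPD's "Rule of 2" screen described above can be sketched as a simple decision rule. The class and field names below are illustrative assumptions, not terminology from the AEPD guidance:

```python
# Hypothetical sketch of the AEPD's "Rule of 2" risk screen: an agent
# deployment is flagged as high-risk when at least two of the three
# risk elements are present simultaneously. Names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentRiskProfile:
    processes_uncontrolled_information: bool  # e.g. open-ended web browsing
    accesses_sensitive_data: bool             # special category / confidential data
    acts_externally: bool                     # authority to take real-world actions

    def is_high_risk(self) -> bool:
        """Apply the 'Rule of 2': two or more elements present -> high risk."""
        elements = (
            self.processes_uncontrolled_information,
            self.accesses_sensitive_data,
            self.acts_externally,
        )
        return sum(elements) >= 2

# Example: an agent that browses the open web and can send e-mails,
# but has no access to sensitive data, still meets the threshold.
profile = AgentRiskProfile(
    processes_uncontrolled_information=True,
    accesses_sensitive_data=False,
    acts_externally=True,
)
print(profile.is_high_risk())  # True -> treat as high-risk (e.g. conduct a DPIA)
```

The point of the rule is that no single element is decisive; it is the combination of capabilities that pushes a deployment over the high-risk threshold.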
Practical Measures Recommended by the AEPD
- Appropriate governance frameworks and corporate policies should be established.
- The implementation of evidence-based assessment and contractual control mechanisms is recommended.
- The establishment of explainability mechanisms and the provision of memory control through compartmentalisation, protection, and lifecycle management are advised.
- The application of the data minimisation principle through access policies, filtering, and pseudonymisation is recommended.
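The last measure above, data minimisation through access policies, filtering, and pseudonymisation, can be illustrated with a minimal sketch. The field names, allowlist, and salt handling are assumptions made for the example, not prescriptions from the AEPD guidance:

```python
# Illustrative sketch (not from the AEPD guidance) of minimising a record
# before handing it to an agent: keep only the fields the task needs and
# pseudonymise the direct identifier with a salted hash.
import hashlib

ALLOWED_FIELDS = {"customer_id", "order_total", "region"}  # task-specific allowlist
SALT = b"rotate-and-store-this-secret-separately"          # placeholder secret

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a truncated salted hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Drop non-allowlisted fields, then pseudonymise the identifier."""
    filtered = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "customer_id" in filtered:
        filtered["customer_id"] = pseudonymise(str(filtered["customer_id"]))
    return filtered

record = {"customer_id": "C-1042", "name": "Jane Doe",
          "email": "jane@example.com", "order_total": 99.5, "region": "EU"}
print(minimise(record))  # name and email dropped; customer_id pseudonymised
```

In a real deployment the allowlist would be derived from the agent's documented purpose, and any re-identification table would be stored outside the agent's reach.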
E. Consolidated Recommendations for Companies
In light of all authorities' assessments, companies are recommended to evaluate the following measures:
Governance and Accountability
- Controller and processor roles should be clearly determined across the agentic AI supply chain (ICO, AEPD).
- Responsibilities among developers, deployers, and other actors should be contractually defined (KVKK, AEPD).
- Corporate policies should be established to address BYOAgentic (shadow agentic AI) risk (AEPD).
Human Oversight
- Meaningful human oversight should be ensured throughout the development, deployment, and post-deployment phases (KVKK).
- An appropriate balance between autonomy levels and human oversight should be established (KVKK, EDPS).
- Information verification mechanisms should be established at critical decision points (ICO).
Transparency and Explainability
- Interactions between system components should be made traceable and protective control mechanisms should be established (KVKK, ICO).
- Transparency notices should be adapted to agentic AI use cases (ICO, AEPD).
Data Protection Impact Assessment
- DPIA processes should be updated to cover increased autonomy levels, multi-agent architectures, and special category data inference (ICO, KVKK).
- Automated decision-making risk should be analysed using the AEPD's "Rule of 2" methodology (AEPD).
Security and Memory Management
- Technical measures should be implemented to prevent the incidental processing of special category data (ICO).
- Personal data within agent memory structures should be mapped, with due regard to the distinction between working and management memory (AEPD, EDPS).
- Defence mechanisms should be established against threats such as prompt injection and memory poisoning (AEPD).
Privacy by Design
- Privacy by design and privacy by default principles should be integrated into systems, and privacy-enhancing technologies (PETs) should be utilised (KVKK, AEPD).
Training and Regulatory Monitoring
- Training activities should be conducted for relevant personnel on agentic AI and personal data protection (KVKK).
- International regulatory developments and consultation processes should be closely monitored (ICO, AEPD).
F. Conclusion
The assessments of KVKK, ICO, EDPS, and AEPD deliver a shared core message: the structural complexity and increasing autonomy of agentic AI systems do not negate personal data protection obligations; on the contrary, they render the effective fulfilment of these obligations more complex and critical.
It is of considerable importance for companies to approach agentic AI not merely as a technological efficiency tool, but also from the perspective of corporate risk management and personal data protection. Preventive and guidance-oriented measures taken in this direction will contribute to the safe utilisation of the opportunities offered by agentic AI and will significantly reduce the likelihood of companies encountering potential legal and reputational risks.
Sources
- KVKK: "Etken Yapay Zeka (Agentic AI)", February 2026
- ICO: "Tech Futures: Agentic AI" Report (PDF), 8 January 2026
- ICO: "Tech Futures: Agentic AI" Announcement Page, 8 January 2026
- ICO: Guidance on AI and Data Protection (March 2023, under review)
- ICO: AI and Biometrics Strategy, June 2025
- ICO: AI and Biometrics Strategy - Plan of Action 2025/26
- ICO: Artificial Intelligence - Homepage
- EDPS: TechSonar 2025-2026: Agentic AI
- EDPS: TechSonar 2025-2026 Full Report (PDF), November 2025
- AEPD: "Agentic AI from the Perspective of Data Protection" (PDF), 18 February 2026
- CIPL: "Agentic AI: Fostering Responsible and Beneficial Development", November 2025
- WEF: "AI Agents in Action: Foundations for Evaluation and Governance", 2025
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.