17 March 2026

AEPD Publishes Guidance On Agentic Artificial Intelligence From A Data Protection Perspective

A&O Shearman


On February 18, 2026, the Spanish supervisory authority (the AEPD) published guidance on Agentic Artificial Intelligence systems (the Guidance).

Amongst other things, the Guidance addresses the following:

Regulatory compliance

The Guidance notes that determination of regulatory responsibilities becomes more complex when an AI agent acts autonomously. The roles of controller and processor require specific analysis, especially when agentic AI systems access third-party services or when the AI agent itself is a service provided by another entity. The controller must design and document data flows, identifying for each system the third parties involved and their data protection role.

The Guidance also addresses transparency, legal basis, records of processing activities, the effective exercise of data subject rights, automated decision-making, data protection impact assessments, data protection by design and by default, and international transfers.

The Guidance suggests that controllers should apply the "Rule of 2", which establishes that a system should never combine the following three risk factors simultaneously: (i) processing uncontrolled input, (ii) accessing sensitive information, and (iii) performing autonomous actions.
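The "Rule of 2" can be read as a simple invariant over an agent's configuration: at most two of the three risk factors may be present at once. As a purely illustrative sketch (not taken from the Guidance; all names are hypothetical), such a check might look like:

```python
# Illustrative sketch only: a configuration check for the "Rule of 2".
# Field and function names are hypothetical, not from the Guidance.
from dataclasses import dataclass


@dataclass
class AgentConfig:
    processes_uncontrolled_input: bool   # e.g. reads arbitrary web content
    accesses_sensitive_data: bool        # e.g. can query personal data stores
    performs_autonomous_actions: bool    # e.g. acts without human approval


def satisfies_rule_of_2(cfg: AgentConfig) -> bool:
    """Return True if at most two of the three risk factors are present."""
    factors = [
        cfg.processes_uncontrolled_input,
        cfg.accesses_sensitive_data,
        cfg.performs_autonomous_actions,
    ]
    return sum(factors) <= 2
```

In practice such a check would sit in a deployment-review or governance pipeline, flagging any agent whose configuration combines all three factors.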

Specific vulnerabilities

Agentic AI systems present vulnerabilities that derive from their inherent characteristics, and the Guidance identifies four categories:

  • Agentic AI interaction with the environment means the agentic AI may access both internal and external data sources, increasing the risk of unauthorised data access or disclosure;
  • Agentic AI service integration capabilities allow AI agents to integrate multiple services, and the ease of uncontrolled deployment creates a "BYOAgentic" (Build Your Own Agentic) problem where users may deploy AI agents without adequate oversight;
  • Agentic AI systems have both working memory (for task execution) and management memory (for long-term learning), both of which may contain personal data and require appropriate controls; and
  • AI agent autonomy creates challenges for transparency, task planning, and results in non-repeatable behaviour that complicates auditing and verification.
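On the memory point, one way to keep working memory and long-term management memory under appropriate control is to compartmentalise them and give each entry its own retention period. The following is an illustrative sketch under those assumptions (the class and method names are hypothetical and not from the Guidance):

```python
# Illustrative sketch only: compartmentalised agent memory with per-entry
# retention, so working memory and long-term memory can be purged
# independently. Names are hypothetical, not from the Guidance.
import time


class CompartmentalisedMemory:
    def __init__(self):
        # compartment name -> {key: (value, expiry timestamp)}
        self._compartments: dict[str, dict[str, tuple[object, float]]] = {}

    def put(self, compartment: str, key: str, value: object, ttl_seconds: float):
        """Store a value with a retention period in a named compartment."""
        entries = self._compartments.setdefault(compartment, {})
        entries[key] = (value, time.time() + ttl_seconds)

    def get(self, compartment: str, key: str):
        """Return a value if present and unexpired; expired entries are deleted."""
        entry = self._compartments.get(compartment, {}).get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._compartments[compartment][key]  # sanitise on read
            return None
        return value

    def purge(self, compartment: str):
        """Sanitise an entire compartment, e.g. working memory after a task."""
        self._compartments.pop(compartment, None)
```

Separating the compartments means a short retention period can apply to task-level working memory while long-term memory is governed (and audited) on its own schedule.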

Specific threats

The Guidance distinguishes between threats arising from authorised processing and those arising from unauthorised processing:

  • Authorised processing: lack of governance and policies, lack of development maturity, objective misalignment, feedback loops and bubble effects, shadow-leak (silent exfiltration of personal data), automation bias (excessive trust in agent outputs) and user profiling; and
  • Unauthorised processing: prompt injection (direct and indirect), memory and RAG poisoning, zero-click attacks, data exfiltration through URL parameters, session hijacking, ransomware attacks, and unauthorised access to agentic memory.

Recommendations

The Guidance proposes detailed measures that controllers and processors may adopt, including establishing an appropriate governance framework, evidence-based assessment, contractual controls, and explainability mechanisms. The data minimisation principle should be applied through access policies, data cataloguing, filtering, preventing shadow leaks, and pseudonymisation, while memory control should be implemented through compartmentalisation, retention periods, and sanitisation.
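Of the minimisation measures listed, pseudonymisation is the most readily illustrated in code. A standard approach (this is a general technique, not a method prescribed by the Guidance) is a keyed hash such as HMAC-SHA256, so that the pseudonym cannot be reversed or recomputed without the secret key; key management itself is out of scope here:

```python
# Illustrative sketch only: keyed pseudonymisation of an identifier using
# HMAC-SHA256. Without the secret key, the original identifier cannot be
# recovered or the pseudonym recomputed. Function name is hypothetical.
import hashlib
import hmac


def pseudonymise(identifier: str, key: bytes) -> str:
    """Return a stable, key-dependent pseudonym for the given identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the mapping is deterministic for a given key, records about the same data subject can still be linked for legitimate processing, while rotating or destroying the key severs that link.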

The Guidance makes further recommendations regarding automation controls and AI agent oversight, and emphasises the importance of appropriate consent management mechanisms, enhanced transparency measures, and literacy and training programmes at different organisational levels, including management, IT, and end users.

The press release is available here, and the Guidance is available here in Spanish and here in English.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

