ARTICLE
8 August 2025

AI And GDPR: A Road Map To Compliance By Design - Episode 5: Using AI

WilmerHale
WilmerHale provides legal representation across a comprehensive range of practice areas critical to the success of its clients. With a staunch commitment to public service, the firm is a leader in pro bono representation. WilmerHale is 1,000 lawyers strong with 12 offices in the United States, Europe and Asia.

The rise of artificial intelligence (AI) and its widespread availability offers significant growth opportunities for businesses. However, it necessitates a robust governance framework to ensure compliance with regulatory requirements, especially under the European Union's (EU) Artificial Intelligence Act (AI Act) (see our Guide to the AI Act) and the EU General Data Protection Regulation (GDPR). GDPR compliance is so important because (personal) data is a key pillar of AI. To function effectively, AI requires abundant, good-quality data so that it can be trained to identify patterns and relationships. Additional personal data is often gathered during deployment and incorporated into AI to assist with individual decision-making.

In this series of five blog posts, we discuss GDPR compliance throughout the AI development life cycle and when using AI.

This is our fifth and final episode. The first, second, third, and fourth episodes are available on the WilmerHale Privacy and Cybersecurity Law Blog.

Data Protection by Design

GDPR compliance plays a key role throughout the AI development life cycle, starting from the very first stages. This reflects one of the key requirements and guiding principles of the GDPR called data protection by design (Article 25 GDPR). Businesses are required to implement appropriate technical and organizational measures, such as pseudonymization, both when determining the means of processing and during the processing itself. These measures should aim to implement data protection principles, such as data minimization, and integrate necessary safeguards into the processing to ensure GDPR compliance and protect individuals' data protection rights.
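Pseudonymization, the technical measure expressly named in Article 25 GDPR, can be illustrated with a minimal sketch: a direct identifier is replaced with a keyed token, so records remain linkable for processing while the raw identifier is held apart from the data set. The function name and key handling below are illustrative assumptions, not a reference implementation; in practice the key would live in a key-management system.

```python
import hashlib
import hmac

# Illustrative only: the secret key must be stored separately from the
# pseudonymized data set (e.g., in a key-management system), so that the
# data cannot be attributed to an individual without that additional key.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a stable
    keyed token. The same input always yields the same token, so records
    stay linkable without exposing the identifier itself."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "jane.doe@example.com", "tenure_years": 4}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Note that pseudonymized data is still personal data under the GDPR; only irreversible anonymization takes data outside the Regulation's scope.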

GDPR Compliance When Using AI

The GDPR is applicable not only to companies that are developing AI but also to those using it. This episode focuses on organizations acting as controllers under the GDPR, meaning entities that determine the purposes and means of processing personal data. For instance, companies are considered controllers when they employ an AI large language model to analyze employee records or generate work products that include personal data.

Joint Controllership or Controller-Processor Arrangement

If the AI developer and the company using its AI solution collaboratively determine the purposes and methods of processing personal data in connection with such a solution, they are considered joint controllers. Consequently, they must enter into an agreement to establish their respective obligations under the GDPR.

If, however, the AI developer acts as a processor, meaning it processes personal data on behalf of the company using AI (the controller), they are required to enter into a controller-processor agreement. This agreement must outline the subject matter and duration of the processing, the nature and purpose of the processing, the types of personal data involved, the categories of individuals concerned, and the obligations and rights of the parties.

  • Joint Controllership Arrangement. The AI developer will rarely be a joint controller because it will rarely jointly determine the purposes of processing operations with companies using AI.
  • Controller-Processor Agreement. AI developers typically handle personal data for companies utilizing their AI solutions. This is generally applicable to all software-as-a-service offerings. Companies using such AI solutions must have a controller-processor agreement in place and verify that the processor guarantees the implementation of appropriate technical and organizational measures to ensure GDPR compliance. This especially includes measures to ensure an appropriate level of security and compliance with the GDPR requirements for transfers of personal data outside the EU.

Data Input

Companies using AI must ensure that they comply with the GDPR when inputting personal data into AI systems. Key points for consideration are as follows:

  • Awareness and training. Companies using AI must prioritize internal awareness and provide comprehensive training to all relevant staff. GDPR compliance cannot be achieved without adequately trained employees.
  • Purpose limitation. Companies using AI should clearly define the purposes for which personal data is processed. AI should not be used for purposes not permitted by the company's policy.
  • Data minimization. Companies should limit the amount of personal data included in AI systems. It is best to provide anonymized data as input unless personal data is necessary, in which case it should be limited to what is strictly necessary. Companies need to exercise caution with freely accessible large language models, as any information provided to such systems will generally be shared with the developer of that system. Companies may therefore consider prohibiting the use of these tools or ensuring that no personal data is input into them.
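As a simple illustration of the data-minimization point above, a company might strip obvious identifiers from free text before it is submitted to an external AI service. The patterns below are deliberately simplistic assumptions for the sake of the sketch; a real deployment would rely on a vetted PII-detection tool rather than two regular expressions.

```python
import re

# Simplistic, illustrative patterns; real-world PII detection requires
# a vetted tool covering names, addresses, IDs, and more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders
    before the text is sent to an external AI service."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

prompt = "Summarize the complaint from jane.doe@example.com (tel. +32 2 123 45 67)."
print(redact(prompt))
# prints: Summarize the complaint from [EMAIL] (tel. [PHONE]).
```

Such pre-submission redaction supports the data-minimization principle but does not by itself make the remaining text non-personal data.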

Data Output

Companies using AI also need to ensure that the system's output complies with the GDPR.

  • Accuracy. It is essential for companies using AI to verify that any personal data generated as output, or any personal data provided by the company based on such output, is accurate. As AI-generated data outputs might not always be accurate, reviewing and addressing potential inaccuracies is crucial. This process also helps mitigate any possible biases in AI systems.
  • Transparency. Companies must be transparent about their use of AI and inform third parties, such as their customers, about how AI is used and the purposes for its application. Specifically, companies that use AI for automated individual decision-making must provide individuals with relevant information in a concise, transparent, intelligible, and easily accessible form regarding the procedure and principles applied to use personal data to obtain a specific result (see episode 2 for more details).
  • Individuals' Rights. Companies using AI should ensure they respect individuals' rights regarding the processing of their personal data for AI purposes. This involves enabling individuals to access their personal data, correct any inaccuracies, object to the processing, or have their data erased under applicable legal conditions (see episode 3 for more details).
  • Security and Breach Notification. Companies using AI must ensure that AI systems are safe before use (see above, Controller-Processor Agreement). They must also have processes to notify relevant authorities and affected individuals when necessary (see episodes 3 and 4 for more details). For instance, if chat logs of an AI chatbot used for customer support containing personal data become publicly accessible due to a misconfiguration or an attack, the company would need to notify both the competent authority and the individuals concerned.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
