The KNLTB Case: Legitimate Interests & AI


The recent ruling by the Court of Justice of the European Union (CJEU) in Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens (Case C-621/22) addressed the adequacy of relying on legitimate interests as a legal basis and complemented the EDPB Guidelines on "legitimate interests" with valuable nuances and clarifications, particularly in the context of commercial interests and reasonable expectations.

While the case itself revolved around a seemingly straightforward issue, a sports association sharing member data with sponsors, the CJEU's pronouncements on the GDPR's "legitimate interests" provision have far-reaching consequences, particularly for the rapidly evolving field of artificial intelligence.

AI, with its insatiable appetite for data, often finds itself in a tricky position when it comes to GDPR compliance. Machine learning models, in particular, thrive on vast datasets, and these datasets frequently contain personal information. This raises a critical question: How can AI developers navigate the GDPR's complexities while ensuring their innovative technologies respect individuals' fundamental rights? The KNLTB case offers valuable guidance, shedding light on how the "legitimate interests" provision (Article 6(1)(f)) might also be applied in the AI context.

The GDPR demands that all processing of personal data be lawful. Article 6(1) outlines various legal bases for processing, with consent being the gold standard. However, consent isn't always feasible or appropriate. In such cases, Article 6(1)(f) offers an alternative: processing can be lawful if it's "necessary for the purposes of the legitimate interests pursued by the controller or by a third party," as long as those interests are not overridden by the interests or fundamental rights and freedoms of the individuals whose data is being processed.

The CJEU's ruling in KNLTB highlighted that the GDPR does not strictly define "legitimate interests". It affirmed that legitimate interests can encompass a broad range of commercial purposes, provided they are lawful and proportionate and rigorously adhere to the GDPR's safeguards and balancing tests. In every case, a rigorous three-part test must be satisfied:

  1. Is the interest lawful? It must align with the GDPR's overarching goals of protecting personal data.
  2. Is the processing truly necessary? There must be no less intrusive way to achieve the desired outcome.
  3. Have the interests been balanced? The controller's interests cannot override the fundamental rights and freedoms of the data subjects, especially if those individuals wouldn't reasonably expect their data to be used in that way.

This three-part test, as articulated in KNLTB, has ramifications for how AI developers can use personal data to train their models.
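Purely by way of illustration, AI teams could capture the outcome of this test in an auditable record kept alongside each training dataset. The Python sketch below uses a hypothetical schema of my own devising; nothing in the ruling or the EDPB Guidelines prescribes this structure, but it shows how each limb of the test maps onto something that must actually be written down:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LegitimateInterestsAssessment:
    """Hypothetical record of the three-part test; all field names are illustrative."""
    # Limb 1: is the interest lawful?
    purpose: str
    interest_is_lawful: bool
    lawfulness_rationale: str
    # Limb 2: is the processing strictly necessary?
    alternatives_considered: list[str]
    why_alternatives_insufficient: str
    # Limb 3: balancing against data subjects' rights and reasonable expectations
    within_reasonable_expectations: bool
    balancing_rationale: str
    assessed_on: date = field(default_factory=date.today)

    def may_rely_on_article_6_1_f(self) -> bool:
        # All three limbs must be satisfied before Article 6(1)(f) is relied upon.
        return (
            self.interest_is_lawful
            and bool(self.why_alternatives_insufficient)
            and self.within_reasonable_expectations
        )
```

A record like this does not make the processing lawful by itself, but it forces the three questions to be answered, and documented, before any data is touched.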

Let's delve into how the KNLTB ruling might specifically impact the world of AI:

Lawful Purpose: The CJEU acknowledged that commercial goals can be considered legitimate interests. This offers some comfort to AI developers who rely on personal data to train their models, given that enhancing AI performance or expanding business opportunities could potentially fall within the same ambit. However, as I remarked above, the ruling also serves as a cautionary note: not all commercial interests automatically pass muster. AI companies need to go further, demonstrating that their interests are genuinely lawful and aligned with the GDPR's spirit and with the assessment outlined above.

Moreover, transparency is key. AI developers must be upfront with individuals about how and why their data might be used. This echoes guidance from DPAs like France's CNIL, which has consistently emphasized the importance of transparency and proportionality in AI systems, particularly those deemed "high-risk."

Necessity: The "strict necessity" requirement highlighted in KNLTB, when applied in the AI sector, poses a significant challenge for AI developers. They must convincingly demonstrate that using personal data is absolutely essential for the AI's purpose and that alternatives, such as synthetic data or anonymisation techniques, simply won't cut it. This is especially crucial when dealing with massive datasets where certain data points could potentially be excluded or anonymised without compromising the AI's functionality.

The principle of data minimisation also comes into play here: AI companies must only use the bare minimum of data necessary to achieve their objectives, prioritising anonymisation and synthetic data wherever possible. If AI developers believe personal data is truly irreplaceable, they need to meticulously document their reasoning, explaining why less invasive methods were deemed unsuitable.
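To make the point concrete, a training pipeline can enforce minimisation mechanically before any record reaches a model. The Python sketch below is a toy example: the column names and the list of direct identifiers are invented, and real anonymisation is considerably harder than dropping columns:

```python
import pandas as pd

# Invented column names for a hypothetical raw membership dataset.
DIRECT_IDENTIFIERS = ["name", "email", "phone", "member_id"]

def minimise_for_training(raw: pd.DataFrame, needed: list[str]) -> pd.DataFrame:
    """Keep only fields demonstrably needed for the documented purpose,
    and strip direct identifiers even if they were requested."""
    allowed = [c for c in needed if c in raw.columns and c not in DIRECT_IDENTIFIERS]
    return raw[allowed].copy()

raw = pd.DataFrame({
    "name": ["A. Jansen"], "email": ["a@example.org"],
    "age_band": ["30-39"], "region": ["NL-ZH"], "activity_score": [0.42],
})
# "email" is requested but filtered out; only coarse features survive.
training_data = minimise_for_training(raw, ["age_band", "region", "activity_score", "email"])
print(list(training_data.columns))  # ['age_band', 'region', 'activity_score']
```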

Balancing Interests: Perhaps the most delicate aspect of the "legitimate interests" test is the balancing act. AI developers will be walking a tightrope here: they must carefully weigh their own interests against the rights and freedoms of the individuals whose data they're using. Crucially, the KNLTB ruling emphasised the importance of "reasonable expectations." If individuals wouldn't reasonably expect their data to be used for a particular purpose, their rights may well trump the controller's interests.

This has significant ramifications for AI applications that rely on scraping data from publicly available sources like websites and social media. Just because information is publicly accessible doesn't automatically mean individuals expect it to be fed into AI training models, especially without their knowledge or consent.
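One concrete, if partial, way to respect such expectations when collecting public web data is to honour machine-readable opt-out signals before fetching anything. The sketch below uses Python's standard-library robots.txt parser; the URL and user-agent string are placeholders, and passing this check is in no way a substitute for the full Article 6(1)(f) analysis:

```python
from urllib import robotparser
from urllib.parse import urlsplit

def allowed_to_fetch(page_url: str, user_agent: str = "example-ai-crawler") -> bool:
    """Consult the site's robots.txt before collecting a page as training data."""
    parts = urlsplit(page_url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetches and parses the site's robots.txt
    return rp.can_fetch(user_agent, page_url)

# Placeholder URL: a path the site operator has disallowed would return False.
if allowed_to_fetch("https://example.com/profiles/123"):
    print("robots.txt permits fetching; the legal analysis still applies")
```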

Accountability: The KNLTB judgment reinforces the GDPR's emphasis on accountability. It's not enough for AI companies to simply assume they're complying with Article 6(1)(f). They need to meticulously document their decision-making process, demonstrating how they've met each of the three criteria, particularly necessity, proportionality, and the balancing of interests.

This also aligns with the EDPB guidelines and with recommendations from DPAs, including the CNIL, which encourage AI companies to conduct Data Protection Impact Assessments (DPIAs) for high-risk processing activities. DPIAs are a valuable tool for identifying, assessing, and mitigating privacy risks, especially when dealing with data from individuals who might not anticipate their information being used in this way. DPIAs also serve as a proactive compliance measure, showcasing an organization's commitment to data protection.
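As a purely illustrative sketch of the "identify, assess, mitigate" exercise a DPIA involves, the Python below keeps a miniature risk register; the scoring scale and the example entries are invented and carry no regulatory weight:

```python
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    description: str
    likelihood: int  # invented scale: 1 (rare) to 5 (almost certain)
    severity: int    # invented scale: 1 (minimal) to 5 (severe impact on rights)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    PrivacyRisk("Re-identification from quasi-identifiers in training data", 3, 4,
                "Aggregate ages into bands; suppress rare attribute combinations"),
    PrivacyRisk("Training data reused beyond the documented purpose", 2, 5,
                "Purpose-bound access controls and periodic audits"),
]
# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation}")
```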

The KNLTB ruling, coupled with evolving DPA guidance, has raised the bar for lawful data processing under the "legitimate interests" provision. It is also worth noting that where sensitive data is in play, Article 9 of the GDPR will apply, and Article 9(2) sets out the specific conditions under which such data may be processed. The recent ruling of the CJEU in Meta v Bundeskartellamt gives Article 9 a much wider reach: the Court decided that information need not directly refer to the protected categories under that Article; it suffices that the data processing allows information falling within one of those categories to be revealed.

For developers and AI companies, this means greater scrutiny, more rigorous justification, and a heightened focus on transparency and accountability when processing personal data.

References:

1. CJEU, Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens, C-621/22, Judgment of 4 October 2024.

2. CJEU, Meta Platforms and Others v Bundeskartellamt, C-252/21, Judgment of 4 July 2023.

3. CNIL, AI Action Plan, 2023.

4. CNIL, Recommendations on the Development of AI Systems, 2024.

5. European Data Protection Board, Guidelines 1/2024 on processing of personal data based on Article 6(1)(f) GDPR, 2024.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
