
As envisaged in our predictions for 2024, close regulatory scrutiny of adtech looks unlikely to wane this year. 2023 saw multiple CJEU rulings that resulted in Meta relying on three different lawful bases in quick succession when processing its users' personal data for targeted advertising purposes. The year ended with Meta relying on consent and proposing a subscription model for advertising-free services in the EU. 2024 has now seen the Dutch, Norwegian and German data protection authorities request a binding EDPB opinion on this so-called "pay or okay" model, to determine what constitutes freely given consent. In this context, February saw noyb and 27 other NGOs issue a joint letter to the EDPB calling for the protection of free consent online.

Max Schrems has stated that "under EU law, users have to have a 'free and genuine choice' when they consent to being tracked for personalised advertisements. In reality, they are being forced to pay a fee to protect their fundamental right to privacy." Given that users tend to deal with a significant number of websites, apps and companies each month, if other organisations adopt the "pay or okay" model as well, this could lead to significant user costs, adding up to "thousands of Euros per year" according to Schrems.

To date, the CJEU and authorities have made it clear that "pre-ticked boxes" do not constitute valid consent (and that relegating the reject button to the second layer of a cookie banner is likewise unlawful). However, the CJEU has not yet ruled on the "pay or okay" model. Providers in the adtech value chain will therefore welcome clarity from the EDPB on this topic, so that they can be confident in their lawful bases for data processing and in the design of privacy-friendly models. The topic was listed as one of the items for discussion at the 90th EDPB meeting on 13 February 2024, and yesterday the UK ICO also launched a "call for views" on its regulatory approach to this model. It therefore seems likely that, one way or another, we will receive much needed clarity on the position later in the year.


On 28 February 2024, the Information Commissioner, John Edwards, gave a warning to organisations about their cookie banner compliance in an opening keynote speech at the IAPP's Data Protection Intensive event. In this speech, the ICO cautioned that it would be prioritising the fair use of cookies this year and that organisations should take heed: the ICO is coming for you (if you have non-compliant cookie banners). This follows the ICO's statement in November 2023 that it had written to organisations running the UK's most visited websites, warning them that they faced enforcement action if they failed to give users fair choices over whether or not to be tracked for personalised advertising.

Giving an insight into its investigations, the ICO revealed that there were potentially non-compliant cookie banners on 53 out of the top 100 websites but that, following threatened enforcement action by the ICO, this number will soon go down to just 11. Although this represents a high success rate with respect to the top 100, the ICO also acknowledged that, due to the sheer volume of websites out there, it will need to automate its processes for assessing cookie banner compliance going forward in order to drive real change. To that end, the ICO announced that it will be working on how to monitor and regulate cookie compliance "at scale", both internally and with technical experts. The ICO's (rather ominous) last word on cookies was therefore: "Our bots are coming for your bots".

For organisations currently operating websites with non-compliant cookie banners and looking for guidance, the ICO's key message is that "it must be just as easy to reject all non-essential cookies, as it is to accept them". This is likely to be welcome news for all website users (i.e. all of us) and will hopefully signal a new era where a "reject all" button will be front and centre (or at least next to an "accept all" button) on all website cookie banners.
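The ICO has not said how its automated monitoring will work but, purely by way of illustration, a first-layer check against the "just as easy to reject" message might look something like the following TypeScript sketch using Playwright. The button wording, selectors and pass/fail heuristic are our own assumptions, not the ICO's criteria:

```typescript
import { chromium } from "playwright";

// Hypothetical check: does the first layer of a page's cookie banner offer a
// "reject all" control wherever it offers an "accept all" control?
async function checkFirstLayer(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "domcontentloaded" });

  // Assumed wording; real banners vary widely and may sit inside iframes.
  const accept = page.getByRole("button", { name: /accept all/i }).first();
  const reject = page.getByRole("button", { name: /reject (all|non-essential)/i }).first();

  const acceptVisible = await accept.isVisible().catch(() => false);
  const rejectVisible = await reject.isVisible().catch(() => false);
  await browser.close();

  // Heuristic only: if cookies can be accepted on the first layer, a reject
  // option should be equally available there.
  return { acceptVisible, rejectVisible, looksCompliant: !acceptVisible || rejectVisible };
}

checkFirstLayer("https://example.com").then(console.log);
```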


February also saw the Upper Tribunal hear an appeal by the ICO against the First-tier Tribunal's ruling on the ICO's enforcement notice against Experian, with the ICO alleging that the First-tier Tribunal did not properly consider whether Experian had breached its transparency obligations under the GDPR.

By way of a quick recap of the case: following a two-year investigation, the ICO concluded in October 2020 that Experian had processed the personal data of around 51 million individuals in an intrusive manner and in ways they would not expect, with unclear accompanying fair processing information (where this was provided at all). The ICO also said that the lawful basis of legitimate interests, upon which Experian relied to process the personal data, was not available. Experian appealed the enforcement notice and the First-tier Tribunal ruled partially in its favour, overturning the enforcement notice and replacing it with a slimmed-down substitute notice. The ICO then appealed the First-tier Tribunal's ruling.

The First-tier Tribunal previously accepted Experian's submission that "the worst outcome of Experian's processing in terms of what happens to the data at the end of the process is that an individual is likely to get a marketing leaflet which might align to their interests rather than be irrelevant". However, counsel for the ICO submitted to the Upper Tribunal that the First-tier Tribunal had incorrectly focused on the consequences of Experian's processing activities rather than taking into consideration the expectations of data subjects. Counsel for Experian countered that it is the impact of an organisation's processing activities on data subjects which should be the main consideration when assessing transparency requirements, so that the underlying policy objectives remain proportionate.

There was also back and forth over whether Experian's processing was intrusive (a point at the core of the ICO's case): the ICO challenged the First-tier Tribunal's assessment, whilst Experian relied on the First-tier Tribunal's findings in its favour on this point.

It remains to be seen who will come out on top as we await the decision of the Upper Tribunal.


The French data protection authority has fined Amazon's logistics subsidiary in France €32 million for breaking data protection laws.

Amazon France Logistique ("AFL"), a subsidiary of Amazon EU SARL, is responsible for managing Amazon's large French distribution centres (where parcels are received, stored and prepared for delivery). Employees in AFL warehouses were required to use individual scanners, which continually collected data on: (i) how quickly items were scanned; and (ii) how much downtime elapsed between scans. The scanners enabled AFL to flag potential or actual errors by employees and to monitor their productivity in real time. AFL stored this data for 31 days and used it to plan work schedules, regularly assess its employees and identify training needs. AFL also deployed video surveillance at certain warehouses.

In November 2019, following several media reports on AFL's practices, the French Data Protection Authority (the CNIL) began an investigation, including a series of site inspections. In July 2023, the CNIL held that AFL had committed several breaches of the EU GDPR. In particular:

  • Article 5(1)(c) – failure to comply with the principle of 'data minimisation' by retaining all of the data from the scanners for 31 days, rather than retaining only aggregated data, which would have achieved the same result (see the sketch below);
  • Article 6 – failure to have a lawful basis for the processing of personal data gathered through the monitoring activities – the CNIL considered that AFL was unable to rely on legitimate interests because the monitoring activities were disproportionate;
  • Articles 12 and 13 – failure to provide temporary workers with access to the privacy policy, and failure to provide the necessary information to employees and visitors at those warehouses where video surveillance was deployed; and
  • Article 32 – failure to ensure that the personal data gathered was sufficiently secure, as the video surveillance software was protected by inadequate passwords and account sharing was prevalent.

As a result, AFL was issued with a fine of €32 million. For key takeaways from the decision, please refer to our related blog here.
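To illustrate the CNIL's data-minimisation point in the first bullet above: rather than retaining every raw scan event for 31 days, an employer could compute periodic aggregates and discard the underlying events. Below is a minimal TypeScript sketch; the event shape and the chosen metrics are our own assumptions, not a description of AFL's actual systems.

```typescript
interface ScanEvent {
  employeeId: string;
  scannedAt: Date; // timestamp of each individual item scan
}

interface DailyAggregate {
  employeeId: string;
  day: string;           // e.g. "2024-02-01"
  totalScans: number;
  avgGapSeconds: number; // average time between consecutive scans
}

// Reduce raw per-scan events to one row per employee per day, after which the
// raw events can be deleted (the aggregates suffice for scheduling and
// training-needs assessments).
function aggregateScans(events: ScanEvent[]): DailyAggregate[] {
  const byKey = new Map<string, number[]>();
  for (const e of events) {
    const key = `${e.employeeId}|${e.scannedAt.toISOString().slice(0, 10)}`;
    if (!byKey.has(key)) byKey.set(key, []);
    byKey.get(key)!.push(e.scannedAt.getTime());
  }

  const aggregates: DailyAggregate[] = [];
  for (const [key, times] of byKey) {
    const [employeeId, day] = key.split("|");
    times.sort((a, b) => a - b);
    let gapTotal = 0;
    for (let i = 1; i < times.length; i++) gapTotal += (times[i] - times[i - 1]) / 1000;
    aggregates.push({
      employeeId,
      day,
      totalScans: times.length,
      avgGapSeconds: times.length > 1 ? gapTotal / (times.length - 1) : 0,
    });
  }
  return aggregates;
}
```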


Online safety and, by extension, content moderation remain hot topics in 2024: the EU's Digital Services Act became fully applicable to all in-scope online platforms on 17 February 2024 and, in the UK, Ofcom's consultations are underway regarding its codes of practice under the Online Safety Act (OSA). Content moderation is one of a range of measures that Ofcom envisages in-scope organisations must undertake to comply with their safety duties under the OSA. It principally involves assessing user-generated content on online services to determine whether certain standards are met (plus taking any actions required as a result of that analysis), and it can involve the processing of users' personal data.

The ICO has acknowledged that compliance with one regime does not necessarily amount to compliance with the other. To that end, on 16 February 2024, the ICO published the first in a series of pieces of guidance on the intersection between the two regimes, to assist organisations in conducting content moderation that meets their online safety duties whilst also complying with data protection legislation. The guidance forms part of the ICO's ongoing collaboration with Ofcom on data protection and online safety technologies.

The ICO outlines a number of principles to bear in mind as part of any content moderation exercise, including the need to: (i) conduct a DPIA before processing personal data, given the potentially high risk to individuals' rights and freedoms; (ii) identify a lawful basis before processing; (iii) process personal data fairly, without unjustified adverse effects; (iv) use personal data in a way that is adequate, relevant, and only as necessary; (v) clearly inform individuals about how their data is used, the decisions made using that data and how they can exercise their data subject rights; and (vi) notify users if content moderation involves solely automated decision-making with legal and similarly significant effects. Some might say the guidance effectively tells people to comply with the law!
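As a purely illustrative example of principle (vi), a moderation system could record whether each decision was made solely by automated means and whether it has legal or similarly significant effects, and notify the user where both apply. The field names in this TypeScript sketch are our own assumptions, not the ICO's specification:

```typescript
interface ModerationDecision {
  contentId: string;
  outcome: "removed" | "restricted" | "no-action";
  lawfulBasis: string;              // identified before processing (principle (ii))
  solelyAutomated: boolean;         // no meaningful human involvement
  legallySignificantEffect: boolean;
}

// Principle (vi): users must be notified where content moderation involves
// solely automated decision-making with legal or similarly significant effects.
function mustNotifyUser(decision: ModerationDecision): boolean {
  return decision.solelyAutomated && decision.legallySignificantEffect;
}
```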

The guidance will be updated (as necessary) to reflect technological developments and Ofcom's finalised codes of practice. For further information on online safety regulation, please refer to our Global Online Safety Insights Page; a recording of our recent Lexology Masterclass is also available here.


Following landmark developments in 2023, the international spotlight remains firmly on AI regulation as we enter 2024. February alone saw not only the European Parliament's IMCO and LIBE Committees vote overwhelmingly to adopt the EU AI Act and confirmation that the Indian Government is working on draft regulation for artificial intelligence, but also the long-awaited UK Government response to its AI Regulation White Paper, released on 6 February 2024 (the "Response"), and the House of Lords Communications and Digital Committee report on "Large language models and generative AI", published on 2 February 2024 (the "Report").

There are few surprises in the Response and Report: the UK is still forging its own path towards the regulation of AI. In contrast to the centralised legislative framework set out in the EU AI Act, the Response and Report largely reiterate and build on the original adaptable, pro-innovation, sector-led approach set out in the Government's March 2023 AI White Paper. That said, the Government seems to have taken on board a couple of focus areas from the EU approach. These include an acknowledgement that legislative action will be required once the risks associated with the technology have matured. For the first time, the Response also sets out initial thinking on future targeted, binding requirements for the most advanced, highly capable general-purpose AI systems, principally because the wide-ranging potential uses of these systems challenge the current context-led regulatory approach.

For further information on the key takeaways from the Response and Report, please refer to our blog here.


Existing regulators retain a key role in implementing the UK's agile approach to regulating AI under both the Response and the Report mentioned above. The Government has empowered them to create targeted measures in line with five common principles and tailored to the risks posed by different sectors. Regulators have also been asked to publish their strategic plans for managing the risks and opportunities around AI by the end of April 2024. To avoid a patchwork approach across regulators, other priorities include strengthening the central coordination mechanisms for UK regulators on AI and developing the expertise of the AI Safety Institute (both nationally and internationally).

Chief among those regulators is the UK's data protection authority, the Information Commissioner's Office. In January, the ICO launched a consultation series on how data protection law should apply to the development and use of generative AI. The first chapter of the consultation focussed on the lawful basis for web scraping to train generative AI models and closed on 1 March 2024. February saw the ICO release the second chapter of the consultation series, which looks at the application of the "purpose limitation" principle throughout the various stages of the generative AI lifecycle.

The consultation considers that the generative AI model lifecycle involves several stages, each potentially processing different types of personal data for distinct purposes. Clearly defining the purpose of processing personal data at each stage allows organisations to understand the scope of each processing activity and assess compliance with data protection legislation. When reusing training data, developers must ensure that the purpose of training a new model aligns with the purpose for which the data was originally collected. The ICO also emphasises that developing a generative AI model, and creating an application based on such a model, are considered different purposes under data protection legislation. The ICO therefore suggests that, before processing begins, developers should: (i) define clear, specific and explicit purposes for each lifecycle stage; and (ii) explain what personal data is processed at each stage and why it is necessary for the stated purpose. The consultation closes on 12 April 2024.
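One way to operationalise the ICO's suggestion, purely as an illustration, would be a documented purpose register with one entry per lifecycle stage. The stage names, fields and example entries in this TypeScript sketch are our own assumptions, not ICO-prescribed requirements:

```typescript
type LifecycleStage = "training" | "fine-tuning" | "deployment";

interface ProcessingPurpose {
  stage: LifecycleStage;
  purpose: string;        // clear, specific and explicit purpose for this stage
  personalData: string[]; // what personal data is processed at this stage
  whyNecessary: string;   // why that data is necessary for the stated purpose
  lawfulBasis: string;
}

// Developing the model and building an application on it are distinct
// purposes, so each gets its own documented entry.
const purposeRegister: ProcessingPurpose[] = [
  {
    stage: "training",
    purpose: "Develop a general-purpose language model",
    personalData: ["names and opinions appearing in web-scraped text"],
    whyNecessary: "Model quality depends on broad, real-world training text",
    lawfulBasis: "Legitimate interests (subject to a balancing test)",
  },
  {
    stage: "deployment",
    purpose: "Operate a customer-service assistant built on the model",
    personalData: ["user queries", "account identifiers"],
    whyNecessary: "Needed to answer account-specific questions",
    lawfulBasis: "To be assessed for the specific application",
  },
];
```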
