"Bridging" the UK Gap: Adequate for now, but reservation remain
Another month, another series of international data transfer developments.
First up: in an effort to facilitate the flow of personal data between the UK and the US, on 21 September the UK Secretary of State for Science, Innovation and Technology published the Data Protection (Adequacy) (United States of America) Regulations 2023, implementing the so-called UK-US data bridge (the "Data Bridge"). The Data Bridge determines that the US provides an adequate level of protection for transfer purposes, where the transfer is from an organisation located in the UK to an organisation located in the US that is listed on the EU – US Data Privacy Framework (the "Framework") as participating in the UK Extension to the Framework. The Data Bridge will become effective on 12 October 2023 and follows the European Commission's recent US adequacy decision of 10 July in relation to the Framework (for further details refer to our related blog here).
By relieving some of the red-tape duties placed on American organisations, the Government hopes to speed up processes and reduce costs for UK entities engaging in business with US organisations, much like the other UK "data bridge" arrangements in place with key partner countries, including the Republic of Korea. However, as with the Framework, we think it is unlikely that we will see a mass shift in the UK away from the current reliance on mechanisms such as the EU Standard Contractual Clauses ("SCCs") supplemented by the UK Addendum or the UK International Data Transfer Agreement, particularly in light of the significant time and resource organisations have spent repapering existing contracts and transfer mechanisms as a result of the Privacy Shield being invalidated in the Schrems II case and the new SCCs being published. In addition, there is a significant risk that the EU – US adequacy decision could be amended and/or withdrawn in due course following legal challenge (see below). By extension, what would that mean for the UK's Data Bridge?
Closer to home, the Data Bridge has already received its own criticism. Although the ICO has indicated in its Opinion that the Data Bridge provides an adequate level of data protection, it also flags four areas of concern that could pose risks to UK data subjects. These areas include: (i) compliance with the UK Extension to the Framework; (ii) compliance with the Executive Order signed by President Biden; (iii) the effectiveness of US oversight of the UK Extension to the Framework; and (iv) significant changes to Federal and/or State laws. These concerns may well further support challenges to the Data Bridge going forward.
Data transfer impact assessments ("TIAs") will also no longer be required for UK – US data transfers relying on the Framework and the Data Bridge. Although the Data Bridge does not remove the requirement to conduct TIAs for other transfer mechanisms, it is anticipated that it will simplify the process, given that the European Commission (as well as the UK Government and the ICO) has effectively already considered the US laws in place.
For more information regarding the Framework please refer to our blog here and for more information on the Data Bridge please visit the UK Government's Factsheet.
EU – US Data Privacy Framework challenge, as expected
Still on the theme of international data transfers: as predicted, on 6 September a member of the French Parliament, Philippe Latombe, lodged an application with the CJEU to annul the EU – US Data Privacy Framework (the "Framework"). The Framework was approved by the European Commission in July this year. Whilst it provided some much-needed certainty around international data transfers in the short term, uncertainty remained in the medium term (refer to our previous blog here), and that uncertainty has now been exacerbated by the challenge.
In his statement, Latombe claimed that the Framework violates the EU's Charter of Fundamental Rights by insufficiently protecting the right to private and family life. He fears that the mass collection of personal data will compound existing concerns around how individuals' information is used under US surveillance rules.
This follows previous criticism of the draft Framework by prominent authorities, such as the EDPB and the European Parliament (in a resolution adopted in May the latter concluded that the draft Framework was not sufficient to justify an adequacy decision). The Chair of the Committee on Civil Liberties, Justice and Home Affairs has also highlighted in the past that the draft Framework contained "significant improvements" but was "missing elements", including those on transparency.
In addition, Austrian privacy activist, Max Schrems, has stated that "the "new" Trans-Atlantic Data Privacy Framework is a copy of [the] Privacy Shield (from 2016), which in turn was a copy of [the] "Safe Harbour" (from 2000). Given that this approach has failed twice before, there was no legal basis for the change of course...there is little change in US law or approach taken by the EU". Unsurprisingly, Schrems' non-profit organisation, noyb, is also expected to challenge the Framework before the CJEU and has stated that "it is not unlikely that a challenge would reach the CJEU by the end of 2023 or beginning of 2024. A final decision [...] would be likely by 2024 or 2025."
India's new data protection law: What does it mean for international business?
The end of the Summer and start of Autumn have also seen progress in the reform of data privacy regimes further away from home. First: after multiple iterations and cycles of review since 2019, August saw India's long-awaited Digital Personal Data Protection Act 2023 (the "DPDPA") receive assent from the President of India and be published in the Gazette of India. Given that India is the world's fifth largest economy, the DPDPA will be of importance to a large number of international businesses that operate in India, rely on Indian service providers/group service companies for their operations, or are looking to enter Indian markets.
The DPDPA will replace the current permissive regime, which dates back to the 2000s and provides scant protection for personal data. The new regime will come into effect once notified by the Government, which may take a staggered approach to implementation. Through the DPDPA, India has sought to forge its own standards and principles for data protection regulation, emulating in some respects principles from other regimes such as the GDPR and the Singapore regime.
However, the DPDPA is not simply a copy-and-paste exercise. In establishing its data protection regime, Indian lawmakers were keen to ensure it was well-suited to a developing economy like India. For multi-nationals, though, this is another example of "one size" not fitting all when it comes to global data protection compliance across a footprint that includes both GDPR and non-GDPR based regimes.
For further guidance please read our fuller blog articles on the practical implications of the DPDPA for international business and Common Concepts In The Data Protection Laws Of India And Singapore.
Spotlight on sweeping reform of Australian privacy laws
Keeping with the theme of global data protection reform: on 29 September 2023, the Australian Government released its long-awaited response to the Attorney-General's Department's Privacy Act Review Report 2022 on reform of the data privacy framework.
Of the 116 proposals in the Report (which we summarised in this earlier briefing), the Government agrees with 38 proposals, agrees in principle with 68 proposals and 'notes' 10 proposals. Despite the Government expressing its support, or in-principle support, for most of the proposed reforms, several steps nonetheless remain before their introduction (including the undertaking of a targeted consultation with impacted entities).
We expect to hear more about these next steps after next month's referendum. Any proposals ultimately adopted by the Government will apply in the context of last year's introduction of increased penalties (of up to AU$50 million, or more in certain cases) and greater regulatory powers (as detailed in this briefing), and will complement other reforms including the 2023-2030 Australian Cyber Security Strategy, the National Strategy for Identity Resilience, and Supporting Responsible AI in Australia.
For further detail on the Australian Government's response please refer to our comprehensive blog post here.
China considers easing cross-border data transfer restrictions
On 29 September 2023, the Cyberspace Administration of China ("CAC") released a draft regulation which sets out certain exemptions from the cross-border data transfer restrictions under the Personal Information Protection Law, Data Security Law and Cybersecurity Law.
In particular, in what appears to be a relaxation of the existing rules, organisations will not be required to adopt any of the mechanisms (i.e. CAC security assessment, Standard Contractual Clauses and certification) for certain specified data transfer scenarios. The consultation period will end on 15 October 2023.
For further information please refer to our colleague's more detailed commentary on this development here.
ICO Draft Guidance: Lawful use of biometric data
With the increased use of biometric data (including fingerprint recognition and iris scanning) and closer scrutiny of its use by international regulators, August saw the timely publication of the UK Information Commissioner's Office's ("ICO") draft guidance on the lawful use of biometric data. The draft guidance is subject to consultation until 20 October 2023 and focuses on how data protection legislation applies when organisations use biometric data in recognition systems to identify individuals (e.g. as part of access controls).
It includes the definition of biometric data under the UK GDPR; when it is considered special category data; its use in biometric recognition systems; and the data protection requirements organisations need to comply with. However, the ICO also makes clear that the guidance is "not intended to be a comprehensive guide to compliance."
Of particular note, the guidance clarifies that not all biometric data is "special category biometric data" – it only qualifies as such where it is used to uniquely identify an individual. However, even where this is not the purpose, the guidance flags that the biometric data processed may still include other types of special category data (i.e. information revealed by the biometric data, such as an individual's ethnic origin). The ICO also states that explicit consent is likely to be the only valid legal basis for processing special category biometric data, although an alternative option should be available to individuals to ensure that consent is freely given. The guidance also considers biometric data and recognition systems in the context of AI (e.g. as "input data" to train an AI system), which is unsurprising given that this emerging technology is currently under the spotlight.
Whilst this first phase of guidance, and any clarity it provides, will be welcomed by industry, it lacks practical detail in some areas (for example, while the guidance confirms the need for security measures, it does not offer practical solutions as to what these should look like, beyond confirming that biometric data must be encrypted). The second phase of this guidance (biometric classification and data protection) will include a call for evidence early next year.
For further commentary from HSF's Global Head of Data and Privacy Miriam Everett, please refer to this article that was first published in Global Data Review.
Global statement on protecting personal data from unlawful data scraping
On 24 August 2023, the UK Information Commissioner's Office (the "ICO") and eleven other data protection authorities ("DPAs") published a joint statement which highlights the data privacy risks posed by unlawful data scraping of publicly accessible personal data, particularly on social media.
'Data scraping' is an automated process that extracts large amounts of information from websites for other uses, most of which are legitimate but some of which may be unlawful. In particular, the statement highlights the risk of scraped personal data being used for: (i) targeted cyber-attacks; (ii) identity fraud; (iii) individual profiling and surveillance; (iv) foreign intelligence gathering; and (v) unwanted spam marketing messages. More broadly, it also warns of users' personal data being scraped without their knowledge and used for unexpected purposes, including after they have deleted their social media accounts. There has also been a rise in data scraping practices to generate "input data" on which to train AI systems.
The statement re-iterates the DPAs' expectations that social media companies ("SMCs") maintain vigilance and implement multi-layered technical and procedural controls to mitigate these risks. Among these, the statement suggests that SMCs designate specific teams to monitor scraping activities, 'rate limit' the number of visits by one account, monitor particularly active new accounts and take action to tackle 'bot' activity.
Finally, the statement outlines a number of steps that individuals can take themselves to minimise the privacy risks from data scraping. These include minimising the nature and extent of data shared on social media and understanding and managing the operator's privacy policy and privacy settings.
The statement was signed by DPAs representing the UK, Australia, New Zealand, Canada, Hong Kong, Switzerland, Norway, Colombia, Jersey, Morocco, Argentina and Mexico. SMCs were invited to submit feedback within one month of the date of the statement demonstrating how they comply with its expectations.
AI regulation remains an EU priority, as UK report on AI raises issues but answers are still pending
It wouldn't be a monthly Data Wrap without an update on the regulation of AI! On 12 September 2023, European Commission President von der Leyen presented the 2023 State of the Union address which, as anticipated, included a focus on prioritising the responsible use of artificial intelligence. This is set against the global policy discussions around AI at the G7 and G20 in early September, the impending UK Artificial Intelligence Safety Summit and the publication on 31 August 2023 of the House of Commons Science, Innovation and Technology Select Committee's interim report, The Governance of Artificial Intelligence (the "Report").
The Report follows a recent inquiry into the impact of AI on several sectors and identifies twelve key challenges with the use of AI, including bias, privacy, misrepresentation, access to data, and the "black box" challenge. Whilst these are not new concepts, they form a useful list of challenges to consider; the Report does not, however, go so far as to offer solutions to address them at this stage. The Report encourages the UK Government to move directly to legislate for AI, rather than to apply the approach set out in its White Paper of March 2023, which envisaged five common principles to frame regulatory activity and to guide the future development and use of AI models and tools. These principles were not to be put on a statutory footing initially but were to be "interpreted and translated into action by individual sectoral regulators, with assistance from central support functions". For further detail on the White Paper please refer to our previous blog here.
The Report calls for the Government to address each of the twelve challenges outlined and makes clear the growing imperative to accelerate the development of public policy thinking on AI "to ensure governance and regulatory frameworks are not left irretrievably behind the pace of technological innovation". The Report concludes that "a tightly-focussed AI Bill in the next King's Speech would help, not hinder, the Prime Minister's ambition to position the UK as an AI governance leader". In its view, the right governance frameworks must be established rapidly and effectively to ensure the UK is positioned amongst other leaders in international AI initiatives and to avoid regulation from other jurisdictions becoming the default.
For further information please refer to our fuller blog here.
UK National Risk Register 2023: Cyber risk in Critical National Infrastructure
The UK Government published its 2023 National Risk Register (the "Risk Register") in August 2023, listing the 89 main publicly acknowledged risks facing the UK. The Risk Register reflects the principles of the UK Government Resilience Framework by communicating risk information to help practitioners, businesses and other organisations prepare for those risks. Cyber-attacks on gas infrastructure, electricity infrastructure, civil nuclear, fuel supply infrastructure, the health and social care system, the transport sector, telecommunications systems and one or more UK retail banks are cited as key risks in the report.
Of particular note, the key findings of the report include that energy security is a growing risk of concern and that three major risks, including the disruption of energy supplies, have been graded and made public.
Cyber-related risks therefore feature prominently and, given the volume of cyber-attacks on critical national infrastructure globally, the UK Government has previously confirmed its intention to revise the cyber security obligations contained in the Network and Information Systems ("NIS") Regulations 2018. However, in the absence of a draft of the UK equivalent of the EU's "NIS2" Directive, organisations currently have only the material which the NIS Competent Authorities have published (and updated) to indicate what Critical National Infrastructure organisations must do.
We discuss the Risk Register in further detail in our article, "Cyber risk in Critical National Infrastructure features prominently in UK 2023 National Risk Register".
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.