ARTICLE
16 October 2025

AI In Health And Care: Key Legal Risks To Navigate In Managing Data Privacy

Gowling WLG

Contributor

Gowling WLG is an international law firm built on the belief that the best way to serve clients is to be in tune with their world, aligned with their opportunity and ambitious for their success. Our 1,400+ legal professionals and support teams apply in-depth sector expertise to understand and support our clients’ businesses.

The Government's recent announcement of the UK's Modern Industrial Strategy includes a commitment to establishing artificial intelligence (AI) growth zones to attract investment in AI infrastructure and build capabilities. One of the key sectors set to benefit from this decision, highlighted in our earlier discussion of the strategy, is life sciences and healthcare, as MedTech manufacturers and healthcare service providers continue to embrace AI throughout the industry.

In recent weeks, the Medicines and Healthcare products Regulatory Agency (MHRA) has established the UK National Commission on the regulation of AI in healthcare to develop a new regulatory framework for AI in this sector. The main objective is to transform UK healthcare into the "most AI enabled healthcare system in the world".

In this article, we discuss some of the key risks to address in relation to AI and data privacy in health and care. Taking recent developments into account, we highlight important aspects of current UK data protection and privacy regulations, and share best practices for MedTech manufacturers and healthcare service providers to consider.

The rise of AI in healthcare

The use of medical technology continues to evolve at a rapid pace and was catalysed by the COVID-19 pandemic. These developments have opened people's eyes to the greater use of technology within the sector and normalised the collection and use of personal health data.

Wearable technology, such as 'smart' watches and fitness apps, has also seen a huge increase in popularity and is now a regular part of many people's daily wardrobes. This immediate access to personal health data – the number of steps taken, calories consumed, heart rate and all manner of general health information – has contributed significantly to raising awareness of ongoing healthcare in everyday life and to promoting healthy habits through activity tracking and gamification.

This intersection between monitoring personal health and the use of technology has contributed to greater attention on preventative care among consumers – something reflected in healthcare more generally and in the recently published 10 Year Health Plan for England. This long-term plan focuses on three key shifts:

  • From hospital to community: enabling care closer to home, supported by new diagnostics and digital tools.
  • From analogue to digital: embedding data and AI to improve outcomes and reduce pressure on frontline staff.
  • From sickness to prevention: using genomics, early detection, and personalised medicine to keep people healthier for longer.

Against this backdrop, the level of innovation and investment in healthcare technology is rapidly changing diagnostics, monitoring and treatments. As society gets used to AI and what it is able to achieve, we are also seeing how it can transform healthcare provision and improve patient outcomes. A prime example is blood sugar monitoring through smart devices and apps for people with diabetes – the sort of technology that can be adopted in the monitoring and prevention of health conditions.

Under the move from analogue to digital, there is a drive to implement a national infrastructure for electronic patient records (EPRs) and to integrate data across different care settings. But there is a long way to go: a 2023 survey highlighted that 4% of NHS Trusts used paper records only and 71% used both paper and electronic records. This presents real risks where a patient's full medical record cannot be accessed, but it also presents a great opportunity.

Health records and the processing of health data is an area where AI can make a big impact and bring benefits to both patients and clinicians, but realising those benefits means having strong safeguards in place to manage data securely and prevent exposure to risk. Consumers and patients need assurance about how their data is being used and protected, and healthcare providers will need confidence in the services and technology they are using.

What are the key areas of risk when using AI to process data?

While technology and AI bring huge benefits to the health and care sector, the vast data sets involved and the highly sensitive nature of the data require careful handling. Critical questions include: How is the data collected? Who controls it? Is consent required and, if so, how is it given? How is the data used and processed accurately and responsibly? And what safeguards are in place to protect privacy?

Three key common areas of risk are:

Data protection and avoiding data breaches: The health and care sector, like many industries where large volumes of data are handled, is a prime target for cyberattacks. A breach not only compromises patient confidentiality but can also erode trust in digital health systems and, consequently, impede innovation and advances in care. The legal and reputational consequences of such a breach can be significant.

Algorithmic bias: AI systems trained on incomplete or non-representative data can perpetuate or even exacerbate health inequalities. Without proper measures in place from the outset, the probability of erroneous results increases. For example, an algorithm trained predominantly on data from one ethnic group may underperform for others, because it is learning from data that is not fully representative of the population. This can lead to disparate patient outcomes through missed diagnoses, incorrect risk predictions and so on.

Automated decision making: AI algorithms bring the potential for rapid information processing, with decisions made with little or no human judgement involved. Common areas of risk include bias arising from the quality of training data, lack of transparency, and privacy concerns stemming from extensive data requirements. That lack of transparency can hinder clinical adoption and patient acceptance.

Below, we explore how UK data privacy legislation applies to the use of AI in health and care services, and some of the considerations that can help organisations address these risks.

UK data protection and privacy regulations

In the UK, health-related personal data is one of the types of personal data categorised as "special category personal data" under the UK General Data Protection Regulation (UK GDPR). The UK GDPR's general principles are sector- and technology-agnostic, but because of the sensitive nature of health-related data, UK data protection law imposes additional conditions on its processing. These include the requirement for high levels of security.

The Data (Use and Access) Act 2025 (DUAA) received Royal Assent on 19 June this year and (among other things) makes certain changes to the UK GDPR and the Data Protection Act 2018. Key changes are summarised in our earlier article on the new Act. From a healthcare perspective, these are notable in enhancing healthcare data use by establishing mandatory information standards for health and social care IT systems.

The Act also creates a statutory framework for smart data sharing schemes and establishes a trust framework for digital identity verification services, relevant for secure online access to health information. Other reforms include provisions about data use and access in health and social care, automated decision making (ADM), and broadening what is meant by 'scientific research'.

The updates are particularly relevant in the context of advances in using AI for diagnostics and medical research. AI is already used in vaccine discovery, dramatically decreasing the research time required compared to conventional methods. It is also used by medical practitioners, who benefit from access to far larger volumes of data when researching and diagnosing cancers and rare diseases, for example. AI brings the advantage of obtaining results from a system trained on an extensive bank of data for a specific condition to aid diagnosis.

Fundamental to this kind of research practice is ensuring that transparency, data processing and data security requirements are met – keeping the ultimate focus on safeguarding patient data and improving health outcomes. For public safety and confidence in the technology being used, the data being processed and analysed must, of course, be accurate. It is also important to ensure patients understand how their data is being used and processed, which builds trust and confidence in the outcomes.

Safeguarding from data breaches

When it comes to managing data processing, data security and data breaches, prevention is always better than cure. It's important to keep abreast of the regulatory landscape, understand how current requirements apply to the product/device or service and take a proactive approach to avoiding potential risk areas.

The responsibility to ensure protection from data breaches remains the same for the healthcare sector as in other industries, regardless of the use of AI. However, when applied to healthcare and other sensitive information, the rules require a higher level of technical and organisational security measures to be in place to protect the information.

In the UK, the relevant industry regulators for each sector have oversight in relation to the use of AI. So, for the healthcare sector, various bodies including the Information Commissioner, the Care Quality Commission (CQC) and the MHRA outline requirements and guidance.

As noted above, the MHRA has established the UK National Commission on the regulation of AI in healthcare. We will continue to monitor this area and highlight any developments in future articles.

Whether, or to what extent, the UK adopts any specific AI legislation remains to be seen. Nevertheless, we can expect continued focus on AI by the UK data protection regulator, whether through guidance or enforcement action.

Data protection by design

The UK GDPR requires organisations to consider data protection principles and the safeguarding of individuals' rights before a processing operation commences (and on an ongoing basis after commencement). This is known as 'data protection by design'.

In short, thinking about data protection, compliance and privacy implications right from the outset of a project better ensures that risks are considered and factored into the technology design and processes. This is not just a concept; it is a fundamental part of current UK data protection legislation. When properly applied, it can significantly reduce many of the risks outlined in the previous section.

Product manufacturers and healthcare service providers will need to understand their compliance responsibilities from the outset and ensure they factor into their AI use cases the need for robust security protocols, data privacy and anonymisation (where possible), data protection impact assessments and more. Practical considerations may include providing individuals with information about how their data will be processed, and the use of data sharing agreements between parties delivering AI projects or solutions.
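
By way of illustration only, the short sketch below shows one common privacy-by-design measure: pseudonymising direct identifiers with a keyed hash before patient data enters an AI pipeline. The field names and key handling are hypothetical assumptions for the example, not a prescribed design; a real deployment would need proper key management and would still call for a data protection impact assessment.

```python
import hmac
import hashlib

# Hypothetical secret key - in practice this would come from a managed
# key store, never from source code or the dataset itself.
PSEUDONYMISATION_KEY = b"replace-with-securely-managed-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an NHS number) with a keyed hash.

    Using HMAC rather than a plain hash means the mapping cannot be
    reversed or brute-forced without access to the key, which is held
    separately from the research dataset.
    """
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Illustrative record: the direct identifier is pseudonymised before the
# data is passed to any model-training or analytics pipeline.
patient_record = {"nhs_number": "9434765919", "age_band": "60-69", "hba1c": 48}
training_record = {**patient_record, "nhs_number": pseudonymise(patient_record["nhs_number"])}
```

Note that pseudonymised data generally remains personal data under the UK GDPR, because anyone holding the key could re-identify individuals; the technique reduces risk but does not take the processing out of scope.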

Algorithmic bias

In addition to considerations around security and 'data protection by design', it is important to consider what data the AI system has been trained on, in order to ensure that it is robust, compliant and avoids algorithmic bias. When using patient data to train an AI model, it is essential to consider a number of factors, including: the quality and accuracy of the data, whether individuals are aware their data is being used, how representative the dataset is, and how long the data will be stored.

Issues arising from algorithmic bias include discriminatory outcomes, such as misdiagnosis linked to race or gender, and unequal access to treatment or resources.

AI developers may wish to audit AI systems using sensitive data to detect bias. In doing so, however, they will need to ensure that the processing involved in the audit is justified and safeguarded.
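
As a purely illustrative sketch of what part of such an audit might involve, the code below compares a model's sensitivity (true positive rate) across demographic groups and flags any group that trails the best-performing one by more than a chosen margin. The group labels, sample data and threshold are hypothetical; a real audit would be designed with clinical and statistical input and would cover far more than a single metric.

```python
from collections import defaultdict

def sensitivity_by_group(records, threshold_gap=0.05):
    """Compare true positive rate (sensitivity) across demographic groups.

    Each record is (group_label, actually_positive, predicted_positive).
    Returns per-group sensitivity and flags groups whose sensitivity
    trails the best group by more than `threshold_gap` (a hypothetical margin).
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, actual, predicted in records:
        if actual:
            pos[group] += 1
            if predicted:
                tp[group] += 1
    rates = {g: tp[g] / pos[g] for g in pos if pos[g]}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best - r > threshold_gap}
    return rates, flagged

# Illustrative audit data: (group, actually has condition, model flagged it)
audit_sample = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False), ("group_b", True, False),
]
rates, flagged = sensitivity_by_group(audit_sample)
print(rates)    # per-group sensitivity, e.g. group_a ~0.67, group_b ~0.33
print(flagged)  # groups whose sensitivity materially trails the best group
```

As the article notes, running such an audit on sensitive data is itself processing that must be justified and safeguarded.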

Automated decision making

The UK GDPR applies across all sectors and types of organisation – but with additional safeguards, as noted above, for special category personal data. For AI, the UK GDPR's Article 22 is particularly relevant: with some exceptions, it grants individuals the right not to be subject to decisions based solely on automated processing.

Automated decision making in healthcare has the potential to deliver enormous benefits in terms of efficiency and quality. Examples of automated decision making could include risk stratification, predictive diagnostics and resource allocation. These decisions significantly affect individual patients and so the Article 22 protections are especially relevant.

Under existing law, there are requirements around both transparency and accountability when using healthcare data and processing it via AI systems. Alongside the points raised above in relation to training data to minimise algorithmic bias, having human oversight and intervention mechanisms in place is crucial. UK data protection law contains complex rules on when automated decision making can occur lawfully. Even where it is permitted under the legislation, patients still have the right to request human intervention.

By its nature, an AI model may produce more accurate results than a human, assuming it is trained on a large set of accurate, representative data. It is important when processing data in this way that there is transparency around the logic used in any automated decision, and that the significance and potential consequences of the resulting decisions are explained fully. Patients should be given 'meaningful information about the logic involved, as well as the significance and the envisaged consequences' of the automated decision making, so that they understand how the decision-making system works.

Where human intervention occurs, it must be meaningful – using the AI output and drawing on wider context and expertise to make an informed final judgement. The process should not be reduced to a tick-box exercise in which, because the 'computer said no', the answer must be no.
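
To make that concrete, the hypothetical sketch below shows one way of structuring a human-in-the-loop workflow so that the model's output informs, rather than replaces, a clinician's decision. The data structures and routing rule are illustrative assumptions only, not a prescribed or compliant-by-default design.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelOutput:
    patient_ref: str    # pseudonymised reference, not a direct identifier
    risk_score: float   # model's predicted risk, 0.0-1.0
    rationale: str      # human-readable summary of the main factors

@dataclass
class Decision:
    outcome: str
    decided_by: str     # always a named clinician, never "system"
    clinician_notes: Optional[str] = None

def decide(output: ModelOutput, clinician_review: Callable[[ModelOutput], dict]) -> Decision:
    """Route every model output through a clinician before a decision is made.

    The model's score and rationale inform the review; the clinician can
    depart from the model, and their reasoning is recorded. Nothing is
    finalised solely by automated processing (cf. UK GDPR Article 22).
    """
    review = clinician_review(output)  # clinician sees the score and rationale
    return Decision(outcome=review["outcome"],
                    decided_by=review["clinician_id"],
                    clinician_notes=review.get("notes"))

# Stub review function standing in for a real clinical review interface.
def example_review(output: ModelOutput) -> dict:
    return {"outcome": "refer for specialist assessment",
            "clinician_id": "dr_example",
            "notes": f"Model score {output.risk_score:.2f}; agreed with rationale."}

decision = decide(ModelOutput("pseudo-123", 0.82, "elevated HbA1c trend"), example_review)
```

The design point is that a named clinician's judgement, recorded with reasons, always sits between the model's output and the final decision – so departing from the model is both possible and auditable.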

Alongside Article 22, the UK Information Commissioner's Office (ICO) has issued specific guidance on AI and data protection, emphasising fairness, transparency and accountability. It encourages organisations to conduct Data Protection Impact Assessments (DPIAs) – and in some cases they will be required – which help to identify and minimise privacy risks associated with processing this category of data. Risk mitigation measures should include considering the extent to which the organisation can anonymise or pseudonymise data. Other measures might include, for example, bias audits to regularly test AI systems for bias, and transparent consent processes.

Building trust through transparency

Public trust in AI will grow over time, and a key element of building trust is transparency. The more patients are informed about the process – how an AI model has been developed and how decisions are made – the greater the trust in the end results. Trust is built slowly and destroyed quickly, so it is important to comply with the standards set and to follow best practice. The end goal is to continue driving innovation forward and creating more positive outcomes for patients.

What do current data privacy requirements mean for your AI project?

In such a rapidly evolving area as AI, a key challenge for MedTech manufacturers and healthcare providers is understanding how the existing data privacy and data protection legislation maps on to what they're trying to achieve. UK data protection law provides a consistent approach to compliance that is applicable across all areas. Taking these requirements on board, as well as those from the industry-specific regulators, requires close examination and the ability to understand how the law applies to your individual project and circumstances. Seeking legal advice early is important to ensure key obligations, compliance and additional guidance are factored in at the design stage – helping avoid potential issues down the line.

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
