21 May 2024

MHRA Sets Out Its AI Regulatory Strategy

Arnold & Porter

The UK Medicines and Healthcare products Regulatory Agency (MHRA) has published its strategic approach to artificial intelligence (AI). The publication responds to the 1 February 2024 request from the Secretaries of State for Science, Innovation and Technology (DSIT) and Health and Social Care (DHSC), in which the MHRA was asked to set out the steps it is taking in line with the principles and expectations of the Government's pro-innovation approach to AI regulation described in its 2023 white paper. Further information is set out in our previous post.

The strategy provides information on how the MHRA views the risks and opportunities of AI from three perspectives:

  • MHRA as a regulator of AI products
  • MHRA as a public service organisation delivering time-critical decisions
  • MHRA as an organisation that makes evidence-based decisions that impact on public and patient safety, where that evidence is often supplied by third parties

The document is likely to be of particular interest to manufacturers of AI as a medical device (AIaMD), as it sets out in detail the current and proposed regulations and guidance, and the areas where requirements are likely to be tightened. Following the launch of the strategic approach, the government also published details of the AI Airlock regulatory sandbox, discussed in another post.

With a raft of measures relating to AI being published and additional measures expected in the next couple of years, pharmaceutical and medical device companies operating in the UK need to continually review how they will be impacted and respond appropriately.

1. Regulation of AIaMD products

The MHRA confirms that AIaMD products must conform to the UK Medical Devices Regulations 2002 (or corresponding EU law during the transitional period) before they can be placed on the market in the UK. These regulations are supplemented by guidance, including Medical devices: software applications (apps). The UK also chairs the International Medical Device Regulators Forum (IMDRF) working group on AI and machine learning enabled medical devices, with the aim of promoting greater harmonisation of regulation across the world.

The UK regulations are being updated following Brexit. Under the revised regulations, the MHRA states that many AI products currently in the lowest-risk category (Class I) "will be up-classified" to provide greater scrutiny throughout the product lifecycle. However, it adds that it is mindful of the need to adopt "a proportionate approach" for AIaMD, following its approach in the 2021 Software and AI as a Medical Device Change Programme Roadmap.

The MHRA also proposes to publish clear guidance on cyber security by spring 2025.

To promote transparency and explainability of AI, the MHRA requires manufacturers to provide a clear statement of the intended purpose of the device, and has published guidance on crafting an intended purpose in the context of software as a medical device. In spring 2025, it will also supplement its existing guidance on applying human factors to medical devices.

To ensure fairness, the MHRA highlights the need to ensure equitable access, following the Independent Review of Equity in Medical Devices. It encourages manufacturers to refer to ISO/IEC TR 24027:2021 (Information technology, Artificial intelligence (AI), Bias in AI systems and AI aided decision making) and relevant IMDRF guidance.

The new regulations will also strengthen and clarify the obligations of manufacturers, conformity assessment bodies, the MHRA and other economic operators in the supply chain, reinforcing accountability and governance. For AIaMD, the MHRA, together with its partner agencies in the US and Canada, recently published guiding principles for the use of Predetermined Change Control Plans (PCCPs) to enable full traceability and accountability of manufacturers for how AI models meet their intended use, as well as for the impact of changes. It intends to introduce PCCPs in the future core regulations, initially on a non-mandatory basis.

New regulations due by the summer will also strengthen legal requirements for manufacturers to report incidents to the MHRA.

2. MHRA as a public service organisation

The MHRA says it is exploring the use of supervised machine learning in its initial assessment of the completeness, consistency and quality of documents submitted as part of applications for marketing authorisation or approval "to provide a score or a recommendation for each criterion or standard".

The MHRA is also developing a data strategy, which will cover the safe and responsible application of advanced analytics and AI within the business. This includes a deliverable on the use of large language models and generative AI across the business.

However, the MHRA notes that there are "significant uncertainties" around best practice for the use of large language models and generative AI. It will explore the role of analytical approaches to "enhance and extend" the generation of actionable insights from real-world data, as well as the role of relevant methodologies in vigilance systems.

To help protect consumers from fraudulent medicinal products, the MHRA is developing a Medicines Website Checking tool to allow reporting of platforms that are suspected of selling fake or illegal medicines and medical devices. It is also beginning to prototype products with AI in this area.

3. Evidence-based decisions

To regulate effectively, the MHRA says it needs to understand how organisations are using AI in undertaking their activities. It is collaborating with other international regulatory bodies and the pharmaceutical industry, through the Council for International Organizations of Medical Sciences (CIOMS), to develop best practice in the use of AI across organisations.

The MHRA says that AI may affect the pace at which new medicines can be developed, impact clinical trial design and enable personalised medicines. The MHRA will ensure that its "regulatory pathways are sufficiently agile and robust to respond to these changes".

Publication of the ICO's strategic approach to AI

On 1 May 2024, the Information Commissioner's Office (ICO) published its strategic approach to AI. Like the MHRA's publication, it responds to the 1 February 2024 letter from the Secretaries of State for DSIT and DHSC. The ICO explains that the principles set out in the Government's white paper already largely mirror the data protection principles it regulates, and describes the work it has done, and has planned, to implement them. This includes the publication of guidance on how data protection law applies to AI; the provision of advice and support for AI innovators through its regulatory sandbox, innovation advice service, innovation hub and consensual audits; and the use of its regulatory action and enforcement powers to promote compliance and safeguard the public.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
