The United Kingdom’s approach to regulating AI emphasises the role of sector-specific regulators rather than a single, overarching regulatory framework, so this question is particularly apt for the United Kingdom. In short, AI is receiving considerable attention from sectoral regulators at the moment and the situation is moving quickly.
(a) Healthcare
In healthcare, the UK regulatory framework is complex. It has been described as a “bewildering array of bodies for innovators to navigate”. Given the advantages of a coherent regulatory environment, however, there are efforts to coordinate and simplify the landscape so as to create clear pathways for AI companies to obtain regulatory approval for their AI systems.
Regulatory complexity is therefore a genuine legal issue at the moment. Given the obvious sensitivities involved in healthcare AI, other central legal issues include privacy, data ethics, transparency and accountability.
Key regulatory bodies include:
- the Medicines and Healthcare products Regulatory Agency, which regulates medicines and medical devices;
- the National Institute for Health and Care Excellence, which provides high-level guidance on improving health and social care in the United Kingdom; and
- the Care Quality Commission, which regulates the provision of health and social care services in the United Kingdom.
(b) Security and defence
A key trend in security and defence is the UK government’s increasingly cautious approach to foreign investment and national security, which has significant implications for AI as well as the broader technology sector.
Historically, the UK government has taken a permissive approach to foreign investment in UK industry. The United Kingdom has been an outlier among western jurisdictions in not having a standalone foreign investment regime comparable to the Committee on Foreign Investment in the United States or the European Union’s Foreign Direct Investment Regulation.
Increased geopolitical tensions in recent years – demonstrated, for example, in the decision taken in mid-2020 to ban Huawei technology from the United Kingdom’s 5G infrastructure – have led to a more cautious approach.
In November 2020, the UK government announced far-reaching proposals in the form of the National Security and Investment Bill, which marked a step change in approach. The bill, which is currently going through the UK legislative process, outlines a strict mandatory notification procedure and broad powers for government to ‘call in’ sensitive transactions involving foreign investors.
Importantly for would-be non-UK investors in UK AI businesses, AI is specifically listed as a sector subject to mandatory notification, alongside autonomous robotics, cryptographic authentication and quantum technologies.
(c) Autonomous vehicles
Reforming the United Kingdom’s legal and regulatory environment to promote the development and adoption of connected and autonomous vehicles (CAVs) has been a UK government priority area for some years now.
Early progress was made with the enactment of the Automated and Electric Vehicles Act 2018, which made changes to the United Kingdom’s compulsory motor vehicle insurance regime to enable CAVs to be insured like conventional motor vehicles.
In Q4 2021, the Law Commission of England and Wales and the Scottish Law Commission are expected to deliver the final report in their three-year review of the United Kingdom’s legal framework for CAVs. The conclusions of this review are likely to set the agenda for legal reform in this area in the years to come.
The Centre for Connected and Autonomous Vehicles – a joint unit of the UK government’s Department for Transport and Department for Business, Energy & Industrial Strategy – has a broad mandate to promote the United Kingdom’s CAV ecosystem. On the regulatory side, its early work has included simplifying the rules around CAV testing on UK roads.
(d) Manufacturing
When products are manufactured and placed on the market in the United Kingdom, they generally fall within the scope of the United Kingdom’s product safety legislation. This remains true when AI is incorporated into those products.
In simple terms, the United Kingdom’s product safety legislation sets out a framework of standards and requirements that products must meet, as well as rules relating to traceability, responses where a product is found to be unsafe and the powers of authorities to take enforcement action.
Key legislation includes the following:
- The General Product Safety Regulations 2005 (GPSR) apply to consumer products not otherwise addressed by sector-specific product legislation. A key feature of the GPSR is an obligation not to place a consumer product on the UK market unless that product is safe; and
- Sector-specific product legislation applies both to consumer and non-consumer products (eg, the Electrical Equipment (Safety) Regulations 2016 and the Toys (Safety) Regulations 2011).
Brexit will play an important role here and there is likely to be some regulatory divergence between the European Union and the United Kingdom in the manufacturing sector. Examples include the following:
- CE marking versus UK Conformity Assessed (UKCA) marking: As a result of Brexit, the United Kingdom is phasing out the CE mark and introducing the UKCA mark; and
- Conformity assessments for AI were a feature of the European Commission’s February 2020 AI White Paper; the UK government has not announced a similar intention.
(e) Agriculture
Post-Brexit agricultural policy reform provides the backdrop to AI in UK agriculture. The key legislative development here is the Agriculture Act 2020, which passed into law in November 2020.
The Act establishes a roadmap to introduce in England a replacement for the European Union’s Common Agricultural Policy, which has driven the funding of UK farms since the United Kingdom’s accession to the European Economic Community in 1973. The replacement policy, to be phased in over the period 2021–2028, will pay farmers to produce ‘public goods’ such as environmental or animal welfare improvements. The Agriculture Act also introduces wider measures, such as improving fairness in the agricultural supply chain and the operation of agricultural markets.
Separately, the UK government is funding food production initiatives as part of its industrial strategy – with a good example being the Transforming Food Production Challenge, which aims essentially to produce more food with less environmental impact.
These developments set the scene for a promising period of AI in UK agritech.
(f) Professional services
The United Kingdom’s strong professional services sector has influential regulators which are keenly aware of the opportunities that AI presents to the businesses they regulate, as well as the risks for their clients.
Taking UK solicitors as an example: the Legal Services Board (LSB) oversees the regulation of legal services in England and Wales. The LSB supervises the ‘approved regulators’ of the legal profession, of which the Solicitors Regulation Authority (SRA) is the primary regulator of solicitors.
At present, the SRA does not impose any AI-specific regulatory requirements on solicitors or their firms. The relevant parts of the SRA’s Standards and Regulations – its core regulatory texts – are the same seven overarching principles and parts of the SRA Codes of Conduct that apply generally.
The use of AI by solicitors is therefore subject to the SRA’s more general conduct-of-business rules, such as rules requiring solicitors’ firms to:
- manage material business risks;
- supervise work undertaken by others (including third-party contractors); and
- comply with client transparency requirements.
(g) Public sector
The sizeable buying power of the UK public sector has led it to play a leading role in developing and implementing practical approaches to AI ethics and governance.
Key recent publications include:
- NHSX’s A Buyer’s Guide to AI in Health and Care, published in November 2020 (NHSX is the UK National Health Service’s digital transformation unit);
- the Office for AI’s Guidelines for AI Procurement, published in June 2020; and
- the Government Digital Service and the Office for AI’s Guide to Using AI in the Public Sector, published in January 2020.
These publications are readily available online and, with some adaptation, are helpful guides for private sector enterprise.
Another important aspect of the public sector’s role is its position as custodian of the United Kingdom’s vast public datasets. Here, the Re-use of Public Sector Information Regulations 2015 (RPSI) are significant. Broadly, the RPSI are intended to encourage the reuse of public sector information for both commercial and non-commercial purposes. AI thrives on ready access to high-quality datasets, which the RPSI aim to promote.
(h) Other
In addition to the sectors covered above, AI has important implications for other UK sectors, including the following:
- Financial services: A traditional strength of the UK economy, the depth of financial expertise in London has been a boon for UK fintech. AI has significant implications for insurance, consumer credit, compliance functions, fraud prevention and anti-money laundering (among others).
- Digital marketing: AI can facilitate targeting and predictive advertising, content creation and web search advertising. Privacy and personal data are key issues in this area.
- Education: The 2020 UK GCSE and A-level exam grading controversy, in which an algorithm used to standardise teacher-assessed grades was ultimately abandoned following public outcry, illustrates some classic algorithmic bias and transparency risks.