Comparative Guides

Our Comparative Guides provide an overview of some of the key points of law and practice and allow you to compare regulatory environments and laws across multiple jurisdictions.

Artificial Intelligence

1. Legal and enforcement framework
1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

The starting point of the legal analysis for AI is the application of developing legal norms around software and data. “It’s only AI when you don’t know what it does; then it’s just software and data” is a useful heuristic.

As this suggests, AI law does not yet exist as a distinct field, though much of the foundational work is underway and a specific body of AI law will emerge in the coming years and decades.

For now, the key areas are as follows:

  • Data protection: The impact of data protection on AI that uses personal data is already profound, thanks in large part to the UK General Data Protection Regulation.
  • Intellectual property: Various forms of IP rights protect AI investments and innovations. Copyright protects the computer code in which an AI system is written; patents protect AI inventions; and database right can protect the datasets that power AI algorithms.
  • Contract: Contracting parties agree rights and obligations in relation to AI projects.
  • Tort: Outside regulatory and statute law, the common law area of tort is most likely to see important AI-influenced developments.
  • Sector-specific regulation: Currently, the approach to AI regulation in the United Kingdom is trending towards sector-specific rules rather than overarching frameworks. Good examples to date include transport and healthcare.

1.2 How is established or ‘background’ law evolving to cover AI in your jurisdiction?

English law has always evolved incrementally, adapting flexible foundational principles to new technologies as they emerge. AI will not be an exception to this rule.

However, at this early stage, there are still important questions around how the United Kingdom’s legal response to AI will be implemented in practice. Will it be left to the courts to respond to the issues on a case-by-case basis? Or will there be a need for new legislation and regulation? The answer is not yet clear, but it will probably involve both.

Currently, ‘background’ law is evolving to set the scene for the anticipated widespread adoption of AI in the future. Illustrative examples include the following:

  • The Automated and Electric Vehicles Act 2018 makes changes to the United Kingdom’s compulsory motor vehicle insurance regime to enable connected and autonomous vehicles to be insured like conventional vehicles.
  • Regulatory ‘sandboxes’, such as the Financial Conduct Authority’s (FCA) fintech sandbox, aim to provide a controlled environment to test new fintech products.
  • Sector-specific AI labs, such as the UK National Health Service’s AI Lab, combine ‘incubator’ style support with a focus on clarifying often complex regulatory frameworks, so that AI products have a clear and safe route to market.

Despite these early developments, it is important to recognise that the roadmap for the evolution of background law in the United Kingdom (as in many other jurisdictions) is still uncertain.

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

Yes. Negligence under English law centres on the existence of a duty at common law “to be careful”. The list of situations giving rise to a duty of care in English law is famously not fixed: in the words of the House of Lords in the United Kingdom’s leading case, “the categories of negligence are never closed”. It is hard to imagine that a common law duty of care will not arise in relation to many, or most, kinds of AI.

Beyond the categories of negligence, AI will have significant effects on other aspects of negligence. A good example is the standard of care – the level of carefulness required when a duty of care is present.

Take the field of medicine as an example: in the future, in a world where an AI system’s ability to diagnose a particular disease surpasses the ability of the average doctor, how should the standard of care be measured? If it becomes the norm for doctors to use AI tools in diagnostic procedures because they are that much more accurate, will a doctor who refuses to do so and then fails to spot an issue be negligent because the reasonable thing to have done would have been to use the AI tool?

The short answer is that AI will stretch the English law of negligence in unpredictable ways as it gets more widely adopted.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape’ and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

Nuisance and escape (Rylands v Fletcher) liability is based on interference with the use or enjoyment of land, and so is likely to be relevant for robots, autonomous vehicles and other kinds of mobile AI in particular.

If a robot runs amok, the situation may be analogised to straying animals, where liability has been codified by statute in the Animals Act 1971, Section 4 of which imposes strict liability for straying animals. This is a possible avenue for the statutory regulation of AI in due course; but for the moment, one can easily imagine the common law being extended to treat AIs that cause unreasonable annoyance to a neighbour as nuisance in the same way as for animals.

The rule in Rylands v Fletcher is, in essence, that if someone “brings on his lands … anything likely to do mischief if it escapes [he is…] answerable for all damage which is the natural consequence of its escape”. The principle has been applied to motor vehicles and electricity, but not to an aircraft or a cricket ball driven out of the ground. Extending Rylands v Fletcher escape liability in tort to AI would therefore appear to be a relatively simple step, consistent with past decisions.

1.5 Do any special regimes apply in specific areas?

Yes. The high-level view is that the United Kingdom is currently taking a sector-specific approach to regulating and legislating for AI, as opposed to imposing an overarching framework. This was neatly articulated in a UK House of Lords AI report in December 2020, which concluded that: “The challenge posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation.”

The dominant perspective in UK government is that the regulation of AI should be left to sector-specific regulators which are better placed than central government to identify gaps in regulation and to learn about AI and apply it to their sectors.

An obvious result of this is that incremental changes are being implemented on a sector-by-sector basis in the United Kingdom. In theory, the advantages of this approach are that:

  • it avoids heavy-handed, inappropriate regulation; and
  • it lets nimble regulators experiment in their specialist areas.

The disadvantage – likely to become increasingly apparent as the AI sector grows – is that it produces a patchwork of potentially inconsistent regulatory regimes.

We look at some of the more developed sectors in detail in our answer to question 3.1.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

Yes. Overarching international instruments are important – particularly as the world’s superpowers look to shape AI in ways that protect and enhance their interests. The influence of these instruments on ‘big-picture’ AI will be profound over the longer term, but for now their day-to-day relevance is more limited. Important examples include the following:

  • Organisation for Economic Co-operation and Development (OECD) Principles on AI: The OECD has established a non-binding but influential series of five principles for “the responsible stewardship of trustworthy AI”, set out in May 2019. In summary, the principles are:
    • inclusive growth, sustainable development and wellbeing;
    • human-centred values and fairness;
    • transparency and explainability;
    • robustness, security and safety; and
    • accountability.
  • G20 AI Principles: Drawn from the OECD’s Principles, this is a series of AI principles adopted by the G20 in June 2019.
  • The Global Partnership on AI: This is an international body consisting of the G7 nations and several others, established in June 2020, which aims to support the development of AI “in a manner consistent with human rights, fundamental freedoms and our shared democratic values”.
  • The US/UK declaration on cooperation in AI R&D: Signed in September 2020, this is a statement broadly aimed at promoting AI R&D in ways that promote “the mutual wellbeing, prosperity, and security of present and future generations”.

While the practical impact of these instruments is unclear, their common theme is to present AI as a tool for broadly ‘democratic’ objectives. Without naming any names, there is a clear criticism – implied and explicit – of AI when used for ‘authoritarian’ and ‘repressive’ ends.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

There is currently no specific AI regulator in the United Kingdom. Instead, there is a developing ecosystem of organisations whose regulatory powers touch AI indirectly and which are more generally involved in the development of AI. Key organisations include the following:

  • The Information Commissioner’s Office (ICO): The role of the ICO in enforcing data protection legislation in the United Kingdom makes it centrally important for AI at the moment (see also question 4.1 on the UK General Data Protection Regulation). The ICO has powers to issue fines for breaches of data protection legislation, as well as broad powers to investigate, prosecute and censure.
  • The Competition and Markets Authority (CMA): The UK competition regulator is currently focusing on the potential that AI algorithms have to reduce competition and harm consumers, publishing a significant paper on the subject in Q1 2021. This work is likely to be preparatory to the CMA taking an active role in enforcing competition and consumer protection laws as regards AI in the coming years.
  • The Centre for Data Ethics and Innovation, the AI Council and the Office for AI: These are three AI-focused bodies established in 2018 with responsibilities relating to the development of AI in the United Kingdom. Although influential, they do not have statutory powers as yet.
  • Sector-specific regulators: These include those responsible for the financial sector (the Financial Conduct Authority and the Prudential Regulation Authority), telecoms (the Office of Communications), healthcare (the Care Quality Commission) and law (the Solicitors Regulation Authority and the Bar Standards Board), each with its own specific powers.

1.8 What is the general regulatory approach to AI in your jurisdiction?

There is a fairly widely held view among UK lawmakers that it is still too early to be thinking about overarching rules for AI. The risk is that early, cross-cutting rules create a framework that ends up being distortive or even out of date by the time it is implemented.

That said, there is also a recognition that profound changes to UK law will be required to keep pace with AI.

For now, the approach is to allow sector-specific regulators which are closer to particular industry sectors to develop rules for their areas. Recent progress in connected and autonomous vehicles is an excellent example of this.

2. AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

We are still in the foothills of AI adoption, so the better way to consider this in the United Kingdom is to think of ‘potential’ rather than ‘embeddedness’.

A unique characteristic of the United Kingdom’s AI potential lies in the UK government’s vast and comprehensive public datasets.

Perhaps the best example here is the UK National Health Service’s health data: widely believed to be the biggest and most comprehensive health dataset in the world, it tracks the health of UK citizens from birth to death. Other examples include mapping, census and meteorological data – and, for fintechs, open banking data.

These vast pools of data are the lifeblood of many UK AI companies, whose business in simple terms revolves around using algorithms to generate insights and predictions from data.

There is a strong correlation between data availability and AI potential. AI in healthcare is growing – for instance, with the National Health Service rolling out AI applications in areas such as diagnostics and radiology. Bolstered by London’s traditional strength in financial services and a proactive regulatory environment, UK fintech AI is growing in areas such as back-office process automation, compliance and algorithmic trading.

Over the longer term, access to comprehensive, high-quality data is also likely to drive the change from ‘potential’ to ‘embeddedness’.

2.2 What AI-based products and services are primarily offered?

The first point to make here is that AI in the United Kingdom is still at a fundamentally disruptive stage. While some AI use cases are already relatively well established (internet search, voice assistants, image recognition), others are running up against current limitations in technology (driverless cars) and questions about the ethics of their adoption (facial recognition). So, there is a degree of volatility when it comes to assessing the most established forms of AI in the United Kingdom.

Looking at the question another way, the top three sectors for tech venture capital investment in London in 2020 were as follows:

  • fintech (41%);
  • enterprise software (17%); and
  • transportation (14%).

Beyond this, the health, energy, cybersecurity, food and real estate sectors all took between 5% and 7% of investment. A reasonable assumption is that current investment priorities give an indication of future rates of adoption.

2.3 How are AI companies generally structured?

Once the founders decide to incorporate, an AI company in the United Kingdom typically starts out life as a ‘private company limited by shares’, otherwise known as a ‘limited company’. There are numerous advantages:

  • Incorporation is a straightforward process;
  • The founders’ liability is limited;
  • Disclosure obligations are limited, particularly while the company is small;
  • It is easier to attract finance (both debt and equity); and
  • It is easier for the founders to apportion their ownership of the company.

As the company grows, its corporate structure is likely to become more complex. For instance, international expansion may bring with it local law requirements to incorporate foreign subsidiaries. The company may decide to shield important assets such as intellectual property in non-trading entities to protect them from operational risks. The company may decide to incentivise staff with equity, which could lead to the issuance of new classes of shares such as non-voting, preference and redeemable shares.

When the company takes in outside investment, its corporate structure will grow still further. Investors will have their own priorities, such as structuring their investments in a tax-efficient way and preserving their priority as creditors if the company fails. The nature of the investor will also affect this. For instance, strategic investors (eg, corporates) will have different objectives from financial investors (eg, private equity).

Later in the company’s lifecycle, other corporate actions may become relevant, such as corporate re-organisations, acquisitions, divestments and initial public offerings.

2.4 How are AI companies generally financed?

In the United Kingdom, the financing of AI companies follows the investment patterns of the broader technology sector. Generally, investment in the technology sector is strong and has remained so despite the COVID-19 pandemic. London is the leading city for technology investment in Europe, accounting for more than 25% of all European venture capital investment in 2020.

Typically, the bigger the funding requirements of an AI company in the United Kingdom, the more international the investor base. In 2019/20, London start-ups in seed funding rounds (ie, rounds of $1 million to $4 million) obtained about 60% of funding from UK and European investors.

By contrast, London companies in later-stage ‘megaround’ funding (ie, rounds of $250 million-plus) obtained only 17% of funding from the United Kingdom and Europe, with nearly 80% coming from Asia, the United States and Canada. As the London technology start-up sector matures, ‘megaround’ funding is becoming more common.

As well as private capital, the UK government’s focus on AI means that public funding is also available for early-stage AI companies in some contexts. An AI company looking to grow in the United Kingdom would be well advised to explore these options at an early stage.

2.5 To what extent is the state involved in the uptake and development of AI?

The UK government is supportive of AI innovation in both the UK public and private sectors. One report puts the United Kingdom second globally (behind the United States) in terms of governmental readiness to adopt AI (see the Oxford Insights AI Readiness Index 2020).

This translates into AI being front and centre of the United Kingdom’s industrial strategy. In its 2017 Industrial Strategy White Paper, the UK government identified “AI and data” as one of four “Grand Challenges” (policy focal points). In April 2018, the UK government announced a significant investment programme as part of its AI Sector Deal.

In the longer term, the UK government also aims to support AI by boosting research and development investment generally – from about 1.7% of GDP currently to 2.4% by 2027.

3. Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

The United Kingdom’s approach to regulating AI emphasises the role of sector-specific regulators as opposed to a single, overarching regulatory framework, so this question is particularly apt for the United Kingdom. In short, AI is getting a lot of attention from sectoral regulators at the moment and the situation is moving quickly.

(a) Healthcare

In healthcare, the UK regulatory framework is complex. It has been described as a “bewildering array of bodies for innovators to navigate”. Given the advantages of creating a coherent regulatory environment, however, there are efforts to coordinate and simplify to create clear pathways for AI companies to obtain regulatory approval for their AI systems.

Regulatory complexity is therefore a genuine legal issue at the moment. Given the obvious sensitivities involved in healthcare AI, other central legal issues include privacy, data ethics, transparency and accountability.

Key regulatory bodies include:

  • the Medicines and Healthcare Products Regulatory Agency, which regulates medicines and medical devices;
  • the National Institute for Health and Care Excellence, which provides high-level guidance on improving health and social care in the United Kingdom; and
  • the Care Quality Commission, which regulates the provision of health and social care services in the United Kingdom.

(b) Security and defence

A key trend in security and defence is the UK government’s increasingly cautious approach to foreign investment and national security, which has significant implications for AI as well as the broader technology sector.

Historically, the UK government has taken a permissive approach to foreign investment in UK industry. The United Kingdom has been an outlier among western jurisdictions in not having standalone foreign investment rules, such as the Committee on Foreign Investment in the United States and the European Union’s Foreign Direct Investment Regulation.

Increased geopolitical tensions in recent years – demonstrated, for example, in the decision taken in mid-2020 to ban Huawei technology from the United Kingdom’s 5G infrastructure – have led to a more cautious approach.

In November 2020, the UK government announced far-reaching proposals in the form of the National Security and Investment Bill, which marked a step change in approach. The bill, which is currently going through the UK legislative process, outlines a strict mandatory notification procedure and broad powers for government to ‘call in’ sensitive transactions involving foreign investors.

Importantly for would-be non-UK investors in UK AI, AI is specifically called out as a sector facing mandatory notification, along with autonomous robotics, cryptographic authentication and quantum technologies.

(c) Autonomous vehicles

Reforming the United Kingdom’s legal and regulatory environment to promote the development and adoption of connected and autonomous vehicles (CAVs) has been a UK government priority area for some years now.

Early progress was made with the enactment of the Automated and Electric Vehicles Act 2018, which made changes to the United Kingdom’s compulsory motor vehicle insurance regime to enable CAVs to be insured like conventional motor vehicles.

In Q4 2021, the Law Commission of England and Wales and the Scottish Law Commission are expected to provide the final report in their three-year review of the United Kingdom’s legal framework for CAVs. The conclusions of this review are likely to set the agenda for legal reform in this area in the years to come.

The Centre for Connected and Autonomous Vehicles – a joint unit of the UK government’s Department for Transport and Department for Business, Energy & Industrial Strategy – has a broad mandate to promote the United Kingdom’s CAV ecosystem. On the regulatory side, its early work has included simplifying the rules around CAV testing on UK roads.

(d) Manufacturing

When products are manufactured and placed on the market in the United Kingdom, they generally fall within the scope of the United Kingdom’s product safety legislation. This remains true when AI is incorporated into those products.

In simple terms, the United Kingdom’s product safety legislation sets out a framework of standards and requirements that products must meet, as well as rules relating to traceability, responses if a product is found not to be safe, and the powers of authorities to take action.

Key legislation includes the following:

  • The General Product Safety Regulations 2005 (GPSR) apply to consumer products not otherwise addressed by sector-specific product legislation. A key feature of the GPSR is an obligation not to place a consumer product on the UK market unless that product is safe; and
  • Sector-specific product legislation applies both to consumer and non-consumer products (eg, the Electrical Equipment (Safety) Regulations 2016 and the Toys (Safety) Regulations 2011).

Brexit will play an important role here and there is likely to be some regulatory divergence between the European Union and the United Kingdom in the manufacturing sector. Examples include the following:

  • CE marking versus UK Conformity Assessed (UKCA) marking: As a result of Brexit, the United Kingdom is phasing out the CE mark and introducing the UKCA mark; and
  • Conformity assessments for AI were a feature of the European Commission’s February 2020 AI White Paper; the UK government has not announced a similar intention.

(e) Agriculture

Post-Brexit agricultural policy reform provides the backdrop to AI in UK agriculture. The key legislative development here is the passing into law of the Agriculture Act 2020 in November 2020.

The Act establishes a roadmap to introduce in England a replacement for the European Union’s Common Agricultural Policy, which has driven the funding of UK farms since the United Kingdom’s accession to the European Community in 1973. The replacement policy, to be phased in over the period 2021–2028, will pay farmers to produce ‘public goods’ such as environmental or animal welfare improvements. The Agriculture Act also introduces wider measures, such as improving fairness in the agricultural supply chain and the operation of agricultural markets.

Separately, the UK government is funding food production initiatives as part of its industrial strategy – with a good example being the Transforming Food Production Challenge, which aims essentially to produce more food with less environmental impact.

These developments set the scene for a promising period for AI in UK agritech.

(f) Professional services

The United Kingdom’s strong professional services sector has influential regulators which are keenly aware of the opportunities that AI presents to the businesses they regulate, as well as the risks for their clients.

Taking UK solicitors as an example: the Legal Services Board (LSB) oversees the regulation of lawyers in England and Wales. The LSB supervises eight ‘approved regulators’, of which the Solicitors Regulation Authority (SRA) is the primary regulator of solicitors.

At present, the SRA does not impose any AI-specific regulatory requirements on solicitors or their firms. The relevant parts of the SRA’s Standards and Regulations – its core regulatory texts – are the same seven overarching principles and parts of the SRA Codes of Conduct that apply generally.

The use of AI by solicitors is therefore subject to the SRA’s more general conduct of business-type rules, such as rules requiring solicitors’ firms to:

  • manage material business risks;
  • supervise work undertaken by others (including third-party contractors); and
  • comply with client transparency requirements.

(g) Public sector

The sizeable buying power of the UK public sector has led to it playing a leading role in the development and implementation of practical approaches to AI ethics and governance frameworks.

Key recent publications include:

  • NHSX’s A Buyer’s Guide to AI in Health and Care, published in November 2020 (NHSX is the UK National Health Service’s digital transformation unit);
  • the Office for AI’s Guidelines for AI Procurement, published in June 2020; and
  • the Government Digital Service and the Office for AI’s Guide to Using AI in the Public Sector, published in January 2020.

These publications are readily available online and, with some adaptation, are helpful guides for private sector enterprise.

Another important aspect is the public sector’s role as custodian of the United Kingdom’s huge public datasets. Here, the Re-use of Public Sector Information Regulations 2015 (RPSI) are significant. Broadly, the RPSI are intended to encourage the reuse of public sector information, for both commercial and non-commercial purposes. AI thrives on ready access to high-quality datasets, which the RPSI aim to promote.

(h) Other

In addition to the sectors covered above, AI has important implications for other UK sectors, including the following:

  • Financial services: A traditional strength of the UK economy, the depth of financial expertise in London has been a boon for UK fintech. AI has significant implications for insurance, consumer credit, compliance functions, fraud prevention and anti-money laundering (among others).
  • Digital marketing: AI can facilitate targeting and predictive advertising, content creation and web search advertising. Privacy and personal data are key issues in this area.
  • Education: The 2020 UK GCSE and A-Level exam grading controversy illustrates some classic algorithmic bias and transparency risks.

4. Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The UK General Data Protection Regulation (GDPR) is the driving force of the United Kingdom’s data protection regime. The key point is that UK GDPR treats personal data used in AI like any other personal data. So the usual principles-based UK GDPR framework applies.

But the UK GDPR’s stricter rules are likely to bite on AI, because of the more intrusive ways in which AI uses personal data. For example, the Information Commissioner’s Office (ICO) (the UK data protection regulator) takes the view that data protection impact assessments will almost always be required when AI uses personal data.

Another example is the restrictions the UK GDPR places on decision making based solely on automated processing. In practice, this means that organisations must be very careful when AI is used to assess loan applications, score recruitment aptitude tests or make medical assessments without a ‘human in the loop’.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The rules that apply depend on what the AI company or application does, the data it uses and the sectors in which it operates – there is no single overarching framework. As well as the legal and regulatory framework, AI companies should be aware of information security standards such as the ISO 27000 family, which are becoming increasingly important in the UK market.

If an AI application processes personal data, the UK GDPR will take centre stage. Here the key point is the ‘security principle’, which requires personal data to be processed securely by means of “appropriate technical and organisational measures”. The requirements are deliberately not prescriptive: in practice, the more sensitive the personal data involved, the higher the requirements will be. A good starting point here is the ICO and the National Cyber Security Centre’s security outcomes approach.

Certain operators will be subject to the Network and Information Systems (NIS) Regulations 2018. With some exceptions, the NIS Regulations apply to two categories of organisations:

  • Operators of essential services (OES): These are organisations that operate services deemed critical to the economy and wider society (eg, water, transport, energy, healthcare and digital infrastructure); and
  • Digital service providers (DSPs): These include search engines, online marketplaces and cloud computing services.

At a high level, the NIS Regulations impose obligations on OESs (stricter) and DSPs (less strict) to keep their networks and information secure.

5. Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

The challenges and concerns are that powerful AI tools can harm competition and consumers in ways which existing competition and consumer protection laws cannot protect against.

There is growing regulatory scrutiny on this point in the United Kingdom. The Competition and Markets Authority (CMA), the competition regulator, published a paper on algorithms in January 2021 setting out its key concerns, which fall into two categories. The first concerns the direct harms AI algorithms can cause to consumers, such as the following:

  • Algorithms can tailor pricing for different consumers in ways that are difficult for the consumer to detect or understand;
  • Algorithms can manipulate consumers’ ‘choice architecture’ in ways that can have negative consequences, particularly when used at scale;
  • Algorithmic decision making can be discriminatory; and
  • Algorithmic ranking, for instance in search, can be used unfairly to promote/demote certain goods or services for commercial advantage.

The second category concerns uses of algorithms that result in exclusionary practices and collusion between pricing algorithms. Examples include:

  • self-preferencing;
  • manipulating ranking algorithms to exclude competitors; and
  • changing an algorithmic system in a gateway service in ways that harm businesses that rely on it.

While the CMA’s report signals a growing interest in the competition and consumer protection aspects of AI in the United Kingdom, the appetite and ability of UK regulators to look ‘under the bonnet’ of complex AI technologies remain to be seen.

6. Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

Structural changes as AI automates jobs done by humans are likely to have far-reaching consequences over the longer term: estimates suggest that by the mid-2030s, 30% of UK jobs will be at a potentially high risk of automation.

With a narrower employment law focus, the key issues will be around workers’ rights, bias and discrimination. As HR departments adopt AI applications in, for example, recruitment, resource management and performance management, the effects will become more significant.

The UK Institute for the Future of Work’s October 2020 report on algorithmic accountability, An Accountability for Algorithms Act, provides an interesting outline of the issues and the potential regulatory responses here.

7. Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

UK regulators – and in particular the Information Commissioner’s Office (ICO), the UK data protection regulator – have been active in flagging the novel security risks that AI presents, which include risks associated with data manipulation and integrity.

The ICO’s Guidance on AI and Data Protection from mid-2020 gives several examples. These focus on the security of personal data in AI, but the lessons apply more generally. They include the following:

  • Model inversion attacks: Where an attacker with access to some of an AI model’s training dataset can infer other aspects of that training dataset by observing the inputs and outputs of the AI model;
  • Membership inference attacks: Where an attacker can infer whether an individual formed part of a training dataset. When presented with data about an individual that an AI model had ‘seen before’, the AI model would output disproportionately confident predictions about that individual (see the illustrative sketch after this list); and
  • Adversarial examples in a training dataset: Where, for example, an image in an AI dataset is deliberately misclassified, such as an image of a chisel being labelled as a toothbrush.
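
To make the membership inference example concrete, the minimal Python sketch below illustrates the classic ‘confidence thresholding’ attack described in the security research literature: a model that is markedly more confident about records it was trained on than about unseen records leaks information about who was in its training set. This is an illustration only, not part of the ICO guidance; the dataset, model and 0.95 threshold are assumed choices for demonstration.

```python
# Minimal sketch of a membership inference attack via confidence
# thresholding. Assumptions (not from the ICO guidance): scikit-learn
# is installed; the dataset, model and threshold are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_member, X_nonmember, y_member, _ = train_test_split(X, y, random_state=0)

# A model that overfits is disproportionately confident on its own
# training data - exactly the behaviour the ICO example describes.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_member, y_member)

def top_confidence(m, data):
    """Confidence in the predicted class for each record."""
    return m.predict_proba(data).max(axis=1)

# The attacker's heuristic: records scoring above the threshold are
# guessed to have been part of the training dataset.
threshold = 0.95
member_hits = np.mean(top_confidence(model, X_member) > threshold)
nonmember_hits = np.mean(top_confidence(model, X_nonmember) > threshold)
print(f"Flagged as members: {member_hits:.0%} of true members, "
      f"{nonmember_hits:.0%} of non-members")
```

A wide gap between the two rates is the leakage. Mitigations discussed in the security literature include stronger regularisation, differential privacy, and rounding or limiting the confidence scores a deployed model exposes.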

Existing data protection and information security rules avoid imposing prescriptive ‘one size fits all’ requirements, deliberately putting the onus on AI companies to determine an appropriate security response. Regulatory guidance – such as that from the ICO – and the work of international standards bodies such as the International Organization for Standardization will give AI companies helpful practical guidance in this area.

8. AI best practice

8.1 There is currently a surfeit of ‘best practice’ guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

The UK government, as custodian of the United Kingdom’s largest datasets, is also at the forefront of AI best practice in the United Kingdom. The Government Digital Service’s Data Ethics Framework (now in its third edition, updated in September 2020) is an excellent example of best practice for implementing AI in an organisation, albeit with a focus on public sector projects.

We regularly suggest to our clients that this framework, with some adaptation for commercial considerations, could be used by private sector organisations as a starting point for the policy and process elements of their own data ethics and governance initiatives.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

There is no ‘one size fits all’ approach here, and the needs and objectives of the organisation and its customers are always fundamentally important. But, taking the example of the UK government’s Data Ethics Framework referred to in question 8.1, we would recommend the following:

  • From the outset, define high-level principles (eg, transparency, accountability, fairness).
  • Be able to articulate the purpose of the AI and the customer need it satisfies.
  • Ensure that diverse, multi-disciplinary teams are involved in the AI project.
  • Understand and comply with the legal framework that applies to the AI.
  • Understand the quality and limitations of the data that the AI uses.
  • Set out a process for continuous evaluation over the life of the AI.
  • Define outcomes or indicative behaviours that can be used to test compliance with the above factors in an objective and measurable way.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

Communication. Time and again, we see good communication driving the effective development and roll-out of AI best practices in the organisation.

This boils down to:

  • buy-in from leadership on key principles and objectives;
  • engagement of relevant stakeholders in the development of frameworks and policies; and
  • consistency in application to embed processes and procedures across the organisation.

9. Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

A key risk for AI is that, in the excitement and optimism at the start of a deal, the parties do not invest enough effort in understanding and documenting what they are buying (on the customer side) or what the customer wants and is expecting (on the supplier side). This is always a risk with new technologies, and it comes up a lot for AI at the moment.

Contracts for AI services are becoming a classic example. In a contract for services, there is usually a generic obligation on the supplier to “perform the services with reasonable skill and care”, where ‘the services’ are defined by reference to a schedule at the back of the contract. But for a new, cutting-edge AI product, will it ever be clear when the ‘reasonable skill and care’ standard has been met or breached?

The way to avoid this uncertainty is clear drafting which sets out objective and measurable outcomes and clear processes for achieving them. This requires the parties to invest the time and attention upfront, before the contract is signed.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

The characteristics of AI create a novel risk profile, but the key liabilities are similar to those that arise with any new technology:

  • Reputational risk: This is exacerbated by volatile public opinion regarding the risks of AI, particularly when used at scale. An example is the UK school exam regulator’s use of a wayward grading algorithm in Summer 2020.
  • Risks associated with complexity: A complex AI product can involve many different parties and can integrate with multiple other IT systems. As with any complex project, this creates additional project and operational risk.
  • Personal data and information security risk: This is particularly relevant when large datasets containing personal data and other sensitive datasets are used.
  • Risk associated with regulatory change: This is particularly relevant on complex international projects. For instance, the Court of Justice of the European Union’s decision in Schrems II to invalidate the EU-US Privacy Shield took operators transferring personal data from the European Union to the United States by surprise in mid-2020.

Then there are the risks common to all types of commercial endeavour. In the simplest terms, these include the customer’s desire to have its project completed on time, within budget and to standard, versus the supplier’s desire not to over-commit in a way that damages the economic viability of the project.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

The risk here is that the buyer of the AI gets a system which is at best inaccurate and at worst unfair and discriminatory. In the United Kingdom, there have recently been a number of high-profile examples of biased algorithms leading to unfair outcomes; among other things, these have caused significant reputational damage to the parties involved.

At a general level, implementing effective best practices across the organisation will help to mitigate this risk (see question 8 for more on this).

At the contractual level, it is also worth considering the following:

  • Training data: Where has it come from? What assurances have been given about it?
  • Training the algorithm: Who is responsible for training? How many training runs will be performed? What are the testing requirements?
  • Errors and bug fixes: Which party is responsible for detecting and resolving erroneous outcomes?
  • Record keeping: Is data logging required? Will this help the parties to understand and explain why a decision was taken? Or is the algorithm a black box?

10. Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

As discussed in question 1, a useful demystifying heuristic for AI is: “It’s only AI when you don’t know what it does; then it’s just software and data.” This also helps to illustrate how AI innovations are protected in the United Kingdom – through IP rights, in software and in relation to data, among other things.

The key IP rights relevant for AI innovations are as follows:

  • Copyright: The software code in which an AI system is written is protected by copyright, enabling the creators of AI software to be paid for their work and to control how others can use it.
  • Database right: The United Kingdom also provides a right in databases, allowing for the protection of a database in which a substantial investment has been made in obtaining, verifying or presenting its contents.
  • Confidential information and trade secrets: AI algorithms and software are most likely to be confidential to, and trade secrets of, the developer; secrecy provides another, increasingly important string to the bow of IP protection.
  • Patents: These protect AI inventions.

As well as benefiting from their protection, AI will provide a significant impetus to the development of these IP rights, particularly as dynamic AI algorithms start to enable computers to generate new works (in terms of copyright) and to invent and discover novel ways of doing things (in terms of patent law).

10.2 How is innovation in the AI space incentivised in your jurisdiction?

AI is a key feature of the UK government’s industrial strategy. This naturally brings with it a range of pro-innovation policy initiatives, including funding, skills and training, and a reluctance to stifle AI companies by imposing strict rules too quickly.

In legal terms – and following on from question 10.1 – the UK government is looking for ways to adapt the UK IP regime to suit the novel aspects of AI. The goal is to encourage invention and innovation in UK AI by providing a supportive framework of IP rules.

A good example of this is the UK Intellectual Property Office’s AI and IP consultation in late 2020. The consultation sought views on potential reforms to the UK’s IP rules for the AI era, including the following:

  • Copyright: The consultation explored whether existing copyright rules need to be changed to make it easier for AI to use protected content, and asked whether content generated by AI should be eligible for copyright protection.
  • Trade secrets: The consultation recognised the potential beneficial impact of trade secret protection where formal IP protection is not available, even if trade secret protection does not confer exclusive rights on the holder. The consultation asked whether UK trade secret law gives adequate protection to AI where no other IP rights are available and whether trade secrets can cause problems for the ethical oversight of AI inventions.
  • Patents: Should patent law recognise AI as an inventor in a patent? Who is liable if an AI infringes a patent?

11. Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

The UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA) provide a useful lens through which to understand the relationship between AI and employment law. This is because when AI applications are used in the workplace, they typically involve the processing of personal data about employees.

There are many examples of this in AI applications in use in the workplace today – for instance, aptitude testing software used in recruitment or AI applications used to monitor employee engagement and productivity.

Specific implications for AI companies (and companies using AI) include the following:

  • The implications of processing ‘special category’ data: The UK GDPR contains a stricter regime for the processing of sensitive types of personal data, which include many of the types of data that employers commonly process about their employees. Examples include data about health and data revealing racial or ethnic origin and trade union membership.
  • Data protection impact assessments (DPIAs): Employers may be required to carry out a DPIA in respect of processing activities involving employee data. The Information Commissioner’s Office (ICO) takes a strict position here, stating that where AI involves the processing of personal data, a DPIA will be required “in the vast majority of cases”.
  • Automated decision making: Data subjects (including employees) have a right not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning (or which otherwise significantly affect) them. In practice, this amounts to a significant restriction on AI-driven decision making where there is no ‘human in the loop’.

The ICO’s Employment Practices Code – issued before the implementation of the UK GDPR and the DPA, but still useful background – will be relevant for AI companies considering the relationship between employment law and their products.

11.2 How can AI companies attract specialist talent from overseas where necessary?

Making non-UK talent aware of the United Kingdom’s tech-focused visa programme – the Tech Nation Visa – and then supporting candidates through the application process is a good place for AI companies to start when it comes to attracting the best talent from overseas.

The Tech Nation Visa – formally the Global Talent Visa for Digital Technology – is aimed at founders and employees with technical or business backgrounds in AI, as well as other tech sub-sectors, such as fintech, cyber and games.

It is valid for five years and allows visa holders to work, switch employers and be self-employed without needing further authorisation. Visa holders can extend the visa to immediate family members and, at the end of the initial five-year period, can apply for an extension or for permanent settlement in the United Kingdom.

The visa is pitched at two levels of candidate:

  • those with ‘exceptional talent’ – senior hires with a proven track record and recognition as a leading talent in the digital technology sector; and
  • those with ‘exceptional promise’ – junior candidates recognised as having the potential to be leading talents in future.

The application process has thorough documentary and evidentiary requirements which are likely to be daunting to potential non-UK hires, so the ability to provide HR and legal support would be helpful.

Equally, AI companies can seek to take advantage of the ‘fast-track’ options. These reduce application processing time to around three weeks and are available if applicants meet certain criteria, including:

  • an intention to work outside London;
  • membership of a C-suite; and
  • acceptance on to a recognised UK tech accelerator programme.

12. Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

AI has received significant attention in the United Kingdom in recent years – it is:

  • a core component of UK government industrial strategy;
  • the recipient of significant public and private investment; and
  • a focal point of legal and regulatory development at a number of levels.

Yet – rightly or wrongly – the United Kingdom is often seen as lacking a clear long-term ‘vision’ for AI. There are two main reasons for this. First, in preferring incremental regulatory change at the sector level to ambitious regulatory frameworks, the United Kingdom’s approach has garnered less attention than that of the European Union, its key comparator. Second, since 2016, Brexit has been the dominant issue, though over time its significance will fade.

We characterise the UK AI landscape as pro-investment and innovation in enterprise terms, and measured and incremental in legal/regulatory terms. We do not expect to see any fundamental changes to this approach over the next 12 months.

13. Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

Our top three points to bear in mind are as follows:

  • UK General Data Protection Regulation (GDPR): This is particularly important if your AI application involves personal data. In our experience, tech companies entering the UK (or EU) market for the first time are struck by the depth and reach of the UK GDPR. Its principles-led approach can have significant implications for the way that companies run their operations and can affect multiple parts of the business. Data transfers are often a source of consternation, particularly when a non-UK or EU company is accustomed to hosting company data in a ‘third country’.
  • Regulation at the sector level: There is little indication that the UK government favours overarching AI regulation at the moment. This is an early example of possible longer-term regulatory divergence with the European Union. However, sector-level regulators in key areas (eg, transport, healthcare and financial services – see question 3) are active. New entrants to the United Kingdom would be well advised to understand the current approach to regulation in their sector, as well as the direction of travel.
  • Direction of travel over the longer term: The UK government’s approach to AI is characterised by pragmatism and openness to innovation at the moment. But important questions have not yet been answered, key among them the approach that the United Kingdom will take to AI over the longer term. Early indications suggest distance from the European Union’s recent pro-regulatory enthusiasm and greater alignment with a laissez-faire US approach. The implications of this as yet unanswered question will be profound for UK AI.

Contributors

Topic: Artificial Intelligence
Jurisdiction: UK
Author: Chris Kemp, Kemp IT Law (contact for more information about these answers)