With advancements in technology, some hope that Artificial Intelligence (AI) will be able to do just as good a job as a human doctor. Can it? What happens when things go wrong? Who is liable? Can a patient bring a claim? Is the law as it stands ready for such claims? In this article we'll look at the answers to those questions.

What is Artificial Intelligence (AI)?

AI describes machines developing the ability to mimic things normally done only by the human brain. That can include learning, analysing data and problem-solving. It could include clinical decision-making. We are already seeing the rapid advance of AI, driven by more powerful computers able to process vast amounts of data. AI grabbed the headlines when in 2015 a machine learning programme, AlphaGo, defeated a professional Go player. Can it help improve our healthcare?

Artificial Intelligence (AI) and Healthcare

AI will probably play an increasingly important role in healthcare. Many expect it eventually to be able to screen for disease, make accurate diagnoses and recommend the treatment, whilst avoiding some of the mistakes fallible humans make. This could improve healthcare, cut costs and free resources for use elsewhere in the health service. By reducing errors, it could improve outcomes for patients and avoid the enormous costs caused by medical mistakes. So the government is keen to see AI in healthcare. In 2019 it announced £250m of funding for a new NHS AI lab.

There are a number of areas where we are likely to see the use of AI in the near future. One is interpreting radiology images. AI uses 'deep learning', in which machines scan labelled images and, guided by algorithms, 'learn' to recognise features. They use this learning to recognise similar features in other images. There are early promising signs that this could help diagnose cancer and other conditions.
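To illustrate in the most general terms how that 'learning' works, the sketch below trains a tiny image classifier on labelled examples. It is purely illustrative: it uses the PyTorch library, and the model, data and 'normal/abnormal' labels are invented stand-ins rather than a real diagnostic system.

```python
# Illustrative sketch only: a tiny classifier trained on labelled images,
# loosely mirroring how 'deep learning' systems learn to recognise features.
# The data here is random noise standing in for labelled scans.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learns local image features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(8 * 32 * 32, 2)  # two classes: normal / abnormal

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyScanClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a batch of labelled 64x64 greyscale images and their labels.
images = torch.randn(16, 1, 64, 64)
labels = torch.randint(0, 2, (16,))

for _ in range(5):  # a few training steps: adjust weights to better fit the labels
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

# The trained model can then be applied to new, unseen images.
prediction = model(torch.randn(1, 1, 64, 64)).argmax(dim=1)
```

In a real radiology system the model would be far larger and trained on vast numbers of expertly labelled scans, but the underlying loop – compare predictions with labels, adjust, repeat – is the same.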

In time machines may outperform humans. However, they have not reached that stage yet. In a 2019 study, deep learning systems and skilled specialists performed similarly in certain tasks. AI detected disease 87% of the time whereas the figure for humans was 86%. AI correctly ruled out disease in 93% of cases, compared with 91% in humans. At first sight the comparison looks favourable but the results are misleading. In this study the doctors were not given the additional information they would have had in real life.
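For readers unfamiliar with the terminology, the two headline figures correspond to what statisticians call sensitivity (how often disease is detected when it is present) and specificity (how often disease is correctly ruled out). The short sketch below shows how they are calculated; the patient counts are invented for the example and are not taken from the study.

```python
# Illustrative only: how figures such as 'detected disease 87% of the time'
# and 'correctly ruled out disease in 93% of cases' are calculated.
true_positives = 87    # diseased patients the system correctly flagged
false_negatives = 13   # diseased patients the system missed
true_negatives = 93    # healthy patients correctly given the all-clear
false_positives = 7    # healthy patients wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)  # 0.87
specificity = true_negatives / (true_negatives + false_positives)  # 0.93

print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")
```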

However, although there has been a lot of research looking at the effectiveness of AI, there is as yet little good quality data. In the words of one expert, the field is 'awash' with poor research. A 2019 paper published in Lancet Digital Health considered 20,000 studies and concluded that only 14 reported good data.

Successful AI requires good data, the right analytics and therefore reliable algorithms. With all that, there is great potential to improve healthcare. But there are risks. If, say, the algorithms were wrong, the use of AI could multiply errors.
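The scale of that risk is easy to illustrate. The sketch below uses an invented risk score and a deliberately miscalibrated threshold to show how one bad parameter, applied system-wide, can produce a large number of missed referrals, whereas a single human error typically affects one patient.

```python
# Illustrative only: why a single flawed algorithm can multiply errors in a way
# an individual human mistake does not. The scores and thresholds are invented.
import random

random.seed(0)
patients = [random.gauss(50, 15) for _ in range(10_000)]  # a risk score per patient

CORRECT_THRESHOLD = 60   # hypothetical: scores above this warrant referral
FLAWED_THRESHOLD = 75    # hypothetical: a miscalibrated cut-off deployed everywhere

missed = sum(1 for score in patients if CORRECT_THRESHOLD < score <= FLAWED_THRESHOLD)
print(f"One bad parameter, applied to every patient: {missed} missed referrals")
```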

Artificial Intelligence (AI), Healthcare and the Law

What happens if things go wrong and patients suffer harm? The use of AI in healthcare raises a number of difficult questions for the law. The rest of this article will consider those questions. It is too early, however, to predict what the answers will be.

As it stands, clinical negligence law is ill-suited to a world where diagnoses and treatment decisions are made by AI. We can expect to see an evolution of medical practice over the next 5 to 10 years, followed with a lag by an evolution in the law as it tackles some of the issues case by case.

Who has a duty to the patient? Who is to blame if things go wrong?

A doctor treating a patient has a legal duty to provide reasonable care. Where care falls below that standard and causes harm, the patient can bring a claim for damages. So if a doctor misuses equipment – for instance negligently misreads a scan – the doctor is to blame if the error causes harm. When care is provided by an NHS hospital, the Trust running the hospital is also held responsible. The law regards the Trust as 'vicariously' liable for the acts of its staff. Vicarious liability is a legal concept created to ensure that people injured by negligence can recover damages from an organisation or person overseeing someone else's actions, and one which has the means to pay.

Another legal concept designed to help victims of negligence is the non-delegable duty. The law here is complex and has been developing recently. It recognises the fragmented nature of our health service where hospitals often delegate tasks – investigations, scans and even surgery – to private providers. However, whilst they can delegate certain tasks, they remain responsible for the care patients receive.

Should hospitals be vicariously liable when AI mistakes cause harm and should they have to consider whether their duties to patients can be delegated or not? Or do we need to find a different legal framework?

Consider the use of AI to interpret an x-ray. It plays a role more akin to the doctor than to a piece of equipment. Would it make sense to make a doctor or the hospital liable for failures by AI? Should it instead be the software manufacturer, the hardware manufacturer, the person inputting data or the algorithm designer?

There is also what is known as the 'black box' problem. In computing, a black box is a program or system whose output is clear but whose processes are opaque. The program is so complex that people using the system are unlikely to know how it has reached its conclusions (just as I am writing this on my laptop but have no idea how it works). If doctors do not know how AI determined that an abnormality on an x-ray is a malignant tumour, should they be held accountable if it is wrong?
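A minimal sketch of the problem, again purely illustrative (the model here is a tiny, untrained stand-in built with PyTorch): the output is a single probability, and nothing in the model's parameters explains how it was reached.

```python
# Illustrative only: the 'black box' problem in miniature. A trained model
# returns a score or label, but the learned parameters that produced it
# (thousands here, millions in real systems) offer no human-readable reasoning.
import torch
import torch.nn as nn

model = nn.Sequential(  # stands in for a large, trained diagnostic model
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

scan_features = torch.randn(1, 100)           # stands in for a processed x-ray
probability_malignant = model(scan_features)  # e.g. tensor([[0.62]])

# The output is a single number. Inspecting the parameters tells a clinician
# nothing about which features of the image drove the conclusion.
print(probability_malignant.item())
print(sum(p.numel() for p in model.parameters()), "learned parameters")
```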

So the use of AI, in effect, to make decisions raises questions for the law as to who should have a duty to the patient.

What standard of care is required?

Another issue is what standard of care the law should require when AI is used. What should the test be for deciding whether harmful outcomes give rise to damages?

The law currently assumes that decisions are made by doctors and not computers. They are responsible for diagnosis and treatment. Things go wrong because human beings are fallible. They act on limited information, their thinking is coloured by cognitive bias, they have to make judgement calls and they will inevitably make errors from time to time. The courts understand all this. They make a distinction between decisions which, although wrong, are reasonable and those which no reasonable doctor should make. The former are excusable. The latter are adjudged 'negligent'. The legal test making this distinction is known as the Bolam test. Decisions based on AI do not fit into this framework.

The Bolam test also assumes that there is more than one reasonable answer to medical questions. Therefore, practice may differ between responsible doctors holding different views. Where that happens, the courts do not adjudicate between different approaches provided they withstand logical analysis.

But if AI has the potential to make better decisions than human doctors, based on access to a large amount of data and freed from cognitive bias, should the law not exact a higher standard? The Bolam test will make no sense in this context. If AI is both available and proves to be more reliable than other means of making decisions, it will be hard to justify not using it.

What happens if doctors disagree with the conclusions of AI? At present, doctors may be assisted by guidelines from NICE and professional bodies but are still expected to exercise their professional judgment. There may be circumstances where it is appropriate not to follow the guidelines. Will the same be true in relation to AI? We cannot assume it will necessarily always be right: AI may embed bias in its data or its algorithms, and a doctor may think a treatment proposed by AI is too risky or unduly burdensome for the patient. Will doctors be permitted – or even expected – to exercise their judgment in the face of a contrary conclusion by AI?

Will there be a duty to use AI?

If AI becomes the gold standard, will there be a duty to use it?

The answer may depend on a number of factors.

First, there is the reliability of AI. As the recent research suggests, we have not yet reached the point where AI outperforms humans, but we may not be far away.

Secondly, there is the issue of the extent to which AI becomes standard for certain practices in our hospitals. There will come a time when the results are such that no responsible doctor would fail to use it. In other words, not using AI will fail the Bolam test. At some point regulators such as the GMC will play a role here and professional bodies are likely to give guidance on the use of AI.

Thirdly, there is an issue of cost. AI will not be cheap – although it has the potential to produce substantial savings. (It may for instance be able to identify people at risk from particular cancers, enable effective screening, identify conditions earlier and lead to better treatment decisions. All this could cut costs to our health service, keep people at work and reduce the costs of social care.) The courts already accept that the quality of care hospitals can provide is constrained by cost. They do not criticise care where the resources are simply not there (although judicial review provides a means to scrutinise the lawfulness of decisions by public bodies, for instance how they allocate their resources).

What will claims look like?

Claims arising from mistakes by AI are likely to look very different from current clinical negligence claims. The errors we currently investigate tend to concern human medical judgements or medical accidents. Mistakes in the use of AI are more likely to involve the use and quality of data, software programmes, algorithms and the configuration of hardware. Investigating these issues will require very different skills from lawyers, and evidence from technology experts rather than medical experts. We will probably need to enlist the support of our technology law colleagues and find ourselves appearing in the Technology and Construction Court.

What about the data?

Machine learning requires the use of vast amounts of data. In the medical field that data is highly sensitive. The NHS Constitution for England pledges 'to anonymise the information collected during the course of your treatment and use it to support research and improve care for others'. However, maintaining data privacy and confidentiality will be challenging, particularly given the sheer scale of the datasets machine learning requires.

The choice of data collected is not neutral: it may reflect human bias. There are gaps in knowledge as a result of data being collected in some areas and not others. As Caroline Criado Perez points out1, our data is heavily skewed towards the male body, which means that the use of AI could make diagnosis for women worse rather than better.
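A simple, purely illustrative check for the kind of skew described above, using an invented dataset with hypothetical column names:

```python
# Illustrative only: inspecting a training dataset for demographic skew.
# The records and proportions below are invented.
import pandas as pd

records = pd.DataFrame({
    "sex": ["male"] * 700 + ["female"] * 300,     # deliberately skewed
    "diagnosis": ["positive", "negative"] * 500,
})

# A model trained on these records sees far more male examples, so the
# patterns it learns may generalise poorly to female patients.
print(records["sex"].value_counts(normalize=True))
# male      0.7
# female    0.3
```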

Finally, the way data is used – the algorithms managing it – may prove to be unreliable. The fact that algorithms are often protected by proprietary rights restricts our ability to examine the way they operate, to understand their shortcomings and to identify gaps in our data.

So the nature of the data used and how it is used raise a number of issues.

Conclusions

We will inevitably see advances in the use of AI in coming years and AI may well change the shape of medical care and decision-making. At the same time, it throws up questions in the legal field. Our current legal framework assumes decision-making by human doctors and allows for the fact that to err is human. The Bolam test assumes that doctors exercising their judgement will, with good reason, not all necessarily agree on the best way forward. It also assumes that doctors, and not technology, are the decision-makers. That framework will need reshaping for a world in which decisions are shaped by AI, and by the data, the programmers and the algorithm and software designers behind it.

Footnote

1 Caroline Criado Perez, Invisible Women: Exposing Data Bias in a World Designed for Men (2019)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.