The UK Jurisdiction Taskforce (UKJT) has published a draft legal statement on liability for AI harms under the private law of England and Wales. The statement seeks to address in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI (defined in the paper as "technology that is autonomous").
This statement joins a raft of guidance on AI, ranging from the Law Commission's discussion paper on AI and the law to more practical guidance such as the International Legal Technology Association's guide to Generative AI in legal disclosure and the Law Society's recently updated guide to the essentials of AI.
The role of UKJT statements is to explain how the common law is likely to deal with private law problems thrown up by new technology, in order to help increase legal certainty in a rapidly changing technological landscape. The UKJT's role is not to propose policy-based law reform.
The core focus of the statement is on the application of the law of negligence to physical and economic harms caused by AI. In this context, the statement considers the test for liability in negligence and its application to AI, including matters of factual and legal causation. In addition, the statement addresses the following questions:
- Does the principle of vicarious liability apply to loss caused by AI?
- In what circumstances can a professional be liable for using (or failing to use) AI in the provision of their services?
- Can a person be liable for harms caused by use of AI where there is no fault on their part?
- Does liability attach to false statements made by an AI chatbot?
A more detailed summary of the position taken by the UKJT in the draft statement follows below.
The UKJT is seeking input in particular on: whether the statement sufficiently addresses key issues of uncertainty; whether the areas addressed are appropriate and useful; and whether the definition of AI used is appropriate.
The consultation is open until 13 February 2026. Comments can be submitted to ukjt@justice.gov.uk.
Starting principles
The UKJT's analysis starts from the premise that AI does not have legal personality under English law and therefore cannot itself be held legally responsible for physical or economic harm. Liability for harms that arise from the use of AI must therefore be attributed to legal persons, using ordinary legal principles.
The UKJT also acknowledges that in many commercial situations the most important framework governing liability for loss caused by the use of AI will be the chain of contracts to which the various actors involved in the supply and use of AI have agreed. It takes the view that cases that require a different, negligence-based, analysis may therefore be rare in practice.
Defining AI
The UKJT has adopted a definition of AI as "a technology that is autonomous".
For these purposes, "autonomous" is defined as meaning an entity that can generate outputs which have not been determined or programmed in advance. The UKJT further elaborates this characteristic as: (i) an unpredictable relationship between input and output; (ii) an opacity in the reasoning process between input and output; and (iii) a limited ability on the part of a human user to control output.
This definition is intended to be technology-agnostic and to capture characteristics of AI that are unique and therefore responsible for much of the perceived legal uncertainty around AI.
The UKJT notes that the law has never needed to address the capability of autonomy other than in humans. However, in many scenarios where AI is deployed, there is no difficulty in predicting the outcome under current laws. In addition, although there is a lack of precedent for liability arising from AI, the common law is flexible and may therefore be capable of answering the new questions raised.
Does the principle of vicarious liability apply to loss caused by AI?
Vicarious liability involves a person being liable for the torts of another person. The UKJT's position is that, as AI does not have legal personality, the principle of vicarious liability does not arise in the context of actions or failures of AI. However, where harm is caused by the negligent use of AI by an employee in the course of their employment, the employer could be held liable for the acts or omissions of that employee. (Note that the possibility of granting legal personality to some AI systems was posited in the Law Commission's discussion paper referred to above, but this is not specifically referred to in the UKJT's statement.)
Liability for physical vs economic harm
The UKJT notes that AI is capable of causing both physical harm, for example when embodied in an autonomous vehicle, and economic harm, such as by incorrectly predicting the price of a commodity and leading a company to make a loss-making trade. Where there is no contractual liability, negligence may provide an alternative route.
In the case of economic harm or loss, a special relationship between the parties involved is required, with one party having assumed a responsibility to the other. Examples given by the UKJT of important categories of case where the use of AI may give rise to economic harm include where a professional uses AI or where an AI chatbot or equivalent produces false statements.
Negligence liability for physical harm
The test for liability in negligence requires: the presence of a duty of care; a relevant standard of care; a failure to meet that standard; that the negligence caused the loss alleged; and that the loss was sufficiently foreseeable such that the law permits recovery. In many situations, the existence of a duty of care is well established. In a novel situation, the courts look to precedent and analogies with existing law.
The UKJT takes the view that, when it comes to establishing a duty of care, in many cases AI will simply be considered a tool in the hands of those who exercise relevant control over it and can be said to have been responsible for its actions. It will not therefore change the underlying position as to whether or not a duty of care exists. How far up and down the AI supply chain a duty of care may extend will be highly fact-sensitive. Depending on the circumstances, any of a range of actors may be found to owe a duty of care.
The question of whether the standard of care has been met is also highly fact-specific. Expert evidence and industry guidance, as these develop, will be key. The enquiry will need to examine not only the AI that was deployed, but also whether reasonable steps were taken to select a model and to decide whether it was reasonable to use AI at all. Particular challenges include:
- Where software's functioning has developed through autonomous processes, which may be opaque even to its own developers, it may be difficult to assess whether there has been reasonable care in the programming.
- AI can find unexpected ways to achieve a specified objective. Where this causes harm, there may be questions as to whether the manner of failure would have been difficult for a reasonable programmer to predict.
The UKJT concludes that whether and when a person involved in the development, supply or deployment of AI might be liable in negligence for physical harms is highly fact-sensitive. However, in many cases there is no difficulty in applying existing legal principles in an AI context, extending precedent by analogy to the extent required.
Professional liability for using or failing to use AI in the provision of services
Professional liability arises where a professional fails to perform the obligations they owe to their client or a third party with reasonable skill and care. The UKJT takes the view that this principle applies to the use of AI by a professional to the same extent as anything else a professional does or fails to do. If a professional acts negligently in relation to AI use, the professional can expect to be held liable.
In most cases, the UKJT notes that the scope and nature of the duty owed will be contained in a contract. This will usually include an express or implied term that the professional should carry out the services owed with reasonable skill and care. (A duty to exercise reasonable care and skill is generally also owed at common law, but not if such duty is inconsistent with the terms of the contract.) A professional will be found to have acted with reasonable care and skill if they acted in a way that a reasonable body of the profession would also have acted. What constitutes reasonable skill and care changes over time. Therefore this standard will reflect developments in AI and AI use in a particular profession. As noted above, expert evidence and professional regulations and guidance are likely to assist in establishing the standard required.
Although the question of what reasonable care and skill entails will be specific to the profession, task and situation, the UKJT makes the following general observations:
- Failure to conduct proper due diligence on an AI system before using it for client work is likely to constitute a breach of duty, particularly if the system is new, innovative or from an untested provider.
- A professional choosing to use AI should have a sufficient understanding of it and be able to explain to their client (at least in broad terms) how it will work.
- It may well be important to be transparent with clients about the use of AI (but transparency will not of itself absolve a professional who falls short from liability).
- Putting confidential or privileged information into an AI system which is not suitably secure is likely to be a breach of duty.
- A professional should ensure that there has been sufficient testing of the AI system to check it is suitable and appropriate for the task envisaged.
- Any professional who fails to exercise oversight of the AI system's outputs is likely to be found negligent. What such oversight should entail will depend on the context.
The UKJT notes that it is easy to say that a professional must undertake due diligence and/or monitor outputs of the AI system in order to avoid a finding of professional negligence, but it may be more difficult to do so in practice, in part because of the autonomous nature of AI and the resulting difficulty in predicting its outputs or explaining them after the event. The UKJT suggests that this difficulty would be anticipated and may lead the courts to conclude in some cases that there is a limit to what professionals can reasonably be expected to do in terms of due diligence and monitoring, and in other cases that they should not have used AI at all. It will depend on the context.
The UKJT notes that a professional can also be liable for a failure to use AI. This will be judged according to whether a reasonable professional of the same rank/specialism should have used AI in that context, and again this will turn on expert evidence and professional regulations and guidance. AI is a tool. The question of whether it should be used, and if so how, is no different for AI than for any other tool available to a professional.
Liability for harm in the absence of fault
In most cases, the UKJT notes that liability in the absence of fault will turn on the contracts involved. These may include terms such as a warranty that an AI system will meet or exceed a certain accuracy threshold. If the system fails to meet that threshold, then in the UKJT's view there is a breach of contract on the part of the developer regardless of whether they exercised care and skill. Otherwise, the general position in English law is that, absent negligence, the risk of loss for non-deliberate harm lies where it falls, and there is no liability where there is no negligence.
The exception is where the AI system is incorporated into a tangible product, in which case the Consumer Protection Act 1987 applies. The Act imposes strict liability for harms where a product is shown to be defective. However, its scope is narrow and the UKJT takes the position that it is unlikely to apply in its current form to a standalone AI system (ie one which is not incorporated into a tangible product for the purposes of the Act). In addition, the Act only gives rise to liability for death, personal injury or loss or damage to property, and in practice claims for property damage can only be brought by consumers. The relevance of the Act may therefore be limited in the commercial context. The Law Commission has announced that it intends to review the law relating to product liability set out in the Act and anticipates that this review will address the status of "pure software". A public consultation on proposals for reform is currently planned for the second half of 2026.
Factual causation
For a party to be liable, their breach must have caused the loss in question. The autonomous nature of AI may make it difficult to understand why a particular outcome occurred. The UKJT takes the view that the general rules of causation can in principle be applied to AI harms, and the evidential difficulties to which AI may give rise are not particularly exceptional and can be accommodated by English law. Indeed, in many cases involving harms caused by the use of AI, neither its autonomous nature nor any other characteristic of the technology will have any causal relevance.
Where it is necessary to understand why AI produced the output it did, the UKJT identifies two kinds of causal uncertainty which may arise. The first is where a gap in the evidence arises because material has been destroyed, tampered with or not gathered (evidential uncertainty). The second is where the uncertainty is owing to the opacity of AI itself (scientific uncertainty):
- Evidential uncertainty: The precise reasons why AI produced a particular outcome may not be recoverable after the decision is made. If it is alleged that incorrect inputs were provided to the AI, the training was inadequate or guardrails were incorrectly set, lack of evidence may make it difficult to know how the AI would have behaved with correct inputs and adequate training. Where there are difficulties with evidence, the UKJT posits that the law may recognise this and approach questions of factual causation differently (for example by making evidential assumptions). Expert evidence may also be able to fill the gaps.
- Scientific uncertainty: Where there is a systematic impossibility of proving causation, English law has shown itself to be capable of evolving or developing limited exceptions. For example, in cases involving asbestos where it is scientifically impossible to show on which of several occasions a claimant might have been exposed to the asbestos that caused them harm, the courts have applied a principle of "material increase in risk". It may be possible to extend such principles beyond personal injury. Regardless, these cases show that the law is able to develop to ensure that systemic problems of causation do not cause systemic injustice.
Legal causation
Legal causation asks whether someone or something has intervened between a person's wrongful conduct and the loss at issue so as to break the chain of causation. The UKJT considers two scenarios which might give rise to questions of legal causation:
- Foreseeable misuse of an AI system: Where an AI system has been used deliberately by a bad actor to cause harm, that actor may be unidentifiable or not amenable to justice. The question that then arises is whether parties in the AI supply chain have a legal duty to prevent such misuse. According to the UKJT, in most circumstances it is very unlikely that a party upstream in the AI supply chain will be held responsible for a third party's deliberate misuse of an AI system. The possibility of liability might remain in some narrow circumstances, including where the upstream party created the source of danger or otherwise assumed responsibility for it; merely knowing that a general purpose AI system could be misused does not make the supplier liable for the consequences of that misuse. The UKJT considers that cases where a party in the AI supply chain might be held responsible for creating a source of danger, by bringing into existence an AI model without suitable safeguards which is known to be capable of certain types of harm if misused, are therefore likely to be rare.
- Harm arising from autonomous behaviour of foundation models: In some extreme cases, an intervening natural event (sometimes known as an act of God) may operate to eclipse the defendant's wrongdoing as the effective cause of the damage suffered. The process of an AI system learning new principles of action or behaviours could be deemed to constitute an intervening act which breaks the chain of causation between developers of AI systems and the eventual output of the system. However, the UKJT considers that the courts would be slow to reach such a conclusion, bearing in mind policy considerations which tend in favour of finding a legal person liable where harm has been caused. Liability for autonomous acts may depend on factors such as the capabilities and limitations of the AI system or model, the level of autonomy, and the degree of supervision.
False statements made by an AI chatbot
AI tools can make statements in a number of ways. The UKJT notes that typically such statements are produced by large language models (LLMs) which users access through chatbot interfaces.
On the basis that AI systems that use LLMs have no legal personality, claims arising out of their statements must be directed elsewhere. There is no general duty on legal entities not to make careless statements, but liability may arise under certain circumstances. The UKJT statement focusses on tortious routes.
The UKJT notes that, in general, liability for negligent misrepresentation requires a statement to be made "for or on behalf of" a legal person. In the case of a chatbot, the developer does not make a statement to users in a conventional sense. There is no English authority for the proposition that an AI tool may make a statement on behalf of a legal person for the purposes of a claim in negligent misstatement. However, the UKJT notes that a Canadian court has held an airline liable for false statements made by a chatbot which formed part of its website. Whether the same conclusion might be reached by an English court requires more analysis.
The UKJT considers it more likely that the court will treat a chatbot's statements as being made on behalf of a person where the AI tool's outputs are predetermined or a chatbot is presented as the authorised representative of a legal person. In the latter case, this is because there is in effect an implied representation that the AI tool's outputs can be treated as being authorised by that legal person. Where an AI tool has autonomy, this analysis may still apply, but a better analogy may perhaps be with cases where a party passes on information supplied by another. In some circumstances, it has also been held that having actual or constructive notice of the false representation is sufficient to establish liability, usually as a judicial response to a particular injustice.
It will also be necessary to establish a duty of care. This will depend on whether, objectively, there has been an assumption of responsibility to an identifiable person or group. The UKJT posits that generally a developer will need to know that the advice or information given is likely to be acted upon for a known purpose. On this basis, a developer of a general use LLM who is unaware of a specific use case will be much less likely to be exposed to liability than a developer of a bespoke product. The reasonableness of a user's reliance will also be fact-specific. Relevant factors may include: changing societal expectations of LLMs; how the tool was marketed and presented; the use case (eg for consumers or professionals); and whether usage of the tool is disclosed at all.
The UKJT notes that courts will also consider how prompts were used, as this may evidence the understanding of the prompter as well as influencing the LLM's response. The standard of care to be met will involve further fact-sensitive questions as to how the tool and its outputs were presented. As to reliance, a developer may be liable for statements generated by its chatbot for a group of people such as visitors to a website, without needing to know about a particular user or what exactly was communicated to them.
Other causes of action that may be relevant in context of AI-generated statements include the torts of deceit and defamation.