ARTICLE
16 July 2025

What Do Customers Need In Contracts For AI Products?

Gowling WLG

Contributor

Gowling WLG is an international law firm built on the belief that the best way to serve clients is to be in tune with their world, aligned with their opportunity and ambitious for their success. Our 1,400+ legal professionals and support teams apply in-depth sector expertise to understand and support our clients’ businesses.

For businesses buying an artificial intelligence (AI) tool or product (together referred to as 'the product' in this article), it's crucial that the contract for the supply of the product specifically addresses the fact that it is an AI product.

In this article, we explain the key contractual considerations that customers should take into account when engaging with a supplier for an AI product. We consider: intellectual property; third party claims; confidentiality; privacy; use of customer data; security; standards; warranties and representations; compliance with the law; governance and audit; liability; customer-specific training; and circuit breakers.

Before you jump in

Before jumping into the clauses, work out what other factors are at play:

  • Use case: The importance of different clauses in a contract will change depending on the context, use case and nature of the AI. In some scenarios, some clauses may not be relevant at all while others may be relevant to a greater or lesser extent.
  • SaaS: Many AI products are provided as software-as-a-service (SaaS). This means that many of the contractual issues will operate in that technology context, where the supplier is providing pre-existing software in a one-to-many service.
  • Supply chain: The AI supply chain could be complicated. The supplier's application is likely to have been built on top of one or more third-party foundation models.
  • Ethics: Risks relating to AI use go far beyond legal risks into ethical areas. These ethical considerations are widely recognised as responsible AI principles in the AI industry. They cover transparency and explainability; accountability and governance; robustness; fairness; safety and security; and contestability and redress.

Many of these non-legal risks are best dealt with, in whole or in part, at the pre-contract stage, or through active management during the life of the contract and through controls that customers put in place to work alongside the contract, but separately from it, as customer-only actions.

Intellectual property (IP)

The customer will need to identify each aspect of the AI product to which IP is relevant. The contract will then need to specify ownership of the IP rights, which is likely to mean that:

  • The software's source code and the product's pre-existing model will be the supplier's IP.
  • Models of the product that are created from customer-specific training, using the customer's top-up training data, could be either the customer's or the supplier's IP. This depends on the negotiating position of the parties and on whether the customer's model can be used independently of the underlying model, which we discuss in the 'Customer-specific training' section later in this article.
  • The supplier will own the source data it used to train its model, to the extent that it had, or obtained, rights to use that data.
  • The customer's IP will include its top-up training data, to the extent that there are any IP rights in that data.
  • The customer's IP will also include the inputs or prompts it enters, as well as the outputs, to the extent that any IP rights exist in them.

Third party claims

There are many widely publicised issues with the data used by large AI providers being scraped from the internet or otherwise used without the necessary licences or permissions. This, plus the ability of generative AI (GenAI) tools to create results that could infringe the IP of third parties, means it's important for customers to consider obtaining indemnification from the supplier against third party claims that the possession, use and distribution of outputs infringe that third party's IP rights.

However, it's common for suppliers to seek to limit the protection that they offer to customers, which we'll discuss in the liability section of this article.

Confidentiality

Contracts should specifically deal with the confidentiality and privacy of each of the following categories of information.

  1. Inputs and outputs

    Customers should ensure that their inputs or prompts are treated as their confidential information. This will prevent the supplier from adding the inputs to its training data, which could otherwise indirectly benefit future users of the product from the knowledge contained in the customer's inputs. It will also prevent the product from providing outputs that directly use the customer's inputs, which could lead to competition and IP issues.

    Outputs should also be treated as confidential information for similar reasons to inputs.

  2. Top-up training data

    Top-up training data provided by the customer and customer-trained models should also be treated as the customer's confidential information for similar reasons again.

    The customer must consider whether it is acceptable for the supplier to monitor the customer's use of the product, as distinct from inputs and outputs, in the name of product improvement.

Privacy

Much privacy compliance should take place outside of the contract. This includes the carrying out of data protection impact assessments (DPIAs) by both customer and supplier, the implementation and documentation by the supplier of privacy by design and by default, as well as the provision of privacy notices by the controller – whichever party or parties that is.

The following issues should be handled in the contract:

  • Relevant data sharing or data processing clauses (having analysed which party is controller/processor).
  • International transfer restrictions or requirements.
  • The supplier's obligations to update and provide copies of living compliance documents, such as DPIAs and evidence of balancing trade-offs.

Use of customer data

Everything that is generated by the product, as well as all of the data that is provided by the customer to the supplier, will need to be covered by the definition of 'customer data'.

In addition to the IP rights and confidentiality clauses, customers will want an express prohibition against the supplier using any customer data to train either its own model or third-party models where the supplier operates an application on top of a number of large language models.

For enterprise customers, many suppliers offer technical implementations that provide more operational comfort about the supplier's access to, and use of, customer data, such as deploying a local version of the model within the customer's IT estate. This means that the customer data never leaves the customer's environment.

Security

As with any SaaS product, the security standards are likely to be those of the supplier. The customer will therefore need to carry out its own due diligence to ensure those standards are sufficient.

To have confidence that the supplier's practices are sufficiently robust and thorough, customers may require mandatory breach reporting and reporting on the testing done by the supplier, as seen in some SaaS contracts.

If an attack by AI-specific malware, which has already started to be seen, alters the logic or workings of the AI, or introduces harmful source data, customers will want to be aware of the incident and of the supplier's response to manage any resulting harmful effects.

Standards

For many aspects of AI use and risk management, standards are likely to be helpful and even necessary in the future.

As these standards develop and become more widely adopted, they will be useful for both customers and suppliers to refer to in contracts as commitments to the measures, protocols and frameworks adhered to by each party, and parties will want to include specific references to the standards used.

Widespread adoption of standards could ultimately lead to so-called trust marks or kite marks and will inform what good industry practice looks like. Many standards bodies such as ISO, IEEE and new bodies like the AI Standards Hub are actively working on producing standards for different aspects of AI operation such as bias, risk management, AI controllability, functional safety and testing.

Warranties and representations

Customers need assurance that the product will not perform in a way that contravenes the responsible AI principles set out in the 'Before you jump in' section above.

However, suppliers cannot provide full assurance that issues, such as bias, hallucinations, misinformation, or unsavoury or unlawful results, will never occur. Customers therefore must address the practices they expect suppliers to have undertaken, and continue to undertake, to prevent these risks.

Warranties could include that:

  • Training data used was sufficient, accurate and unbiased.
  • The supplier had the necessary rights, licences and permissions to use the training data.
  • Privacy by design and by default was incorporated into the development and operation of the product by the supplier.
  • The supplier carried out a DPIA and regularly updates it.
  • The supplier has carried out an ethical risk assessment.
  • The supplier has conducted, and will continue to conduct, sufficient testing to establish and monitor the product's accuracy, robustness and safety.
  • The product is capable of transparency, meaning that it can evidence the logic it used to arrive at a particular outcome.

Compliance with the law

Customers will clearly expect that the supplier has complied with the law in the development of the product and will continue to comply with the law in its operation.

Products can be put to a range of purposes, meaning many laws could be relevant to the outputs of the product and the effect it has on others. Simply requiring compliance with applicable laws at a high level is therefore unlikely to satisfy customers; they should consider setting out the specific laws relevant to their use case with which the product must comply.

If the supplier is located outside of the UK or EU, this complexity is likely to be magnified, as the laws relevant in the supplier's home jurisdiction may differ from those that apply in the customer's jurisdiction. The supplier's home laws may also have different standards around issues such as discrimination or copyright use. It will therefore be important for the customer to understand those discrepancies and to devise other ways to manage risk.

Regulators in the UK, at least, are likely to set expectations for businesses on how they demonstrate the legal use of their products. "Applicable Law" definitions should therefore include regulatory guidance and opinions.

Laws relevant to managing AI-related risks are highly likely to change, certainly in the UK, and contracts should anticipate this change. They should set out mechanisms for one party to notify the other, and provide clarity over which party will lead and pay for the changes that are necessary for compliance. If compliance is not achievable or not realistic in a commercially acceptable manner, then each party may require termination rights.

Governance and audit

Governance mechanisms will become increasingly important in contracts for AI products for numerous reasons:

  • With AI adapting and changing as it teaches itself, it needs constant monitoring to ensure that these learnings are valid, accurate and ethical, among other considerations.
  • The specific outputs from AI technology cannot be predicted and known before they are actually computed.
  • Advances are constantly being brought to market, and new threats realised, as technologies continue to develop and change rapidly.
  • The risks relating to the use of AI, such as unfairness, are subjective, meaning that both customers and suppliers will want to regularly test, review and discuss the product. This will ensure that the results that the AI produces are judged to be fair as it learns and technology advances.

Customers will therefore require a much more in-depth range and scope of governance for the product with the supplier, as well as reports on a range of risks such as security incidents, bias and discrimination, inaccurate performance, hallucinations or provision of misinformation, and outputs that infringe third party IP rights.

This is predicated on the supplier conducting testing on each of these risk areas or having a 'human in the loop' review, which involves integrating human feedback into the process to assess or monitor a product.
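As a purely illustrative sketch, a 'human in the loop' gate might look something like the following, where outputs below an assumed confidence threshold are routed to a human reviewer rather than released automatically. All names and the threshold are hypothetical assumptions, not taken from any particular supplier's product.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed value; would be agreed per use case

@dataclass
class ModelOutput:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def send_to_human_reviewer(output: ModelOutput) -> str:
    # Placeholder: a real system would queue the item for a reviewer and log
    # the escalation for the governance reporting discussed above.
    print(f"Escalated for human review (confidence {output.confidence:.2f})")
    return "<pending human review>"

def release_output(output: ModelOutput) -> str:
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return output.text  # released automatically
    return send_to_human_reviewer(output)  # low confidence: human decides
```

For example, release_output(ModelOutput("Draft summary...", 0.62)) would be held back for review, and the escalation record could feed into the reports the customer receives.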

Governance does not stop at reporting on issues; customers will also want to know how the supplier responds to and mitigates any risks that have arisen. They may also want to regulate for issues that occur as part of routine maintenance, but which will significantly impact the results produced by the product, such as the frequency of updates to the source data.

Liability

The supplier's cap on liability is likely to follow a standard SaaS approach – a percentage of the annual charges that the customer pays. Defining a breach of contract is the more significant issue, however.

Where software is designed to perform a simple task, the supplier will be liable if that software fails to perform the task in a complete, accurate and timely manner. However, this is not so straightforward with AI, and with GenAI in particular.

Suppliers are likely to contract to provide a product with certain functionality. A customer will have to determine whether it can define the result or output it wants to achieve from using the product, and link contractual obligations and charges to that result, so that the supplier will be in breach if it does not achieve the result, rather than merely offering a product with defined functionality.

Suppliers will likely make clear in their terms what they are not responsible for, such as inaccurate or infringing customer inputs, unlawful usage by customers and customer reliance on outputs.

Customer-specific training

The contract should make it clear which party is responsible for ensuring that data is properly structured and classified in a way that is compatible with the product.

If a customer has provided a supplier with top-up training data so that the supplier can carry out additional training on the product, refining it to produce better results for the customer's use case, the parties will need to agree how the customer will conduct acceptance testing on the trained product – that is, testing to evaluate whether the product meets requirements – and what the acceptance criteria will be. Those criteria, as with any acceptance testing, will be unique to the model and use case, and could focus on accuracy, comparison against a human expert or the number of anomalous results. The customer will want a right to terminate if the testing ultimately does not produce results that meet the criteria.
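By way of illustration only, acceptance criteria of this kind can often be expressed as an automated test. The sketch below checks a minimum accuracy threshold and a cap on anomalous outputs over an agreed set of test cases; the thresholds, the model interface and the exact-match comparison are all hypothetical assumptions, and the real criteria would be negotiated per use case.

```python
# Hypothetical acceptance test for a customer-trained model. The thresholds
# and the model interface are illustrative assumptions, not any particular
# supplier's API.
MIN_ACCURACY = 0.95      # assumed agreed minimum accuracy on the test set
MAX_ANOMALY_RATE = 0.01  # assumed agreed cap on anomalous/unusable outputs

def run_acceptance_test(model, test_cases: list[tuple[str, str]]) -> bool:
    """Return True if the trained model meets the agreed criteria."""
    correct = 0
    anomalies = 0
    for prompt, expected in test_cases:
        output = model.generate(prompt)  # assumed model interface
        if output is None or not output.strip():
            anomalies += 1  # empty or unusable output counts as anomalous
        elif output.strip() == expected:
            correct += 1
    accuracy = correct / len(test_cases)
    anomaly_rate = anomalies / len(test_cases)
    return accuracy >= MIN_ACCURACY and anomaly_rate <= MAX_ANOMALY_RATE
```

A failed run would then feed into remediation discussions or, ultimately, the termination right described above.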

Circuit breakers

Where a product forms part of an operational workflow and its output has immediate, physical or otherwise significant results that could put safety at risk in any way (for example, robots in a warehouse or software in a connected vehicle), the product should have a circuit breaker or kill switch: a way to immediately override the product and allow a human to take over.

This functionality must of course be present in the product itself, but the customer will also want contractual assurance that it exists, is regularly tested and will work if it is activated.
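For illustration, a minimal sketch of how a kill switch might sit around an autonomous workflow is set out below. The flag name and control loop are hypothetical; real implementations are product-specific, which is exactly why the customer will want the contractual assurances described above.

```python
import threading

# Hypothetical kill switch: a shared flag checked before every autonomous
# action, letting a human halt the product immediately and take over.
kill_switch = threading.Event()

def human_override() -> None:
    # In practice this might be wired to an operator console or a physical
    # button; here it simply trips the shared flag.
    kill_switch.set()

def run_workflow(actions) -> None:
    for action in actions:
        if kill_switch.is_set():
            print("Kill switch activated: halting and handing over to a human.")
            return
        action()  # each step runs only while the switch has not been tripped
```

Contractually, the point is less the mechanism itself than the assurance that it exists, is regularly tested and will work when activated.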

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
