Navigating The Differences Between AI Agreements And More Traditional IT Agreements

HLK

Contributor

HLK is a global cooperation combining Haseltine Lake Kempner LLP and HL Kempner Partnerschaft mbB and provides a full suite of IP services advising across the entire IPR Lifespan™ in all technical and scientific disciplines. With offices in London, Bristol, Munich, Leeds, Glasgow, and Guangzhou (China), HLK provides IP services across the globe. HLK’s resources and expertise are exclusively dedicated to IP protection: safeguarding the inventions, creative designs, brand identities and other innovations of its clients. HLK advises on the strategy, identification, protection, opposition and appeal, exploitation and enforcement of IP rights, and defends its clients from allegations of infringement by focusing on acquiring competitive advantage for its clients. HLK is privileged to work with some of the most exciting and forward-looking businesses in the world which are at the forefront of innovation and product development in their various spheres.

IT agreements are used to set out certain expectations in relation to delivery of IT services and products. The purpose of such an agreement is to provide clarity and certainty, thereby fostering an effective and positive business relationship.

So, what are the differences between the terms of AI agreements and those of traditional IT agreements?

Louise Perkin and Sophie Harrison delve into the answer, which is not straightforward: much depends on the specific AI and IT tools involved and on the point in time at which the agreement is negotiated (legislative changes, industry preferences, and so on).

At a high level, AI and IT agreements are very similar in that they require many of the same provisions – for example: specifications, performance warranties, indemnities, and protections for confidential information, personal data and intellectual property (IP).

The differences between AI and IT agreements arise when considering the detail of many of these provisions. Because of AI's learning and generative nature, and the novelty of that nature (particularly where the law and ethics are concerned), achieving certainty and clarity in these provisions requires not just more detailed drafting but, in most cases, a fresh approach to assessing and mitigating legal risk.

While each agreement will turn on the specific facts and parties in that matter, some key general differences are as follows.

Ownership and protection of personal data and confidential information

In most IT tools, data sets can be tracked through the tool, allowing ownership of those data sets to be readily identified and protections / controls to be implemented effectively. This is not so easy with AI tools. The way in which AI tools learn from, and reproduce, data makes it much harder to track data sets from input to output, and it gives rise to a risk of personal data or sensitive information accidentally being shared with third parties. Therefore, AI agreements need much more tailored, detailed and complex provisions on ownership, controls / protections, warranties and indemnities. For example, a party may specify that the AI tool must be capable of unlearning particular data, or warranties may be required to ensure that there is no leakage of training data.

Intellectual property (IP) rights

AI poses a number of challenges for IP rights in agreements.

  1. IP rights of the contracting parties
    Tracing inputs and developments in non-AI software is well established, so agreeing ownership and IP rights is generally not a problem. AI's learning and generative nature makes this much trickier. For example, end users of a licensed AI tool can unintentionally enhance it by inputting data as part of their daily use, but identifying which features of those enhancements belong to that specific end user and which belong to another end user or the licensor (and the extent to which those enhancements can be shared with others) is not straightforward. One also needs to consider the implications of ownership, and the parties' priorities, for the other provisions – such as the impact ownership may have on the extent / validity of warranties and on rights over data inputs.
  2. IP rights of third parties
    IP law has not kept pace with AI developments. In particular, copyright and the use of data for training AI is an unsettled area of law. AI agreements need to protect parties from third party copyright infringement claims while still being commercially appealing. The mechanism (warranties, indemnities, audits) employed in AI agreements to achieve this will differ depending on the parties' risk appetites, financial circumstances and the efficacy of the licensor's existing processes for identifying and respecting third party rights, as well as the changing legal landscape (see below).
  3. Ownership of copyright in AI outputs
    Protection of copyright is standard in IT agreements, but when it comes to AI agreements the issues to be considered and negotiated are more complex. While in the UK it is generally accepted that AI output falls within the category of "computer-generated work" under the Copyright, Designs and Patents Act 1988, determining whether the AI output meets the criteria for copyright protection and, if it does, who owns that copyright are legal grey areas. This has numerous implications for developers and users of AI – not least for their rights to use the output, royalties, liability exposure, and indemnities. In the absence of imminent legislative clarity, establishing contractual certainty on these issues via thorough and carefully drafted AI agreements will be key to fruitful business relationships.
    Overall, provisions governing IP rights in AI agreements will (at least for now) differ from those in IT agreements in the following ways: they will be more complex and nuanced; they will require more detailed pre-agreement risk assessments and negotiations; and they will have a greater influence on the terms of the agreement as a whole.

Performance warranties

Typically, in IT agreements, warranties relate to the agreed tool specifications. Precise specifications are not as reliable in AI agreements because of AI's learning and generative (and, at times, unpredictable) nature. Warranties therefore need to be carefully considered based on each user's priorities and needs, and effective benchmarks / metrics need to be identified and agreed. For example, a user may wish to have accuracy metrics and controls, agreed audits, or simply (as in some IT agreements) a warranted period of tool availability.

Legislative uncertainty and ethical awareness

As briefly touched upon already, IP law has not kept pace with AI developments and is consequently going through a period of change. Additionally, there is growing awareness of the ethical considerations of using AI in certain situations, with a consequent growth in frameworks classifying AI according to risk and responsibility. Unsettled law and emerging frameworks not only require careful drafting of those terms that may be affected by these changes, but also require agreements to be kept under continuous review so that they remain aligned with any changes in the legal and ethical landscape.

