"Subscribe", a ubiquitous term online is a gateway to how we consume technology today. In the Software-as-a-Service (SaaS) world, a single click grants clients access to powerful tools hosted on the cloud in a cost-effective manner, all the while ensuring that the SaaS provider retains ownership over the underlying software solution. Traditionally, SaaS offerings included services like DropBox, Microsoft Office or even customer relationship management systems. However, with the surge of Artificial Intelligence (AI), SaaS is being reshaped at its core, while also raising new questions about ownership, accountability, and regulation.
Globally, governments have adopted varied approaches to regulating AI and its applications. In addition to existing laws on data protection, consumer protection and contracts, many international agencies and national governments have adopted legislation and model frameworks specifically designed to address the issues raised by the use of AI. For example:
- The EU Artificial Intelligence Act (Regulation (EU) 2024/1689): This EU regulation classifies AI systems by risk level, prohibits certain AI use cases, and imposes strict obligations on high-risk applications, with extraterritorial reach for providers outside the EU.
- California SB 53 - The Transparency in Frontier Artificial Intelligence Act (TFAIA), State of California, United States: The first US law dedicated to transparency, accountability, and incident reporting for advanced AI systems. The TFAIA requires a large frontier developer to write, implement, and clearly and conspicuously publish on its website a frontier AI framework that applies to the developer's frontier models.
- 'Measures for Identifying Artificial Intelligence-Generated Synthetic Content', China: Effective 1 September 2025, these measures apply to all network information service providers that engage in producing, synthesizing, or distributing AI-generated synthetic content, meaning text, images, audio, video, virtual scenes and other information generated or synthesized using AI technology. The measures introduce a mandatory labelling requirement for content created using AI (a minimal labelling sketch follows this list).
- Singapore's Model AI Governance Framework, Japan's human-centered AI guidelines, and Australia's model contract clauses for AI solutions show a mix of voluntary standards and emerging statutory obligations.
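By way of illustration only, the short Python sketch below shows one way a provider might attach an explicit label and basic provenance metadata to AI-generated content before it reaches end users. The class name, field names and label wording are assumptions made for this example; they are not drawn from the text of any of the regimes listed above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LabelledOutput:
    # Wraps a piece of AI-generated content with an explicit label and
    # basic provenance metadata (hypothetical field names).
    content: str
    ai_generated: bool = True
    model_name: str = "unspecified-model"  # placeholder, not a real model name
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def with_visible_label(self) -> str:
        # Prepend a human-readable notice for display to end users.
        notice = "[AI-generated content]"
        return f"{notice}\n{self.content}" if self.ai_generated else self.content


if __name__ == "__main__":
    out = LabelledOutput(content="Quarterly summary drafted by the assistant.")
    print(out.with_visible_label())
```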
The UAE has shown a keen interest in promoting the use of AI across various business segments. In 2017, under the 'UAE Centennial 2071' vision, the UAE Strategy for Artificial Intelligence 2031 was launched. In December 2022, the UAE AI Ethics Principles & Guidelines (AI Guidelines) were adopted, a non-binding framework that provides guidance on responsible AI use across the government and private sectors. Through the AI Guidelines, the UAE government emphasizes the principles of fairness, accountability, transparency, human oversight, privacy, safety, inclusiveness, and sustainability in the use of AI.
Against this backdrop of diverse regulatory approaches, it is evident that while AI offers immense potential, it also introduces unprecedented legal complexity, particularly when integrated into SaaS offerings. Considering the interplay between AI and SaaS and the associated legal nuances, this article analyzes the key modifications or additions that may be considered for SaaS agreements when offering AI-driven SaaS solutions.
Intellectual Property Rights
In a traditional SaaS offering, the provider typically owns the proprietary application software, including all modules, functions, features and derivatives, and the client is granted a license to access and use that software. At best, the client may be granted the proprietary rights in any deliverables developed exclusively for it. This conventional allocation of rights may be disturbed when the SaaS provider integrates artificial intelligence (AI) into its offering. If a SaaS provider relies on publicly available AI tools for its SaaS products, such usage could result in the violation of third-party intellectual property rights. In addition, AI solutions are trained on vast data sets, and any violation of third-party IP rights in collecting or using those data sets can undermine the SaaS provider's intellectual property rights over its own SaaS solution.
With the inclusion of AI in the SaaS offering, outputs are often mechanical in nature, raising copyright ownership concerns due to the lack of human intervention. Use of AI may result in works or products being challenged as not 'original' or not 'novel', negating any claim to copyright or patent protection respectively.
In the UAE, any creative work in the field of science, including smart applications, software and software applications, qualifies as a 'Work' under Federal Decree-Law No. 38 of 2021 on Copyright and Neighboring Rights (UAE Copyright Law). The author of such a Work holds the initial copyright and may, through assignment agreements, assign that copyright to other persons; despite any such assignment, the author always retains the 'moral rights' in the Work. It is material to highlight that, in line with international practice, copyright protection arises automatically upon creation of the Work, and registration merely strengthens the case for legal ownership of the Work.
Given the legal complexities surrounding artificial intelligence (AI), it is important to carefully consider how intellectual property (IP) rights and AI usage are addressed in SaaS contracts. In addition to standard IP representations and indemnifications, parties should consider the inclusion of specific representations and disclosures surrounding use of publicly available AI tools. Clauses that define ownership of IP rights over outputs generated using AI are particularly material, as some level of human involvement may be necessary to avoid challenges in securing, perfecting and enforcing those rights.
Transparency and Human Oversight
AI in SaaS may automate decisions and provide predictions and recommendations, producing outputs that are not fully predictable by a human user. These outputs may contain errors, reflect algorithmic bias arising from the data sets used for training, or otherwise behave unpredictably. To counter this, regulators across the globe now require increased transparency and human oversight in AI offerings. Through the AI Guidelines, the UAE government has likewise required that the use of AI align with the principles of fairness, accountability, transparency, and human oversight. The UAE Charter for the Development and Use of Artificial Intelligence also emphasizes the irreplaceable value of human judgment and human oversight over AI, aligned with ethical values and social standards, to correct any errors or biases that may arise through the use of AI.
Furthermore, regulators in certain specialized sectors have imposed additional requirements on how AI solutions may be used in providing services to customers and end-users. For example, the UAE's Securities and Commodities Authority (SCA) recently implemented amendments specifically catering to 'mathematical programs driven by artificial intelligence' that may be used by licensed financial institutions to provide financial advisory or asset management services. Through these amendments, the SCA has mandated that such entities appoint officers specialized in monitoring the platform's algorithmic behavior to ensure that end-users are not 'led to make biased, random, or ill-considered investment decisions as a result of hidden biases or illusions of certainty'. Additionally, such systems must be regularly audited to ensure that these concerns do not arise.
In addition, high-risk AI systems, especially those deployed in the healthcare and fintech segments, would need to undergo frequent and continuous conformity assessments and internal audits. Serious incidents or malfunctions may need to be reported to regulators, along with reports on AI performance, risk, and human oversight. These requirements impose onerous record-keeping obligations on AI providers, including technical documentation such as datasets, model parameters, testing logs, audit logs and risk management documentation. While the UAE does not have a unified AI law, sector-specific rules (such as the SCA amendments on robo-advisory) and other policy positions and guidelines, including the AI Guidelines, the DIFC AI guidelines and the ADGM AI ethics principles, encourage governance, audit and monitoring systems to ensure AI fairness, performance and accuracy.
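As a purely illustrative sketch of the kind of record-keeping described above, the snippet below shows one possible shape for a per-decision audit record written to an append-only log. The field names, the hashing of inputs and the log format are assumptions for the example, not terms mandated by the SCA, the EU AI Act or any other regulator.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_audit_record(model_version: str, input_payload: dict,
                       output_payload: dict, reviewer: str | None = None) -> dict:
    # One possible shape for a per-decision record: what went in (hashed to
    # avoid storing raw personal data), what came out, which model version
    # produced it, and whether a human reviewed it.
    serialized_input = json.dumps(input_payload, sort_keys=True)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(serialized_input.encode()).hexdigest(),
        "output": output_payload,
        "human_reviewer": reviewer,  # None if no human review took place
    }


def append_to_log(record: dict, path: str = "ai_audit_log.jsonl") -> None:
    # Append-only JSON Lines log that can later be produced for an internal
    # audit or a regulator's request.
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
```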
Therefore, when negotiating SaaS contracts for such platforms in high-risk use cases, the parties should discuss and agree on disclosure, audit, quality assurance and record-keeping parameters, especially those relating to the SaaS platform's performance and accuracy. Care should be taken to include provisions that set minimum levels of human oversight and supervision and allocate each party's responsibilities in this respect.
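The minimal sketch below illustrates how such a human-oversight commitment might operate in practice: outputs above an agreed risk threshold are held for human approval before release. The threshold value, function names and review mechanism are hypothetical and would, in practice, be defined by the parties' agreement.

```python
RISK_THRESHOLD = 0.7  # hypothetical contractual threshold triggering human review


def release_output(ai_output: str, risk_score: float, human_approver=None):
    # Returns a (decision, output) pair: low-risk outputs are released
    # automatically, higher-risk outputs are released only after a human
    # approver signs off, and are otherwise withheld.
    if risk_score < RISK_THRESHOLD:
        return "auto", ai_output
    if human_approver is not None and human_approver(ai_output):
        return "approved", ai_output
    return "withheld", ""


if __name__ == "__main__":
    # Example: simulate a human reviewer approving a high-risk recommendation.
    decision, text = release_output("Recommended portfolio rebalance", 0.85,
                                    human_approver=lambda output: True)
    print(decision, text)
```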
Data Protection
The UAE has enacted Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (PDPL), which governs how personal data is processed in the UAE (covering a host of activities such as collection, storage, and transfer) and directly impacts SaaS platforms and AI systems handling personal data. When SaaS integrates AI capabilities (analytics, predictive modeling, generative AI, biometric recognition, etc.), data protection considerations become more pronounced: AI often requires large datasets that may involve processing personal data or sensitive personal data (e.g. biometric or health-related data), and may raise issues of automated decision-making, profiling, explainability and fairness that traditional SaaS does not.
In the UAE, compliance with the PDPL is mandatory for SaaS platforms deploying AI, and it applies even to players not based in the UAE that serve users and customers in the UAE and process their personal data. The PDPL regulates the handling of personal data, with a focus on automated decision-making and profiling; it mandates transparency, data minimization, and the right to object to automated processing, and it restricts international data transfers. The PDPL also imposes retention limitations, and SaaS agreements should therefore expressly provide for data retention periods and secure deletion upon expiry of those periods.
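To illustrate the retention point, the sketch below separates records that have exceeded an assumed contractual retention period so that they can be routed to the provider's secure-deletion process. The 365-day period and the field names are assumptions for the example and are not requirements of the PDPL itself.

```python
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=365)  # assumed contractual period, for illustration only


def is_due_for_deletion(collected_at: datetime, now: datetime | None = None) -> bool:
    # A record becomes due for secure deletion once the retention period has lapsed.
    now = now or datetime.now(timezone.utc)
    return now - collected_at > RETENTION_PERIOD


def split_for_deletion(records: list[dict]) -> tuple[list[dict], list[dict]]:
    # Separates records to keep from records that should be routed to the
    # provider's secure-deletion process.
    keep, delete = [], []
    for record in records:
        (delete if is_due_for_deletion(record["collected_at"]) else keep).append(record)
    return keep, delete
```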
In healthcare, SaaS agreements may trigger industry-specific laws, such as the Federal Decree Law No. (2) of 2019 Concerning the Use of the Information and Communications Technology in Health Fields or the Federal Decree-Law No. (34) of 2021 On Countering Rumors and Cybercrimes. If AI models involve automated processing that could adversely affect users, the end users must be provided with the ability to object to such processing.
SaaS providers, especially those incorporating AI into their offerings, must take care to align their contracts with the PDPL and maintain a robust, publicly disclosed privacy notice to ensure compliance. Failure to abide by applicable law may give rise to contractual breaches and claims, reporting to authorities, and ultimately legal claims and heavy penalties.
Liabilities and Indemnity
The importance of clauses on Limitation of Liability and Indemnity in SaaS agreements cannot be overemphasized. A limitation of liability clause restricts a provider's liability in cases of default, such as server downtime, AI or human errors, or data breaches. While zero liability may not be enforceable, a well-drafted clause can limit the quantum of liability or types of damages payable if the provider is at fault. This is particularly critical in SaaS and AI contracts, where outcomes can be unpredictable and data and IP risks are high.
In the UAE, liability provisions are governed by the general contracting principles of Federal Law No. 5 of 1985 (the UAE Civil Code). Under the UAE Civil Code, parties may agree on a quantum of damages in advance, but courts may, at a party's request, adjust such agreements to avoid unfair prejudice, and an agreement deemed unjust may be voided. Liability limitation clauses should therefore be balanced; they typically cap liability at the amount paid by the customer, shorten the liability period, and exclude third-party claims or unauthorized use.

In contrast, an indemnity clause protects against another party's risks. Put simply, indemnity means one party agreeing to "make good" the loss suffered by the other due to certain events or actions, regardless of fault. Indemnity is typically offered against third-party claims, data breach losses, regulatory fines, or negligence or misconduct; in SaaS contracts, it is often focused on protection against third-party IP claims arising from the use of AI-generated content or software. Like the limitation of liability clause, indemnity provisions are governed by the UAE Civil Code and operate on the same principle: the parties may agree on compensation in advance, provided it is limited to the parties to the contract, failing which it may be reassessed and voided by the competent courts.
Generally, in a SaaS contract, indemnities flow from the SaaS provider to the client, but it is equally important to include indemnities from the client to the SaaS provider, especially where client-controlled inputs or conduct can expose the provider to liability. In particular, client inputs (e.g., uploaded content or datasets) may expose the provider to risks such as IP infringement, data protection violations, defamation, or breaches of export controls. The PDPL and the GDPR both hold SaaS providers liable only when acting outside the client's lawful instructions, reinforcing the need for mutual indemnity clauses.
It is imperative to note that, under some international legislation, certain AI use cases may not be permitted at all, regardless of the indemnities and warranties offered by the SaaS provider. For example, the EU AI Act strictly prohibits AI systems that exploit the vulnerabilities of specific groups (e.g. due to age, disability, or social or economic situation) to distort their behavior and cause harm. The UAE does not have a consolidated framework with such restrictions, but one may refer to the PDPL, under which unlawful processing and profiling are restricted, or to the UAE AI Strategy 2031 and the DIFC/ADGM AI principles, which discourage bias, opaque decision-making, and manipulation. Global bodies have taken similar positions: the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) calls for prohibiting the use of AI for mass surveillance, discrimination, or social scoring, and the OECD AI Principles (2019) reject AI that undermines human rights or safety. Therefore, when using SaaS solutions, both the SaaS provider and the client must assess the possible use cases, the manner of use, and the impact of the solution. In this respect, the 'fitness for purpose' of a solution and other warranty disclaimers become critical points of discussion.
Conclusion
SaaS agreements have long focused on issues of service delivery, liability and indemnity, performance and intellectual property. The introduction of AI fundamentally shifts the risk landscape and warrants revisiting many provisions of the SaaS agreement. An AI-enabled SaaS agreement must extend its safety net to cover the integrity, fairness, and legality of the decision-making itself. Without these added layers, businesses risk contractual gaps, regulatory exposure, and reputational harm in an increasingly AI-regulated world.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.