With average Artificial Intelligence (AI) deal size up by 48% in 2023 year-to-date, the generative AI buzz continues to draw investors' attention. In Q2 2023, four of the five largest funding rounds went to generative AI (GenAI) companies, and the AI sector saw seven companies reach $1bn+ valuations, with two out of every five new unicorns being AI companies. The global artificial intelligence market is projected to expand at a compound annual growth rate of 37.3% from 2023 to 2030, reaching $1,811.8 billion by 2030.

AI, a game-changer for the investment landscape, is predicted to contribute to 90% of internet content by 2026. While the proliferation of AI is a force multiplier that facilitates efficient processes, like any other innovation it is not immune from misuse and cannot be left unchecked. Investors must therefore conduct comprehensive technical diligence and mitigate the high-level legal risks outlined below to prevent them from blemishing a deal.

1. Accuracy, Reliability, Hallucination

Hallucination causes AI to perceive non-existent patterns or objects, generating fictional or inaccurate output. For example, Meta recalled its Galactica large language model demo due to inaccurate information.

On 22 June 2023, a New York court imposed sanctions of USD 5,000 on lawyers who submitted a ChatGPT-generated legal brief comprising six fictitious case citations, holding that while there is nothing inherently improper in lawyers using a reliable AI tool for assistance, lawyers' ethics rules "impose a gatekeeping role on attorneys to ensure accuracy of their filings." It is therefore imperative to review an AI tool's output.

AI-driven misinformation, deepfakes and malicious content call for global automated detection tools and watermarking of AI-generated content. In November 2023, India's Ministry of Electronics and Information Technology directed Meta and YouTube to take down AI-generated fake content, outlining penalties of INR 1 lakh and imprisonment of up to 3 years for deepfake/morphed videos. The UK imposes a fine of 10% of global turnover for misinformation/deepfake breaches.

Mr Anil Kapoor filed a case in the Delhi High Court concerning, inter alia, deepfakes that depicted him as the actress Katrina Kaif. In September 2023, the court held that, "the technological tools that are now freely available make it possible for any illegal and unauthorised user to use, produce or imitate any celebrity's persona, by using any tools including Artificial Intelligence. The Court cannot turn a blind eye to such misuse of a personality's name and other elements of his persona. Dilution, tarnishment, blurring are all actionable torts which would have to be protected against".

2. Data Privacy and Cyber Security

AI's reliance on big data raises critical data privacy and cybersecurity concerns, mandating clear guidelines on data collection, storage, processing and transfer, and strict compliance with data protection laws such as India's Digital Personal Data Protection Act, 2023, and the European Union's (EU) General Data Protection Regulation.

Highlighting AI's privacy concerns, an Illinois court held on 27 October 2023 that "when methods of intruding into private lives and stripping anonymity outpace lawmakers' ability to address them, courts have a duty to protect the victims."

3. Liability, Accountability and Agent-Principal Relationship

A contract requires a meeting of minds ("consensus ad idem") to attribute intention and liability, which is difficult to establish between two AI contracting entities. Consequently, contract laws globally need to be amended.

Jurisprudence on AI as an agent has evolved over time. In September 2021, a UK court required an inventor to be a natural person, while more recently courts have held AI software and platforms to be agents of the owner/licensor developing or deploying the AI. In August 2023, the Delhi High Court held Google liable as principal for a breach committed by its proprietary AI-driven AdWords software.

4. Workforce displacement / Continued Human Involvement

AI tools may replace low-value deals, preliminary base drafts, and content dependent solely on data, formulae and numerical information. However, numerous domains will continue to necessitate human involvement, from negotiations involving intangible human elements to the review of AI-generated content and high-value strategic transactions, which predominantly require workforce upskilling. With AI as a kinetic enabler of growth, the Government launched 'Future Skills PRIME', aiming to reskill/upskill IT manpower for employability in 10 new/emerging technologies, including AI, blockchain, robotics, big data and analytics, IoT, virtual reality, cybersecurity, cloud computing, 3D printing and Web 3.0.

5. Sustainability and Energy Efficiency

Since AI tools are among the heaviest carbon emitters, with GPT-3 consuming 13 times more power than a car's lifetime usage, AI investors should evaluate ecologically viable architectures: using optimised systems, preferring cloud computing over on-premises infrastructure, and mapping energy consumption to clean locations.

Tools like Salesforce's Net Zero Cloud and Microsoft's Cloud for Sustainability can help assess areas for improvement. Geographically, data storage can be moved to carbon-friendly regions such as Canada, where data centres operate on hydroelectricity. Google's '4M' policy can aid in a significant reduction of energy usage (by 100x) and carbon emissions (by 1000x).

6. Fair Use

AI is built upon computer programs/algorithms developed using pre-existing data. In copyright infringement cases, courts assess fair use in terms of the quantity and substantiality of the copied content. On 27 December 2023, The New York Times filed a copyright infringement suit against Microsoft, alleging that Microsoft's generative AI tools rely on large language models built by copying and using The Times's copyrighted news articles, investigations and opinion pieces. A Delaware court held on 25 September 2023 that even a small amount of copying may fall outside fair use if the copied excerpt constitutes the heart of the original creative expression.

7. Ethics / Bias

Addressing bias is crucial, as underlying data sources may inadvertently contain implicit biases or be based upon wrongful surveillance, requiring curative ethical rectification.

8. Global Regulatory Compliances

In 2022, 37 out of 127 countries enacted AI-related laws. The EU is emerging as the AI regulatory leader (with its AI Liability Directive and draft AI Act), imposing penalties of the higher of EUR 40,000,000 or 7% of a company's total global annual turnover for the preceding financial year. The EU and Brazil provide robust risk-based regulatory mechanisms. Though the US is at the forefront with the passage of nine AI-related laws in 2022, the US, Japan and Israel favour self-regulation and a soft AI approach lacking sufficient teeth, whereas the UK is pro-innovation. India lacks a comprehensive AI law, with a few sector-specific regulations introduced by the Securities and Exchange Board of India, the National Health Mission, etc.

In view of these AI risks, there is an urgent need for a robust global AI regulatory framework.

Way ahead

Following the Global Partnership on Artificial Intelligence (GPAI) Summit hosted in India in December 2023, the Law Commission of India, in conjunction with the Ministry of Electronics and Information Technology, is convening a national conference on AI on 24 February 2024. The world is at an inflection point, on the cusp of a technological revolution, and AI is potentially revolutionising transactions. Investors embracing AI will steer ahead of the competition in an increasingly technology-driven landscape.

AI technology deal value increased from USD 4.6bn to USD 12.7bn between Q1 2022 and Q1 2023, and there is sufficient dry powder in undeployed capital to accelerate this trend. However, given the associated risks, sufficient checks and balances are required to fully harness AI's transformative capabilities.

The content of this document does not necessarily reflect the views/position of Khaitan & Co but remains solely that of the author(s). For any further queries or follow-up, please contact Khaitan & Co at legalalerts@khaitanco.com