Transactional Guide to AI

Editor's Note: This Overview discusses the emerging legal context of AI that is especially salient to counsel advising on and negotiating mergers and acquisitions and other commercial transactions, and drafting related documents.

See M&A, Checklist - Supplemental AI Due Diligence Checklist and Sample Clauses: Representations & Warranties - AI in Transactions following this Overview for companion documents.

Before the expected self-improving, self-perfecting and, ultimately, self-perpetuating AI eliminates lesser life forms, such as lawyers, we hasten to proffer a practical legal approach to AI for technology transactions and acquisitions. AI requires diligence queries and representations and warranties (R&Ws) that are customized and driven by the relevant legal and technological context.

AI is rapidly becoming ubiquitous, dominating the headlines and exponentially impacting technology, business, public policy, and law as few would have predicted. The US Patent and Trademark Office (USPTO) has requested public comments on ensuring the ongoing encouragement and incentivization of AI innovation. Its recent analysis found that 80,000 utility patent applications in 2020 involved AI, 150% more than in 2002. AI now appears in 18% of all utility patent applications and, remarkably, in more than 50% of all the technologies examined at the USPTO.

Much of what is now labeled AI is not necessarily AI. The general consensus is that, to qualify as AI today, a technology must exhibit some level of adaptable self-learning and improvement. The boundaries of AI will shift as technology evolves. Machine learning (ML) algorithms allow computers to learn by training on data inputs, including to recognize patterns and make predictions. ML uses artificial neural networks (ANNs), which mimic the neural networks in the human brain, to process data with limited or no human intervention. An ANN consists of three layers: input (the initial data), hidden (where the computation occurs), and output (the results generated from the inputs).
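For readers who want a concrete picture of the input-hidden-output structure described above, the following is a minimal sketch of a three-layer neural network in Python (using the NumPy library). The architecture, the toy training data (the classic XOR pattern-recognition problem), and all names are our own illustrative assumptions, not the internals of any AI platform discussed in this Overview.

    import numpy as np

    rng = np.random.default_rng(0)

    # Input layer: four training examples with two features each (the XOR problem).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized connection weights: input -> hidden and hidden -> output.
    W1 = rng.normal(size=(2, 4))
    W2 = rng.normal(size=(4, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # "Training on data inputs": repeatedly nudge the weights so the network's
    # predictions move closer to the known answers (backpropagation).
    for _ in range(20000):
        hidden = sigmoid(X @ W1)        # hidden layer: where the computation occurs
        output = sigmoid(hidden @ W2)   # output layer: the generated results

        d_output = (output - y) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
        W2 -= hidden.T @ d_output
        W1 -= X.T @ d_hidden

    print(output.round(2))  # typically approaches [[0], [1], [1], [0]]: the learned pattern

The point for transactional counsel is that the model's "knowledge" resides in the trained weights, which is why questions about the provenance and permissioning of training data bear directly on the value and risk profile of the resulting technology.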

Legal Context

ML AI technologies and platforms, such as ChatGPT and Midjourney, raise novel legal issues, especially concerning copyrights, patents, liability, and privacy. Governmental bodies in the EU and US are grappling with the momentous public policy implications of AI, contemplating groundbreaking responsive AI regulations and controls.

Intellectual Property

Generative ML technologies train on big data (e.g., text, audio, images, software code, and videos) scraped from the internet, frequently without permission from the copyright owners. As copyright owners allege infringement, defendants are expected to raise fair use defenses, arguing that the outputs are transformative uses that do not compete with the copyrighted works. Several lawsuits are ongoing. See, e.g., Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-CV-00135 (D. Del. Feb. 3, 2023); J. Doe 1 v. GitHub, Inc., No. 4:22-CV-06823 (N.D. Cal. Nov. 3, 2022); and Andersen v. Stability AI Ltd., No. 3:23-CV-00201 (N.D. Cal. Jan. 13, 2023).

In the US, copyright protection can be granted only to a human being, not a machine. Copyright protection is possible for machine-generated work, but it requires substantial human contribution. The US Copyright Office issued guidance on protection for Works Containing Material Generated by Artificial Intelligence. The key criterion for US copyright protection is whether the work is one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by a human but by a machine. Notably, human authorship is not required for copyright protection in many other countries, including the U.K., Ireland, Hong Kong, India, New Zealand, and South Africa, all of which may grant copyright protection for computer-generated work, thereby requiring careful, proactive legal counsel and setting the stage for forum shopping and a conflicts-of-law imbroglio.

Whether the AI technology company or the user owns the content or prompts input into the AI system to generate outputs is an equally pivotal and controversial issue. Significantly, contractual terms now assume a decisive role in identifying and mitigating AI legal risk and liability and in allocating ownership of AI inputs and outputs. The terms of use for many generative AI technologies to date, such as ChatGPT and Midjourney, elaborately protect the AI technology company with provisions disclaiming all warranties, liabilities, and indemnities.

Plaintiff copyright owners typically assert that AI outputs are derivative works of their content used as training data. In response, some defendants assert that plaintiffs have failed to identify specific copied works. Since AI technologies ingest colossal amounts of training data, plaintiffs may struggle to identify the allegedly infringed works. Notably, however, some AI platforms, such as ChatGPT, expressly assign and convey output ownership to the user, thereby dramatically increasing their commercial appeal. Interestingly, the EU's proposed AI Act indicates that the creators of generative AI technologies may need to disclose the copyrighted materials used to train their AI technologies. Such a disclosure requirement may give copyright holders an opportunity to receive a portion of a work's profits.

Liability for Harm

A major consideration, which will ultimately and dramatically impact the commercial use and adoption potential of AI, is who bears the liability for harm caused by AI products or services. In the recent Supreme Court case Twitter v. Taamneh, the unanimous decision favored defendants Twitter, Google, and Facebook, holding that the defendants' failure to remove ISIS content from their platforms was not the same as knowingly providing substantial assistance to ISIS under the Antiterrorism Act. Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023).

As a result, another terrorism liability case against Google based on Section 230 of the Communications Decency Act of 1996, which generally shields technology companies from liability for content published by users, was remanded by the Supreme Court, with the Court indicating that the plaintiffs' liability claims against Google were unlikely to survive. Gonzalez v. Google LLC, No. 21-1333, 2023 BL 169686 (U.S. May 18, 2023). However, the Court acknowledged the challenges of applying Section 230 to today's internet landscape of highly complex algorithms.

The extent of liability for generative AI platform outputs is yet to be determined. Cases will likely turn on the level of control, involvement, and direction of the AI company and on the actions and responsibility of the users.

AI-enabled medical devices also raise new questions about the applicability of traditional tort theories, such as negligence, product liability, and strict liability, and whether claims involving AI/ML-based medical software would be litigated under medical malpractice or products liability. We can expect product liability to apply to developers and manufacturers across the AI distribution chain, and medical malpractice and other negligence liabilities to apply to health care providers that deviate from their standard of patient care. The standard applicable to AI-enabled medical devices will have to be developed over time.

Privacy

Privacy, civil rights, and algorithmic bias are other major AI concerns. The White House published the Blueprint for an AI Bill of Rights to guide the design, use, and deployment of automated systems and to protect the public from algorithmic discrimination, abusive data practices that intrude on privacy, and inappropriate or irrelevant data use that can lead to harm.

The currently stalled federal American Data Privacy and Protection Act (ADPPA) provides rules on the use of personal data. If passed, it would require covered entities and service providers to conduct algorithm design evaluations and impact assessments describing the algorithm's design, process, purpose, foreseeable uses, data inputs, and generated outputs, as sketched below. While the ADPPA is on pause, many states are pursuing, and some have passed, legislation relating to AI use and algorithm discrimination.
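To make the assessment contents concrete, the following is a minimal sketch, in Python, of how a compliance team might record the fields the ADPPA enumerates. The class, the field names, and the sample entries are our own hypothetical illustrations, not a statutory form or official template.

    from dataclasses import dataclass, field

    # Hypothetical record tracking the ADPPA's enumerated assessment topics:
    # design, process, purpose, foreseeable uses, data inputs, and outputs.
    @dataclass
    class AlgorithmImpactAssessment:
        algorithm_name: str
        design: str                  # how the algorithm is designed and built
        process: str                 # how it operates in practice
        purpose: str                 # its stated purpose
        foreseeable_uses: list = field(default_factory=list)
        data_inputs: list = field(default_factory=list)   # categories of personal data ingested
        outputs: list = field(default_factory=list)       # what the algorithm generates

    # Illustrative (fictional) entry for a resume-screening model.
    assessment = AlgorithmImpactAssessment(
        algorithm_name="resume-screening-model",
        design="Gradient-boosted classifier over structured resume fields",
        process="Scores and ranks applicants for recruiter review",
        purpose="Prioritize applicants for interview scheduling",
        foreseeable_uses=["hiring", "internal promotion screening"],
        data_inputs=["employment history", "education", "skills keywords"],
        outputs=["suitability score", "ranked applicant list"],
    )
    print(assessment.algorithm_name, "-", assessment.purpose)

Counsel negotiating AI-related transactions may find it useful to confirm in diligence whether a target maintains records of this kind, since state laws with similar assessment requirements are advancing even while the ADPPA is stalled.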

The EU's proposed AI Act would also ban intrusive and discriminatory uses of AI, such as predictive policing systems based on profiling and biometric surveillance. Other jurisdictions, such as Brazil and China, are working on their own bills to regulate AI.


Originally published by Bloomberg Law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.