In this quarterly update, we review the latest developments in three areas salient to corporate use of artificial intelligence (AI). First, we discuss the risks associated with AI, the case for board oversight and how the board can oversee management's implementation of AI. Second, we review recent trends in AI-related intellectual property (IP) litigation. Finally, we provide an overview of three state laws that broadly regulate AI and of emerging topics in potential AI legislation.
AI Oversight for Boards of Directors
Why Is AI an Issue for the Board?
Boards are responsible for overseeing a company's overall affairs and risk strategy. When a company uses a new technology as impactful as AI, the board has a responsibility to ensure those uses are in line with the company's objectives and that the inherent risks are managed.
Boards are also responsible for overseeing a company's compliance with its legal and ethical obligations. At least three states have passed legislation regulating AI use in the private sector, and at least 35 AI bills were introduced in state legislatures in 2024. Several jurisdictions around the world have also passed AI legislation, including new laws in China, Canada and the European Union. And the U.S. recently signed a treaty with the EU, the U.K., Canada, Australia, Israel, Japan and several other countries to enshrine individual rights against harmful AI use, while tasking participating countries with addressing AI-related risks in the private sector.
Under existing U.S. laws and regulations, both the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) have warned companies against using AI improperly or misleading the public about such use. In March 2024, the SEC settled charges against two investment advisors for misrepresentations regarding their AI use. In June 2024, the SEC settled charges against an AI recruitment startup for false claims regarding AI, which the SEC said constituted "an old school fraud using new school buzzwords like 'artificial intelligence' and 'automation.'"
A public company's disclosures regarding AI must be accurate and fairly describe the risks. In June 2024, the SEC's director of the Division of Corporation Finance highlighted possible AI disclosure requirements in a company's 10-K filing, including in the sections covering the description of business, risk factors, management's discussion and analysis, financial statements, and the board's role in risk oversight. SEC Chair Gary Gensler has repeatedly warned companies against "AI washing," most recently in an "Office Hours" video the SEC posted on Sept. 9, 2024, in which Chair Gensler reinforced the need for accuracy whenever a company discusses its use of AI, stating, "Investment advisors, broker-dealers also should not mislead the public by saying they're using AI when they're not, nor say that they're using it in a particular way and not do so."
Securities class actions have also been filed against companies over their AI disclosures, including against lending marketplace Upstart, code development platform GitLab and UiPath, a robotic process automation company that offers tools to automate repetitive business tasks. Plaintiffs in each of these cases allege that the defendants made materially false or misleading statements regarding their use of AI, in violation of Sections 10(b) and 20(a) of the Securities Exchange Act, causing plaintiffs to purchase the defendants' shares at inflated prices.
Ignoring the risks of AI could expose the board to potential liability. Failing to oversee "mission critical" company risks has led to derivative claims for breach of the board's duty of loyalty under the Caremark doctrine. For example, Delaware courts have recently examined Caremark claims filed against boards for their alleged failure to adequately oversee cybersecurity risks. Although Caremark liability has to date been limited to cases where the board failed to oversee risks that resulted in a violation of law, the proliferation of new AI laws and regulations, along with the inherent risks that come with AI use, underscores the potential for a lack of AI oversight to form the basis for a Caremark claim.
AI Risks
Board oversight of "mission critical" risks regarding AI is complicated by the breadth and depth of its potential uses inside the company, which may span the finance, legal, product development, marketing, compliance, operations and supply-chain departments. Customer confidence and the company's reputation may also be on the line. Moreover, while the board should stay informed and exercise oversight of key risks, it is up to management to implement the company's strategies and maintain operational authority over its business. That said, AI-related risks that boards should consider may include:
- Bias and Discrimination: Preventing bias and discrimination has emerged as a top concern for legislators seeking to regulate AI (see below). Human review or audits of AI systems are required by most new AI laws, especially when AI is used to provide "essential goods and services" such as housing, health care, employment, insurance, credit or financial services, legal services, and government programs or assistance.
- Transparency: After anti-bias measures, transparency appears to be the second-biggest concern for AI regulators. Most new and proposed AI laws in the U.S. require companies to inform the public if the content they are interacting with is AI-generated. The European Union's AI Act also requires transparency and labeling for AI uses that are not deemed high-risk, and all high-risk uses in the EU are subject to transparency requirements and stringent anti-bias controls.
- Responsibility and Accountability: AI-generated content may not always be current or accurate. Companies should ensure that they have processes in place to verify the accuracy and appropriateness of AI outputs. Failure to do so may lead to inefficiencies or erode public trust. Further, most emerging AI legislation will require some form of recordkeeping and accountability measures.
- Confidentiality: There should be no expectation of confidentiality for any AI inputs. Using AI with confidential information may result in a range of consequences, including violations of confidentiality agreements, loss of protection for company secrets, loss in revenue, damage to the company's image and reputation, and adverse legal action or penalties.
- Data Privacy: Many existing privacy laws restrict automated analysis of personal data to make decisions that have significant effects on consumers. Privacy laws require companies to protect personal data from exposure and to disclose the sources of data used, the purposes for such use and how personal data is shared. Companies that use AI on personal data should implement policies to ensure compliance with privacy laws.
- Intellectual Property and Terms of Use: Using AI to generate company materials may raise questions over the ownership of such materials or may infringe copyright and other intellectual property rights of third parties, as further discussed below.
Board members can help harness the benefits of AI while mitigating these risks by ensuring that the company has effective safeguards to prevent AI misuse.
What Can Board Members Do?
There are several steps that boards should consider to ensure safe and effective AI deployment:
- Understand AI: Board members should gain a basic understanding of what AI is, how it works and what benefits it might provide the company. Boards should consider recruiting members with knowledge of AI who can work with management to identify opportunities and minimize the associated risks.
- Shape AI Strategy: The board should help shape the company's strategy for AI use and ensure that it fits within the company's overall mission, objectives and ethical standards. The board should familiarize itself with AI opportunities and receive regular reports from management on potential AI uses and benefits.
- Set Risk Tolerance: Once the board understands how AI works and how it may benefit the company, it should help set the company's risk tolerance for AI to ensure that any potential benefits are worth the related risks.
- Track Regulatory Trends: The board should monitor AI-specific regulations and existing laws that affect AI use, such as privacy and intellectual property laws and FTC or SEC guidance, as well as legislative trends that may shape future AI laws. Existing and future laws should shape and direct the company's AI deployment and governance strategy.
- Establish AI Guardrails: The board should review the company's AI policies and procedures to ensure they align with the organization's risk tolerance. The board should review high-level policies periodically and, in particular, whenever the company deploys a new AI tool or makes substantive changes to its AI use. The board should also consider whether the company has devoted sufficient resources to managing AI use.
- Update Business Continuity Plans: The board should integrate AI considerations into the company's existing data management, incident response and business continuity plans and oversee recovery strategies for any security incident involving AI.
- Implement Periodic Updates: The board should institute ongoing oversight by including AI as a topic in its regular meeting agendas or receive periodic updates from management on the company's AI use. These updates may include reports of internal audits or testing to ensure that the company's AI use is safe and effective, and may also include reports from subject-matter experts or input from different departments, including IT, HR, Legal, Risk, Privacy and Compliance, as needed.
Resources are available to help boards accomplish these tasks. The White House Office of Science and Technology Policy issued an AI Bill of Rights that expresses five principles for safe and effective AI use and explains related risks. The National Institute of Standards and Technology (NIST) released an AI Risk Management Framework that is intended to help companies across sectors in "designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems." And the International Organization for Standardization (ISO) has published a number of AI-governance materials, from a basic primer on how AI works to a technical manual on how to ensure responsible AI governance and deployment (ISO/IEC 42001).
For most companies, it is not too early for the board to consider AI opportunities and how to oversee management's implementation of AI in the organization.
IP Litigation Regarding AI
Copyright
Copyright issues have emerged as the leading grounds for IP disputes over AI. While most of these cases allege infringement, at least one case sheds light on authorship issues of AI-generated works (as well as patent inventorship issues).
Authorship
In 2022, Dr. Stephen Thaler filed a complaint against the U.S. Copyright Office (USCO) challenging its denial of his application to register an artwork that he claimed was authored by an algorithm he built and owned, arguing that the copyright should transfer to him as a work-for-hire. The District Court for the District of Columbia granted summary judgment against Thaler, finding that "human authorship is an essential part of a valid copyright claim" based on "centuries of settled understanding" and the U.S. Constitution. Thaler appealed in October 2023, and oral arguments took place this month in the Court of Appeals for the D.C. Circuit.
During oral arguments, Thaler's counsel argued that an 1884 U.S. Supreme Court case concerning a portrait of Oscar Wilde, on which the USCO relied, instead stood for the principle that the Copyright Act should be read expansively to accommodate technological advancements. Judges on the panel questioned why Thaler did not list himself as the author on the registration form and suggested that he may have waived his work-for-hire arguments. Several professors acting as amici in the case argued that failing to extend copyright protection to AI-generated works may stifle innovation and force creative companies to go overseas to protect their works.
Likely in response to Thaler's complaint, in March 2023, the USCO issued guidance to "clarify its practices for examining and registering works that contain" AI-generated material. This guidance states that the USCO may grant registrations for works created in part with AI tools — where a human is listed as an author — but it will not grant registration to works that are wholly generated by AI. It is up to the applicant to disclose the use of generative AI in the work and explain "the human author's contributions."
Infringement Actions by Authors, Artists and Publishers
A number of infringement actions have been filed by authors and artists against AI developers, claiming that the AI platforms were improperly trained on copyrighted works to produce outputs "in the style" of the plaintiffs. In an early infringement action, Andersen v. Stability AI Ltd., the district court dismissed theories of vicarious and induced infringement related to AI-generated outputs, but left intact the direct infringement theories based on the unlicensed copying of "millions" of images to train the AI models.
Later cases have similarly been trimmed to their direct infringement theories based on unlicensed copying by AI. For example, in a consolidated action against OpenAI that includes the headline-making case by comedian Sarah Silverman, plaintiffs dropped their vicarious infringement theories in favor of a single count for direct infringement. The amended complaint in In Re OpenAI ChatGPT Litigation now alleges that defendants "copied and ingested" massive amounts of text from copyrighted literary works to train ChatGPT. Similar cases were filed in the Northern District of California in April and May 2024 against companies such as Google, Databricks and Nvidia; each alleges direct infringement based on literary or visual works copied and used to train AI.
Publishers and media companies have also begun filing copyright claims against AI developers, but they have not limited their theories to direct infringement based on copied training material. This may be because the publishers filed these cases outside the Northern District of California, where the Andersen court dismissed induced and vicarious infringement theories. For example, The New York Times filed a complaint against OpenAI and Microsoft in the Southern District of New York in December 2023, alleging direct, vicarious and contributory infringement based on unlicensed copying of New York Times publications to train ChatGPT. Getty Images filed a second amended complaint against Stability AI in Delaware in July 2024, alleging "brazen infringement" on a "staggering scale" based on the copying of "more than 12 million photographs from Getty Images' collection, along with the associated captions and metadata" for the purpose of creating a competing business. And in Concord Music Group, Inc. v. Anthropic PBC, a group of music publishers filed suit in the Middle District of Tennessee in October 2023, alleging that the defendant copied musical works and lyrics to train its chatbot.
Many of these copyright infringement actions are still in the early stages of litigation. It remains to be seen how courts will treat the various theories of copyright infringement for online works used to train AI systems.
Source Code Infringement and Open-Source Issues
Infringement suits against AI developers have not been limited to works of art. In November 2022, a putative class led by anonymous plaintiffs sued GitHub and Microsoft, alleging that the defendants used copyrighted source code stored on the code repository GitHub to create their generative AI coding products, Codex and Copilot. The plaintiffs further allege that GitHub and Microsoft failed to comply with the open-source software licenses governing that code, which often permit copying so long as the user attributes the code to its source and includes a copy of the license and a notice of copyright in any downstream use. The complaint also included claims for violations of the Digital Millennium Copyright Act (DMCA) and for breach of contract. The district court dismissed the DMCA claims with prejudice on June 24, 2024, because, after multiple attempts, the plaintiffs "failed to show their code was reproduced identically."
Digital Millennium Copyright Act
Many early copyright infringement actions alleged violations of the DMCA, which prohibits the removal or alteration of Copyright Management Information (CMI) with the knowledge that doing so "will induce, enable, facilitate or conceal an infringement" of copyright. For example, the Andersen plaintiffs alleged that Stability AI not only copied works to train the AI model but also web-scraped related CMI such as the title of the work, the name of the author and other identifying data. The Andersen court dismissed the DMCA claims, but granted leave to amend for plaintiffs to identify what particular types of CMI were allegedly removed and to allege sufficient facts to show that each defendant had the requisite knowledge that doing so would aid infringement.
Following the Andersen dismissal, some plaintiffs have abandoned their DMCA claims in favor of direct infringement claims based on unlicensed copying to train the AI. However, certain cases filed outside the district where Andersen was decided, including the Getty Images, New York Times and Concord Music Group actions, continue to assert DMCA claims.
Trademark Dilution
In addition to copyright claims, the New York Times and Getty Images complaints include causes of action for trademark infringement and trademark dilution. For example, Getty alleges that using its images to train AI has resulted in AI-generated outputs that contain a modified or distorted version of the Getty watermark, which creates confusion and falsely implies that the AI output is associated with Getty. Getty further alleges that some images generated by the defendant's AI are "of much lower quality" and range "from the bizarre to the grotesque" or offensive, which when combined with the Getty watermark results in trademark dilution.
Patent
In addition to his application for copyright registration naming AI as an author (discussed above), Dr. Thaler filed two patent applications naming AI software as the sole inventor. The U.S. Patent and Trademark Office (USPTO) rejected both applications, and the district court affirmed, finding that Congress intended to limit the definition of "inventor" to a natural person or human being.
On appeal, the Federal Circuit explained that the Patent Act defines an inventor as "the individual ... who invented or discovered the subject matter" and under Supreme Court precedent an "individual" means a human being, absent indication that Congress intended otherwise. But the Federal Circuit also left the door open for AI-assisted inventions, stating that it was not faced with "the question of whether inventions made by human beings with the assistance of AI are eligible for patent protection."
In February 2024, the USPTO issued guidance clarifying that the use of AI in the inventive process does not preclude an invention from patentability "if the natural person(s) contributed significantly to the claimed invention."
State Law AI Update
Existing State Laws
So far three states — Colorado, Utah and Tennessee — have passed laws aimed at regulating AI in the private sector. Colorado's AI Act is the most robust of the three and seeks to prevent algorithmic discrimination arising from the use of high-risk AI systems. It defines a high-risk AI system as one that becomes a substantial factor in making a "consequential decision," which is a decision having a "material legal or similarly significant effect" on the provision of educational or employment opportunities, financial or lending services, essential government services, health care, housing, insurance, or legal services. Colorado requires both developers and deployers of such systems to protect consumers from the risks of algorithmic discrimination, including by conducting bias impact assessments, adopting risk management policies, notifying consumers when AI is used for consequential decisions, and analyzing and publishing statements regarding foreseeable risks and mitigation measures.
Utah's Artificial Intelligence Policy Act is a transparency law that creates two sets of AI disclosure requirements. Members of a "regulated" occupation (one that requires a license or certification) must "prominently disclose" to a customer at the outset that he or she is interacting with generative AI. For all other commercial uses, the deployer of an AI system must "clearly and conspicuously disclose" to the customer that he or she is interacting with generative AI, but only if prompted or asked.
Tennessee's Ensuring Likeness, Voice and Image Security (ELVIS) Act aims to protect the music industry and artists by prohibiting "deepfakes," meaning AI-generated content that simulates a person's likeness or voice in fake video or audio clips. Tennessee expanded the individual property rights in one's name, photograph or likeness to encompass the use of one's voice, whether actual or simulated, and created a private right of action against anyone who, without prior authorization from the individual, publishes, distributes or otherwise makes available to the public deepfakes or software tools whose primary purpose is to create deepfakes.
A few states have passed laws that affect AI use, but on a more limited scope. For example, the Illinois Artificial Intelligence Video Interview Act regulates the use of AI on video interviews with job applicants and requires notice and consent from the interviewee after an explanation of how AI will be used on their video or to evaluate their candidacy. Other states, including California, Connecticut, Louisiana, Texas and Vermont, have enacted laws directing government agencies to study how AI is used and report back to the governor on any unintended or harmful consequences. The general goal of these study laws is to inform an eventual "AI code of ethics" or future legislation addressing the identified concerns, similar to the AI laws passed this year in Colorado, Utah and Tennessee.
Legislative Trends and Developments
At least 35 bills aimed at broadly regulating the use of AI in the private sector were introduced in state legislatures in 2024, across 18 states. These bills ran the gamut from simple transparency requirements like those found in the Utah AI law to strict antidiscrimination requirements like those found in the Colorado law. The overwhelming majority of these bills would have required AI developers to label their outputs as AI-generated, prevent bias or discrimination, and maintain some form of recordkeeping or assessments to document their anti-bias measures.
Last week, Gov. Gavin Newsom of California signed into law two bills protecting artists' rights to their digital likenesses. AB 2602 requires contracts to specify the use of AI-generated digital replicas of a performer's voice or likeness, and the performer must be professionally represented in the negotiation of the contract. AB 1836 prohibits commercial use of digital replicas of deceased performers in films, TV shows, sound recordings and more without first obtaining the consent of those performers' estates.
Two days later, Gov. Newsom signed three more AI bills related to deepfakes and generative AI transparency. SB 926 makes it illegal to create and circulate sexually explicit images of a person that appear real and cause the person "serious emotional distress." SB 981 requires social media platforms to create ways for users to report sexually explicit deepfakes of themselves, after which the platforms must temporarily block the content while an investigation takes place. And SB 942 requires widely used generative AI systems to include a provenance disclosure so that users can more easily identify AI-generated content. These watermarks may be invisible to the naked eye but should be detectable by free tools offered together with the AI systems, allowing users to determine whether content is AI-generated.
Gov. Newsom also signed two bills aimed at curbing AI use in political campaign content. AB 2839 expands the time frame during which people and entities are prohibited from knowingly sharing election material containing deceptive AI-generated or manipulated content. AB 2355 requires election advertisements to disclose whether they use AI-generated or substantially altered content.
Several other AI bills await Gov. Newsom's signature, or veto, before the Sept. 30 deadline. One groundbreaking but controversial bill would set safety and security standards for the largest AI models in order to prevent catastrophes. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require developers of "covered models" (AI models that exceed specified computing power and training cost thresholds) and certain derivative models to, among other things:
- Implement cybersecurity safeguards, including written security protocols, the ability to promptly shut down the model and measures to prevent "critical harms" that may result in mass casualties or over $500 million in damage.
- Assess whether the model is capable of enabling critical harms, record the assessments and enact mitigation measures.
- Refrain from using or training the model if it may enable a critical harm.
- Retain third-party auditors to review a developer's compliance with applicable duties.
- Provide redacted copies of the written security protocols and third-party auditors' reports to the California attorney general and the public, and report all safety incidents to the attorney general within 72 hours.
The bill would create whistleblower protections for employees of AI developers who report noncompliance, prohibit price discrimination, and establish a new public computing cluster called "CalCompute" to help others access the technology and advance the development and deployment of AI.
Finally, some local governments are taking a proactive approach. New York City was one of the first local governments to regulate AI use in the employment context. Local Law 144 makes it unlawful for New York City employers and employment agencies to use AI to determine whether candidates should be selected or advanced in the hiring or promotion process unless (i) the AI tool is subject to a bias audit by an independent auditor before use and annually thereafter; (ii) the results of the most recent audit are published on the employer's website; and (iii) notice is provided to applicants and employees who are subject to AI screening at least 10 business days before use of the AI tool. Other cities, including Seattle, Boston and San Jose, California, have issued policies regulating how municipal employees may use AI on the job.
While this report provides a snapshot of much of the dynamic AI landscape, we will continue to track important issues surrounding corporate use of AI.