In Short
The Situation: On September 4, 2024, U.S. Securities and Exchange Commission ("SEC") Chair Gary Gensler reiterated concerns about artificial intelligence-related ("AI") disclosures and the need for companies to communicate accurately about their AI use and capabilities. Chair Gensler's comments follow several recent enforcement actions brought by the SEC, as well as numerous shareholder class action lawsuits, alleging AI-related misrepresentations concerning companies' business, revenue, and operations.
The Development: Chair Gensler's comments serve as an important reminder about the increased regulatory scrutiny on disclosures pertaining to AI. They also coincide with an increase in securities fraud class actions alleging false and misleading AI-related disclosures by public companies.
Looking Ahead: Given the increased focus on AI disclosures by the SEC and private civil litigants, companies should assess the accuracy of disclosures related to AI.
In a September 4, 2024, "office hours" video, SEC Chair Gary Gensler warned public companies, investment advisers, and broker-dealers against "AI washing," or making misleading statements related to their use of AI. These comments reinforce similar remarks from Chair Gensler, which we reported in December 2023.
In his latest comments, Chair Gensler noted the uptick in AI disclosures by public companies, reiterating that "securities laws still apply." Chair Gensler observed that investors benefit from disclosures about material risk from AI that are "particular to the company" rather than "boiler plate." He also stated that public companies may be required to define what exactly they mean when referring to "AI," such as "how and where it is being used at the company" and if it is "being developed by the company or supplied by others."
As to investment advisers and broker-dealers, Chair Gensler warned that if such companies "say[] they're using AI when they're not" or "say that they're using [AI] in a particular way and [are] not do[ing] so," they may run afoul of securities law. Chair Gensler's comments were followed by an October 2024 report issued by the SEC's Division of Examinations concerning its 2025 examination priorities, which confirms the agency's ongoing focus on AI-related disclosures.
The SEC has brought various cases this year against investment advisers for their statements regarding AI. In March, the SEC settled charges with Delphia (USA) Inc. and Global Predictions Inc. In both cases, the SEC found that the investment advisers had neither developed nor implemented the AI capabilities they advertised. As to Delphia, the SEC alleged that the company's statements about using AI to analyze client spending and social media data, which it then incorporated into its investment algorithms, were false and misleading because Delphia did not use its clients' data or the AI as described. As to Global Predictions, the SEC alleged that the company's statement that its technology used expert AI-driven forecasts was false, as was the company's claim that it was the "first regulated AI financial advisor."
The SEC is currently accepting comments on proposed rules under the Securities Exchange Act of 1934 ("Exchange Act") and the Investment Advisers Act of 1940 that, if adopted, are intended to address conflicts of interest associated with broker-dealers' or investment advisers' interactions with investors through these firms' use of predictive data analytics technology.
In addition, in February 2024, the SEC settled charges with Rockwell Capital Management LLC and its founder for failing to follow through on promises to launch a hedge fund and to use AI systems to predict price behavior. Further, in a parallel action with the Department of Justice, the SEC filed a complaint against the founder and CEO of a recruiting startup that purported to deploy AI to match firms with diverse job candidates, alleging misstatements about the AI product's functionality and capabilities.
AI washing allegations have also increasingly been the focus of securities fraud class action litigation brought pursuant to Section 10(b) of the Exchange Act and Rule 10b-5. Such claims often focus on statements that allegedly exaggerated the defendant-company's AI technical capabilities. In one such case, the plaintiff alleged that Oddity Tech Ltd. "did not have viable AI technology" and that its statements about developing and validating AI technologies were false or misleading. Similarly, in a case against Upstart Holdings, Inc., plaintiffs alleged that statements regarding the safety and stability of the company's AI-powered lending platform were misleading after the platform was unable to withstand a higher interest rate environment. And in a case against Zillow Group, Inc., the court held that the plaintiff plausibly pled that the company failed to disclose challenges with its AI home pricing and that Zillow's references to the "sharpening" of AI pricing models were misleading.
Three Key Takeaways
- Both the SEC and private litigants are increasingly focused on companies' disclosures regarding AI technologies and their significance.
- Companies should accurately represent and disclose their AI use and capabilities and continue to evaluate whether updates to AI disclosures are needed for accuracy.
- Companies should not rely on boilerplate language about AI risks; rather, such disclosures should be specific to the risks they face, including risks related to the engineering and development of AI technology.