DOJ Is Asking Questions About How Companies Use AI. Do You Know The Answers?

Companies' shoddy use of artificial intelligence can have serious legal, financial and reputational consequences, and the DOJ has repeatedly made it clear that its investigators are taking a close look at how corporate America wields advanced tools like AI. Robert K. Hur, a former special counsel, Leah B. Grossi and Michael Galdo, all attorneys with King & Spalding, consider recent enforcement actions in light of the DOJ's September update to its compliance program guidance.

In September, the DOJ announced it had updated its guidance to prosecutors on how to evaluate the effectiveness of a corporation's compliance program. The guidance, known as the "Evaluation of Corporate Compliance Programs" (ECCP), provides prosecutors with factors to consider and questions to ask when determining the adequacy and effectiveness of a corporation's compliance program at the time of the offense and at the time of a charging decision or resolution with the department.

According to accompanying remarks given by the head of the department's Criminal Division, Principal Deputy Assistant Attorney General Nicole Argentieri, the additions to the ECCP were in three main areas: (1) emerging technologies, including artificial intelligence; (2) whistleblowers; and (3) access to data, including third-party vendor data. The specific focus on companies' use of artificial intelligence (AI) was particularly noteworthy.

Nine days after announcing the ECCP updates, Argentieri spoke again about AI, noting that the "promises and perils" of AI are "top of mind" for the Criminal Division and for the department more broadly. Referencing the need for robust detection of AI vulnerabilities, discriminatory impacts and bias, Argentieri announced that the Criminal Division would update its 2017 vulnerability disclosure framework to facilitate reporting consistent with the Computer Fraud and Abuse Act and intellectual property laws, and she urged companies to implement vulnerability disclosure programs to detect these issues within their AI systems.

This was just the most recent indication that the department is focused on the challenges and risks posed by AI. In February 2024, Attorney General Merrick Garland announced the designation of the department's first chief AI officer. That same month, Deputy Attorney General Lisa Monaco gave remarks announcing the creation of "Justice AI," a convening of stakeholders from civil society, academia, science and industry to better understand and prepare for the risks of AI. As part of Justice AI, the department's Criminal Division convened corporate compliance executives to help inform the department on how to update the ECCP to address the risks and uses of AI by companies and their compliance departments.

Last month's additions to the ECCP instruct prosecutors to ask a series of questions about AI (including generative AI) and emerging technologies to determine whether a corporation's compliance program is well-designed — a critical factor in deciding how to resolve criminal investigations of corporate conduct. The additions instruct prosecutors to consider whether a company has conducted a risk assessment regarding the use of new technologies, including AI, and whether the company has taken appropriate steps to mitigate the risk associated with the use of that new technology.

Questions for prosecutors and companies include:

  • Does the company have a process for identifying and managing emerging internal and external risks that could potentially impact the company's ability to comply with the law, including risks related to the use of new technologies?
  • How does the company assess the potential impact of new technologies, such as AI, on its ability to comply with criminal laws?
  • Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company's approach to governance regarding the use of new technologies like AI in its commercial business and in its compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability and use in compliance with applicable law and the company's code of conduct?
  • Do controls exist to ensure that the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over use of AI monitored and enforced?
  • How does the company train its employees on the use of emerging technologies like AI?
  • Is there a process for updating policies and procedures to address emerging risks, including those associated with the use of new technologies?
  • What efforts has the company made to monitor and implement policies and procedures that reflect and deal with the spectrum of risks it faces, including changes to the legal and regulatory landscape and the use of new technologies?

The misuse of AI — like the creation of false approvals and documents — can have serious legal, financial and reputational consequences for companies. Department leadership has repeatedly warned that where misconduct is made significantly more dangerous by the misuse of AI, prosecutors will seek stiffer sentences. Monaco has also said that if the department determines that existing sentencing enhancements do not adequately address the harms caused by the misuse of AI, the DOJ "will seek reforms to those enhancements to close that gap."

The department's enforcement efforts relating to AI have already resulted in criminal actions. For example, last month, the U.S. Attorney's Office for the Southern District of New York secured a guilty plea from a former CEO and board chairman of a publicly traded digital advertising technology company for committing securities fraud by making material misrepresentations about the efficacy of the company's proprietary AI fraud detection tool. According to the DOJ's charging document, the securities fraud scheme included the creation of fake documents in order to mislead the independent certified public accountants who were engaged to audit the company's financial statements. Sentencing in that case is set for December 2024.

Civil and state enforcement authorities are also focused on AI. For example, last month, the Texas attorney general announced an assurance of voluntary compliance settlement with an AI healthcare technology company. Also in September, the California attorney general sent a letter to social media and AI companies urging better identification and reporting of the use of AI to create deceptive content related to elections. In March 2024, the SEC announced charges against two investment advisers for "AI washing" by making false and misleading statements about their purported use of AI in their investment processes, and just this month, the commission charged an investment firm based on similar allegations.

Taken together, these recent state and federal actions underline the need for care when adopting and implementing AI, including performing due diligence on AI system providers.

Originally Published by Corporate Compliance Insights

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
