Fort Lauderdale, Fla. (September 30, 2024) - On September 23, 2024, in a speech to the Society of Corporate Compliance and Ethics, Principal Deputy Assistant Attorney General Nicole M. Argentieri emphasized that a corporation's compliance program must consider risks associated with the use of artificial intelligence (AI), both in commercial operations and the compliance program itself.
Argentieri's remarks, part of a discussion of the U.S. Department of Justice's revised Evaluation of Corporate Compliance Programs (ECCP), place corporations on notice that an adequate and effective corporate compliance program must include an assessment of AI use and risk. Specifically, AI risk management must be fully integrated into broader compliance frameworks, not treated as a standalone issue.
Argentieri's comments are part of a steady effort by the DOJ this year to telegraph its focus on risks associated with AI. As such, corporations and other entities that continue to ignore such DOJ guidance do so at their own peril.
DOJ's Updated Compliance Guidance on AI
The revised ECCP makes clear that companies must now assess how AI technologies could facilitate criminal activities, such as generating false documentation or approvals, and whether adequate controls are in place to prevent these risks. DOJ prosecutors will closely examine how companies are managing AI risks, focusing on three key areas:
- Risk Assessments: Companies must evaluate how AI tools could be exploited for fraud or misconduct and implement robust controls to mitigate these risks.
- Data Accuracy and Reliability: The DOJ expects compliance programs to validate the accuracy of AI-generated data to prevent fraud or errors.
- Ongoing Monitoring: Companies should continuously test and monitor AI systems to ensure compliance with internal standards and codes of conduct.
Disparities in Data Usage
In tandem with assessing a corporation's AI compliance policy, Argentieri emphasized that the DOJ will scrutinize whether compliance teams have the same level of access to data and analytics tools as other business functions. Gaps between data-driven business operations and under-resourced compliance functions will raise red flags. As such, corporations should ensure that stakeholders such as Compliance, IT, and Cyber break out of their respective silos and collaborate effectively, including through the shared use of data analytics tools.
Whistleblower Protections and AI
In addition to AI compliance, Argentieri highlighted the DOJ's expanded focus on whistleblower protections. The updated ECCP includes specific questions designed to evaluate whether companies are encouraging employees to report misconduct, especially in relation to AI-related risks. The DOJ is positioning whistleblowers as a crucial line of defense, encouraging reports of AI misuse or compliance failures.
Companies are now being evaluated on their commitment to fostering a "speak-up" culture. Those that fail to protect whistleblowers from retaliation may face significant penalties. The whistleblower program, part of the Corporate Whistleblower Awards Pilot Program (CWA), incentivizes employees to report misconduct and rewards companies that take proactive steps in response.
Key Takeaway: AI Audits Are Critical
The DOJ's updated guidance sends a clear signal: companies must conduct comprehensive AI risk assessments and audits within their compliance programs. AI tools should be rigorously audited, with particular attention to risks such as data reliability, fraud, and false documentation. Swift action is needed as AI becomes more deeply integrated into corporate operations and the DOJ sharpens its focus on responsible AI management.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.