On February 14, 2024, U.S. Department of Justice ("DOJ") Deputy Attorney General ("DAG") Lisa Monaco, the second in command at the DOJ, announced to an audience at Oxford University a key development in how the DOJ and its prosecutors plan to address the dangers posed by AI technology. DAG Monaco likened the use of AI in the commission of a crime to the use of a weapon, calling it a "sword" and characterizing its misuse as "dangerous." She stated, "Like a firearm, AI can also enhance the danger of a crime."
Characterizing AI as a "sharp[] blade" that criminals can wield to commit crimes ranging from election fraud to cyber warfare, DAG Monaco announced that DOJ prosecutors may now seek sentencing enhancements for crimes committed using AI technology. Federal prosecutors have long been required to consult the United States Sentencing Commission Guidelines Manual ("USSG") as the first step in recommending a sentence for a convicted defendant. The USSG assigns each federal crime a base offense level, to which certain enhancements can be added. DAG Monaco pointed to the enhanced penalties that can be sought for the use of a gun in the commission of a crime in explaining that prosecutors may now seek similar enhancements when AI is used to commit a crime.
While the USSG does not currently contain any enhancement specifically addressing the use of AI, prosecutors could potentially seek an enhancement under USSG § 2B1.1(b)(10)(C) for the use of "sophisticated means." In addition, prosecutors could recommend that a court impose a more severe sentence on an AI-using defendant who also:
- "may have misused special training or education to facilitate criminal activity," USSG § 5H1.2; or
- may have used a "special skill" that is not possessed by members of the general public, USSG § 3B1.3.
DAG Monaco also stated that if existing advisory sentencing enhancements prove inadequate to address the harms caused by AI, the DOJ will "seek reforms to those enhancements to close that gap."
In addition to the stiffer penalties prosecutors may seek for crimes committed through the misuse of AI, DAG Monaco also referenced other DOJ initiatives undertaken in accordance with President Biden's Executive Order on Safe, Secure, and Trustworthy AI, issued in October 2023.
Domestically, the DOJ is partnering with other federal agencies to create guidance and controls regarding the use of AI in the U.S., to ensure that the use of AI does not threaten the safety or legal rights of U.S. residents. On the international front, DAG Monaco highlighted the Hiroshima AI Process, an international initiative launched in May 2023 to address the opportunities and risks of AI technology. The Hiroshima AI Process issued its "Comprehensive Policy Framework," the "first international framework with guiding principles and a code of conduct designed to promot[e] 'safe, secure and trustworthy advanced AI systems.'" Additionally, DAG Monaco noted that AI technology will be a top priority of the Disruptive Technology Strike Force, which was launched in 2023 to use export control laws to ensure that international adversaries are not able to misappropriate cutting-edge American technology.
Finally, DAG Monaco announced that in January 2024, the DOJ had appointed its first "Chief AI Officer," who will lead an initiative referred to as "Justice AI" to solicit opinions from within the DOJ, from foreign counterparts, and from private experts on the responsible and ethical uses of AI and how to guard against the risks associated with the technology. The Justice AI initiative will culminate in a report to President Biden at the end of 2024.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.