U.S. Commerce Department Reverses Course on Biden AI
Diffusion Rule
Jason Wilcox
On May 13, the U.S. Commerce Department officially declared its
intent to rescind and replace the AI Diffusion Rule issued during
the final weeks of the Biden administration. Slated to go into
effect on May 15, the rule had drawn considerable criticism from
both domestic and international stakeholders.
Commerce argued that the AI Diffusion Rule would have hampered American innovation by creating unnecessary regulatory hurdles for companies. It also expressed concern that the rule's approach would have negatively impacted U.S. diplomatic relations by effectively categorizing dozens of countries, including key allies such as EU member states, Saudi Arabia, and the UAE, as "second tier" for AI technology access. These nations had voiced their own concerns about the rule's proposed limitations on AI chips.
In lieu of the Biden rule, Commerce is promising a new, "bold, inclusive strategy" focused on fostering collaboration with trusted international partners while safeguarding AI technology from U.S. adversaries.
While the specifics of this new strategy and the timeline for
its implementation are unclear, Commerce has already taken interim
steps to bolster U.S. export controls on semiconductors worldwide.
These include providing guidance on the risks associated with using
certain Chinese-made AI chips (notably Huawei Ascend) and issuing
an advisory on the potential misuse of U.S. AI chips for training
and inference of Chinese AI models. Commerce also issued specific
recommendations for securing supply chains against diversion
tactics.
Commerce emphasized that these actions are part of a broader effort
to ensure that the U.S. remains a leader in AI innovation.
AI Training & Copyright—Highlights From the New
USCO Report
Joe Cahill
The U.S. Copyright Office recently released its third major report
on artificial intelligence, tackling the critical issue of using
copyrighted works to train AI models. The report confirms that
copyright infringement can occur at various stages, from acquiring
training data to the AI's outputs, and notably clarifies that
AI model weights themselves may constitute infringing copies if
they "memorize" protected material. While the Office
suggests that training AI on diverse datasets "will often be
transformative," it stresses that this is not a blanket
approval for fair use; such determinations will hinge on the
model's specific purpose, deployment, and whether its outputs
compete with the original works.
The report further refines the fair use debate by rejecting arguments that AI training is inherently "non-expressive" or analogous to human learning. It also highlights concerns about "market dilution," where the sheer volume of AI-generated content could devalue original creations. On licensing, the Copyright Office currently advises allowing the market to evolve without immediate government intervention, supporting voluntary agreements. Ultimately, the guidance underscores that the legality of using copyrighted material for AI training remains a complex, case-by-case assessment, signaling ongoing challenges and developments for both AI innovators and copyright holders.
A link to the report can be found here.
Another AI Avatar in Court – Now as a Generated
Victim Statement
Ben Bafumi*
A recent criminal case in Arizona involving a fatal road-rage
incident, State v. Horcasitas, took a novel turn when an avatar of
the deceased victim read his own impact statement in a posthumous
AI-generated video. The video showed the victim, Christopher
Pelkey, opening with "Chris" disclosing that the video was
AI-generated and then expressing his strong beliefs in forgiveness
and compassion for the defendant. While the script was not written
by Pelkey himself, it was accompanied by a real video of him from
2021 and nearly fifty additional statements and letters from his
family and friends echoing the avatar's message and affirming his
beliefs.
Victim impact statements are common in criminal cases and, in Arizona, can be shown in any digital format if they are made in good faith. Although in this instance the party was transparent about the AI-generated nature of the video and provided a wealth of content supporting the deceased victim's views as expressed by the avatar, the victim's sister, who created the video, warned of an AI-generated victim statement's potential for abuse, admitting that she could have been "very selfish with [the video]."
Importantly, the AI-generated video's admissibility and impact in this proceeding were limited, since it was shown to a judge (not a jury) at a sentencing hearing after the defendant's prior conviction. However, the judge openly praised the video, and is quoted as saying he "loved the AI," felt "[the victim's forgiveness] was genuine," and appreciated that "[the victim's family] allowed [the victim] to speak from his heart." On appeal, a court may consider whether the weight the judge gave this video was improper.
This news comes on the back of a recent, similar case in New York, where a judge disapproved of a pro se plaintiff's use of an AI-generated avatar to present his argument in court, as discussed in our prior AI Legal Watch newsletter. Courts across the country are grappling with the use of generative AI in proceedings, especially when parties present a person's likeness and opinions. No clear trend has yet emerged, but a judge's views on generative AI, the extent of AI's contribution, and the stage of the proceedings will likely be determinative factors in whether such videos can be used successfully.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.