Introduction
The rapid evolution of generative artificial intelligence ("GenAI") — capable of autonomously producing text, images, and video — is nothing short of astounding. Today, GenAI systems can draft legal briefs, compose music, design architectural plans, generate photorealistic artwork, and even write functional computer code — tasks that previously required years of specialized training and expertise. And while experts tout GenAI's potential to spur massive productivity growth, this technology is already being exploited for malicious purposes.
In short, GenAI works by learning the patterns and structures of its training data to generate new outputs that mimic and build on those patterns. These models are trained on enormous datasets and contain potentially trillions of parameters — variables whose weights are adjusted during training. As a result, GenAI models are incredibly complex and often described as "black boxes," with behaviors that are, at least at present, largely unexplainable and, consequently, unpredictable.1
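To make the notion of parameters concrete, the following is a minimal, purely illustrative Python sketch (not any vendor's actual system): a toy model with just two parameters, fit by the same basic loop of nudging weights to reduce error that, at vastly greater scale, trains GenAI models.

```python
import random

random.seed(0)

# Toy training data: inputs x and targets y following y = 3x + 2, plus noise.
data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(50)]]

w, b = 0.0, 0.0   # two "parameters"; frontier models have billions or trillions
lr = 0.05         # learning rate: how far each weight moves per example

for _ in range(200):
    for x, y in data:
        pred = w * x + b      # model output for this example
        err = pred - y        # how wrong the output was
        w -= lr * err * x     # nudge each weight to shrink the error
        b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")  # converges near w=3.00, b=2.00
```

With two parameters, the learned behavior is fully explainable; with trillions, the same procedure yields the "black box" quality described above.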
Users may experience this unpredictability in at least two ways. At the output level, models may — and often do — respond differently to the same inputs, such that a particular output is not necessarily reproducible. At the system level, models may engage in behaviors or exhibit capabilities that are unexpected to the models' creators or users. And both types of unpredictability may be exacerbated by models interacting with other dynamic systems (e.g., other models).
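A minimal sketch helps illustrate the output-level point. GenAI systems typically sample each next word (token) from a probability distribution rather than always choosing the most likely candidate, so identical prompts can yield different outputs; the token names and probabilities below are invented for illustration.

```python
import random

# Hypothetical next-token probabilities for a single prompt; a real model
# computes these from the prompt and its learned weights.
tokens = ["contract", "agreement", "letter", "poem"]
probs = [0.40, 0.35, 0.15, 0.10]

for run in range(1, 4):
    # Identical input on every run, yet the sampled token can differ,
    # so the full generated output is not necessarily reproducible.
    choice = random.choices(tokens, weights=probs, k=1)[0]
    print(f"run {run}: 'Please draft a ...' -> '{choice}'")
```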
This high degree of complexity and unpredictability has, among other factors, set GenAI apart from prior technological inventions, blurring the line between tool and seemingly independent actor. Traditionally, criminal law targets a technology's operator as the responsible party. But GenAI challenges this framework: the technology itself may exhibit a form of pseudo-agency, generating outputs that diverge from its creator's inputs and even from its creator's intent. And the rise of agentic AI — systems capable of autonomously planning and executing multi‑step actions — intensifies this challenge by pushing these technologies even closer to functioning as independent actors, thereby heightening the urgency of determining how criminal liability should attach when harms result.2 These issues are novel, and it remains uncertain how prosecutors and courts will respond. What we can do, however, is begin to articulate a practical framework for assessing criminal liability arising from the use — or misuse — of GenAI.
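For readers unfamiliar with the term, the basic shape of an agentic system can be sketched in a few lines; the function names below are invented stand-ins for real model calls and tool use. The legally salient feature is the loop itself: the system selects and executes successive steps toward a goal without a human approving each one.

```python
def plan_next_action(goal, history):
    # Placeholder planner: a real system would ask the model what to do next.
    steps = ["search flights", "compare prices", "book ticket"]
    return steps[len(history)] if len(history) < len(steps) else None

def execute(action):
    return f"completed: {action}"  # stand-in for calling an external tool

def run_agent(goal):
    history = []
    while (action := plan_next_action(goal, history)) is not None:
        history.append(execute(action))  # act, observe, loop; no human review
    return history

print(run_agent("book a flight"))
```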
This article aims to provide practitioners with a clear framework for analyzing criminal exposure related to GenAI. It begins by reviewing foundational criminal law doctrines, including primary and secondary liability. Next, it examines how these doctrines have been applied to evolving technologies and highlights relevant guidance issued by the U.S. Department of Justice (DOJ). Finally, it considers how prosecutors and courts may adapt these principles to address criminal liability in the GenAI context going forward.
I. Criminal Law 101: Primary and Secondary Liability
At its core, criminal law deals with conduct that society deems morally blameworthy and deserving of punishment. And because the punishment can be severe — ranging from fines and loss of liberty to, in rare cases, loss of life — society typically reserves criminal punishment for those who have deliberately engaged in prohibited conduct, rather than for those whose actions are merely inadvertent or accidental (with certain exceptions).
As such, criminal liability generally requires proof of two essential elements: a wrongful act ("actus reus") and a culpable mental state ("mens rea"). The actus reus refers to the physical act or omission that constitutes the offense, while mens rea addresses the defendant's state of mind — such as intent, knowledge, or recklessness — at the time of the act. In most cases, both elements must be present for conduct to be considered criminal.3
In applying these principles, criminal law distinguishes between primary and secondary liability. Primary liability attaches to individuals who personally commit the criminal act — for example, an individual who commits an assault. Secondary liability, by contrast, extends responsibility to those who assist, encourage, or conspire with others to commit a crime, such as an accomplice who helps to plan a burglary or a co-conspirator who provides resources to execute a fraud scheme.
For primary liability — where the accused committed the criminal act or omission — the mens rea requirement is generally that the defendant acted either "knowingly" or "willfully." Most ordinary criminal statutes require proof that the defendant acted knowingly, meaning that the defendant was aware of the facts that made his or her conduct criminal, even if the defendant did not know that his or her actions were unlawful. In certain regulatory contexts, such as copyright or export controls, the law may require proof that the accused acted willfully: that is, that the defendant not only knew the facts that made the conduct unlawful but also acted with the intent to do something the law forbids, knowing that the act was unlawful. Prosecutors can establish a defendant's mental state through direct or circumstantial evidence, and in some cases, through willful blindness — where a defendant deliberately avoids learning facts that would make his or her conduct unlawful.
For secondary liability — where the accused is generally a step removed from direct commission of the criminal act — the mens rea requirement is typically higher, requiring specific intent. What that means differs somewhat across different types of secondary liability. For aiding and abetting liability, the law typically requires proof that the defendant intentionally took an affirmative act to further the unlawful conduct and did so with the purpose of seeing the crime succeed. Mere knowledge or passive involvement is not enough; the accused must consciously and culpably participate in the wrongdoing. For conspiracy, prosecutors must show that the defendant knowingly joined an agreement to commit an unlawful act, with the specific intent that the underlying crime be carried out. In both cases, courts demand a heightened showing of intent, ensuring that secondary liability does not arise from inadvertent or merely negligent behavior, but only from deliberate participation in criminal activity.
A fundamental challenge in determining criminal liability when it comes to GenAI is that it blurs the line between tool and independent actor. Traditional criminal law is built on the premise that there is a human agent who can be held responsible for wrongful acts. But GenAI's complexity and unpredictability mean that harm can occur without a clear, blameworthy individual in the traditional sense — even when all of the facts are known, whereas incomplete facts are traditionally the most frequent obstacle to allocating blame. In the context of primary liability, where the law typically requires proof that a defendant acted knowingly or willfully, GenAI's unpredictability arguably shields model creators and developers from liability for harms caused by their creations. Similarly, in the context of secondary liability, where the law generally requires proof of specific intent to further or commit a crime, GenAI's versatility and widespread legitimate uses make it difficult to show that developers or providers intended their tools to be used unlawfully.
II. How Criminal Law May Adapt to the Challenges of GenAI
The difficulty of assigning liability when it comes to GenAI creates a potential accountability gap: when an AI system generates harmful outputs autonomously, there may be real victims and real damage, but no obvious person to punish.4 This dilemma is not entirely new, however, and past cases are instructive as to how prosecutors and the courts may adapt traditional criminal law concepts to GenAI harms. In this section, we review prior analogous cases, as well as key DOJ guidance, in order to examine how criminal law might evolve to address the unique risks posed by GenAI, ideally by clarifying developers' and users' obligations and potential exposure in a way that deters harmful behavior while still enabling the technology's considerable opportunities.
A. Primary Liability: When is Harm Sufficiently Predictable?
A recurring challenge in criminal law is determining when creators or manufacturers of technology should be held criminally responsible for the harms their products cause. The central question is often one of predictability: did the decision-makers have reason to foresee the risk, such that they should be accountable for consciously failing to address it? Prosecutors, courts, and juries frequently look to whether the harm was a natural and probable consequence of the design or deployment, and whether the defendant's conduct rose to the level of recklessness or willful blindness. In these cases, criminal liability may attach even absent intent to harm, relying instead on lessened mens rea standards such as recklessness or criminal negligence.
This principle was at the heart of the famous State v. Ford Motor Co. case, in which Indiana prosecutors charged Ford with reckless homicide following a Pinto fuel‑tank fire that killed three passengers.5 The theory was that Ford's executives knew about the design flaw and the substantial risk it posed but consciously disregarded those dangers in favor of cost savings. Ford was ultimately acquitted by a jury that did not find this knowledge and inaction to rise to the level of criminal recklessness, indicating a high bar for primary liability in these circumstances. Nonetheless, the case established that criminal liability could be pursued where there is evidence of awareness of grave risks and a substantial deviation from accepted safety standards. The case remains a touchstone for how courts analyze corporate recklessness in the face of predictable but uncertain harm.
More recently, the limits of criminal liability for unpredictable technological harm were tested in the aftermath of a fatal crash involving an autonomous vehicle in Tempe, Arizona.6 In that case, a self-driving car struck and killed a pedestrian while the car was operating in autonomous mode. After investigation, prosecutors declined to bring criminal charges against the car company or its engineers, citing "no basis for criminal liability."7 Instead, charges were brought against the backup driver, who was found to have been distracted by a streaming device at the time of the crash.8 The decision not to prosecute the car company or its engineers illustrates how, absent clear foreseeability or disregard of known risks, criminal liability for creators of autonomous technology remains difficult to establish, regardless of the severity of the relevant harm.
By contrast, a California case, People v. Knoller, demonstrates how recklessness can be sufficient for criminal liability when the facts support it.9 In that case, the defendant, Marjorie Knoller, and her husband owned two Presa Canario dogs — large, powerful animals with a history of aggression and violence — that ultimately killed a neighbor in the hallway of their apartment building. Prior to the fatal attack, Knoller was repeatedly warned about the dogs' dangerous propensities: she was told the dogs had killed livestock, attacked other animals, and posed a liability in any household. Neighbors and professionals advised her to muzzle and train the dogs, but these warnings were disregarded. Over the course of their ownership, there were approximately 30 incidents of the dogs being out of control or threatening humans and other dogs, including biting a neighbor, lunging at a pregnant woman, and attacking other pets. Knoller herself admitted she lacked the strength to control the dogs and was aware of their aggressive behavior, yet she continued to walk them unmuzzled through her apartment building, exposing residents to substantial risk and ultimately resulting in a neighbor's death.
Knoller was charged with second-degree murder, involuntary manslaughter, and possession of a mischievous animal that caused death, and was convicted on all counts by a jury. The trial court subsequently granted the defendant a new trial on the second-degree murder charge, taking the position that, "to be guilty of that crime, Knoller must have known that her conduct involved a high probability of resulting in the death of another." On appeal, the Court of Appeal reversed the trial court's order granting a new trial, holding that "implied malice can be based simply on a defendant's conscious disregard of the risk of serious bodily injury to another." The California Supreme Court, in turn, granted Knoller's petition for further review and clarified that "implied malice requires a defendant's awareness of engaging in conduct that endangers the life of another—no more, and no less." Although this case ultimately turned on the statutory interpretation of California's penal code, it serves as a cautionary tale: when decision makers are aware of specific dangers and fail to take reasonable steps to prevent harm, criminal liability may attach — even if there was no intent to harm.
Taken together, these cases illustrate that criminal liability for creators of technology may hinge on the predictability of specific harm and the defendant's response to warning signs. Where there is evidence of knowledge, conscious disregard, or willful blindness to specific risks, prosecutors and courts may be willing to apply lessened mens rea standards and impose criminal sanctions without demonstrable intent to harm. But where harm is unforeseeable or the connection to the creator is too attenuated, attributing criminal liability remains a high bar.
B. Secondary Liability: Is Knowledge and Indifference Enough?
When GenAI is used by third parties (meaning not the developers or deployers but instead some user of the technology) to commit crimes, prosecutors will face significant hurdles in establishing that developers or providers possessed the specific intent required for secondary liability. The law demands more than mere knowledge of potential misuse; it requires proof that the accused intended to assist or further the criminal conduct. Nevertheless, recent prosecutions illustrate how the DOJ has sought to satisfy these requirements — or, in some cases, adjust the boundaries of traditional doctrines — to address technology-enabled offenses.
In 2018, the DOJ seized Backpage.com, which it described as "the Internet's leading forum for prostitution ads," and indicted seven individuals involved in Backpage's creation and management.10 These individuals were charged with, among other things, facilitating prostitution under the Travel Act, 18 U.S.C. § 1952(a)(3), which requires proving defendants acted with specific intent — that is, the intent to "promote, manage, establish, carry on, or facilitate the promotion, management, establishment, or carrying on, of any unlawful activity." As laid out in the indictment, the government tried to establish the requisite intent by proving defendants had knowledge of criminal conduct and purposefully facilitated it, including by:
- Systematically editing and publishing ads for prostitution, including removing explicit sex-for-money language or images, then approving and posting the ads to the site.
- Coaching advertisers and users on how to evade Backpage's moderation efforts, including advising them on how to rewrite ads to conform to Backpage's publication standards and continue advertising illegal services.
- Implementing policies to limit reporting and referrals of child exploitation, including artificially capping the number of alerts sent to the National Center for Missing and Exploited Children (NCMEC) and declining to adopt NCMEC's recommended safeguards.
- Designing and expanding Backpage's business model to profit from prostitution ads, including actively pursuing the market after key competitor Craigslist shut down its adult section, and using offshore contractors to create and promote prostitution ads internationally.
Ultimately, two Backpage officers (Carl Ferrer, the CEO and co-founder, and Dan Hyer, the sales and marketing director) pleaded guilty and testified against the other defendants; in turn, three other officers (co-founder Michael Lacey, executive vice president Scott Spear, and chief financial officer John Brunst) were tried and convicted.11 Although the Travel Act convictions did not technically involve secondary liability, the specific intent requirement under the Travel Act is similar to that required for proving aiding and abetting. In both contexts, prosecutors must prove that the defendant purposefully sought to facilitate or promote the underlying unlawful activity, rather than merely acting with general awareness or passive involvement.12
And while the government was able to prove specific intent based on knowledge plus purposeful participation in the Backpage matter — where the evidence was egregious — that is not always the case. For example, in 2014, the DOJ indicted FedEx on charges related to its shipment of pharmaceuticals on behalf of online pharmacies, including distribution of controlled substances (21 U.S.C. § 841) and conspiracy to distribute controlled substances (21 U.S.C. § 846).13 According to the indictment, FedEx allegedly shipped controlled substances for illegal internet pharmacy organizations (notably the Chhabra-Smoley Organization and Superior Drugs) that distributed drugs based solely on online questionnaires, without valid prescriptions or legitimate medical purpose. The government alleged that FedEx was notified multiple times by the DEA, FDA, and Congress that these practices violated federal law and that FedEx's senior management was aware that prescriptions issued without a physician's examination were invalid. Despite these warnings, FedEx allegedly maintained business relationships with known illegal pharmacies and continued shipping even after learning of indictments, arrests, and convictions of pharmacy owners and operators.
Unlike its competitor UPS Inc., which settled similar allegations with the DOJ for $40 million, FedEx pleaded not guilty and proceeded to a bench trial in front of U.S. District Judge Charles Breyer in San Francisco.14 Then, in a surprising development, the DOJ voluntarily moved to dismiss its charges against FedEx four days into what was scheduled to be a multi-week trial. Although the DOJ did not explain its request for the dismissal, the decision came after Judge Breyer had asked prosecutors to present testimony from two DEA agents with whom FedEx had communicated regularly but who had never told the company to stop shipping for any online pharmacies.15 In the lead-up to trial, Judge Breyer reportedly had also expressed skepticism of the DOJ's case, and particularly whether prosecutors could prove that FedEx knew of the illegal drugs and intended them to be distributed illegally.16
Taken together, these cases clarify the overall standard for secondary criminal liability: knowledge alone — even when combined with indifference or assistance — is generally insufficient. Rather, prosecutors and ultimately courts require evidence of purposeful participation or assistance, meaning conduct that is intended to further or facilitate the criminal activity, rather than merely tolerating or failing to prevent it. This conclusion is also supported by the Supreme Court's recent decision in Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023), which held that, in the civil context of the Antiterrorism Act, establishing civil liability for aiding and abetting requires more than providing generic services that are later misused by bad actors; it instead demands "conscious, voluntary, and culpable participation in another's wrongdoing."17 Mere knowledge that a platform is being used for unlawful purposes, or failure to take more aggressive action to prevent misuse, is not enough. Instead, there must be evidence that the defendant affirmatively and purposefully assisted the specific wrongful act.
To determine whether participation or assistance was purposeful, "finders of fact" — the judge or jury responsible for determining what actually happened — look for indirect evidence that reveals the defendant's intent. From examining the Backpage and FedEx cases, there appear to be at least three salient aspects of the conduct in question that contributed to the different outcomes in the matters:
- The granularity of the assistance.
- Whether the defendant's assistance primarily facilitated criminal conduct, as opposed to other benign use cases.
- The degree to which the defendant shaped its product and services around the alleged criminal conduct.
Granularity of Assistance. As a defendant provides increasingly detailed and targeted support for unlawful conduct, a finder of fact (here meaning initially a prosecutor considering charges and then a judge or jury scrutinizing those charges) is increasingly likely to find that the defendant intended to assist the crime. In the Backpage matter, for example, company employees did not simply allow illegal ads to be posted — they actively edited and sanitized ads to help users evade detection, maintained lists of code words for prostitution, and coached advertisers on how to rewrite ads to avoid content moderation. These granular interventions demonstrated a level of engagement and intent that far exceeded mere tolerance or indifference. By contrast, FedEx's conduct was limited to continuing to ship packages for online pharmacies, even after receiving warnings; FedEx did not tailor its services to help pharmacies evade law enforcement or directly assist in the logistics of illegal drug distribution.
Criminal vs. Benign Beneficiaries. Businesses are generally financially motivated to help people in ways profitable to those businesses. As such, helping criminals is not, on its own, generally sufficient to show that a defendant intended to further a criminal act. And where a business treats criminals in the same way in which it treats non-criminals (i.e., it provides them with the same services), it will be difficult for a finder of fact to conclude that the business intended to assist the commission of a crime. This distinction appears to have been central in the FedEx case, where the company provided shipping services to online pharmacies in the same manner as it did for its other customers, without tailoring its offerings or providing special assistance to those engaged in illegal activity. In contrast, Backpage's assistance was specifically directed at those engaged in unlawful conduct: the platform's employees actively edited and sanitized ads to help users evade law enforcement and, moreover, provided guidance that was useful only to those seeking to advertise illegal services. This targeted support for criminal beneficiaries, rather than equal treatment of all users, makes it easier for a finder of fact to conclude that the assistance was intended to benefit criminals and their criminal behavior and, therefore, that criminal culpability should apply.
Business Strategy. Finders of fact are also likely to examine whether a company's strategic decisions, product choices, and market positioning reflect an intent to support criminal activity. When a business deliberately designs features or tailors its offerings to attract or retain users engaged in unlawful conduct, intent becomes easier to infer. In the Backpage case, internal communications and product decisions were aimed at maximizing profits from prostitution ads, and the platform's features were specifically adapted to facilitate illegal transactions. By contrast, FedEx's business strategy centered on legitimate package delivery, and its shipments for online pharmacies represented only a small fraction of its overall operations and market share. There was no evidence that FedEx's product design or marketing was intended specifically to support illegal drug distribution, revealing a business strategy less suitable for attaching criminal culpability to the company's actions (and inactions).
Applying these lessons to GenAI makes clear that establishing secondary liability in this area will turn on far more than identifying a provider's mere awareness of potential misuse. Just as in Backpage and FedEx, prosecutors examining GenAI‑related misconduct will look for concrete evidence that a company deploying or offering a GenAI system purposefully shaped its tools, policies, or business practices in ways that meaningfully assisted criminal activity — whether by configuring systems to make prohibited content easier to generate, providing guidance that helps users evade safeguards, or cultivating user bases disproportionately engaged in unlawful conduct. By contrast, when a GenAI developer or service provider offers a general‑purpose system used predominantly for legitimate ends, and its conduct reflects ordinary commercial operations rather than purposeful facilitation, the demanding specific‑intent standard for secondary liability will remain difficult to satisfy. In short, traditional doctrines point toward a narrow path for secondary liability in the GenAI setting — one that focuses not on knowledge or indifference, but on affirmative choices by GenAI providers that materially help to bring about criminal misuse.
C. DOJ Guidance
Consistent with our analysis that primary and secondary criminal liability set high bars for GenAI developers and operators, recent guidance from the DOJ — specifically the remarks delivered by Acting Assistant Attorney General for the Criminal Division Matthew R. Galeotti at the American Innovation Project Summit in August 2025 — offers additional perspective on how prosecutors may approach criminal liability in the context of emerging technologies.18 In that speech, Galeotti addressed DOJ's approach to criminal enforcement in the digital asset space — a context with striking parallels to the challenges posed by GenAI. Galeotti reassured listeners that, "[g]enerally, developers of neutral tools, with no criminal intent, should not be held responsible for someone else's misuse of those tools."19 Rather, Galeotti emphasized that DOJ's charging decisions hinge on criminal intent, stating unequivocally that "merely writing code, without ill-intent, is not a crime." He further reaffirmed that secondary liability, such as aiding and abetting or conspiracy, requires specific intent and, thus, "if a developer merely contributes code to an open-source project, without the specific intent to assist criminal conduct, aid or abet a crime, or join a criminal conspiracy, he or she is not criminally liable."
Although Galeotti's remarks were directed at the digital asset ecosystem, he clarified that DOJ's approach did not amount to "a different level of scrutiny" for digital assets only. Rather, the "law is technology neutral," Galeotti said, and "[c]riminals will be prosecuted, whether their tools are old or new." Given this generalized guidance — and the similarities between GenAI systems and decentralized finance platforms,20 which are both multipurpose technologies that can be used for legitimate as well as illegitimate ends — DOJ currently appears less likely to pursue criminal charges against GenAI developers or providers absent clear evidence of criminal intent.
III. Framework for GenAI Criminal Liability
In the context of GenAI, criminal liability may potentially attach to several distinct categories of actors. The first and most straightforward is the user — the individual who interacts with the GenAI system, such as the person holding a phone and entering a prompt into a chatbot-based platform. If a user employs GenAI to assist in committing a crime, such as soliciting instructions for illegal activity or generating unlawful content, the legal analysis of that user's liability is essentially unchanged by the involvement of AI. The user's criminal responsibility turns on whether the user committed a criminal act with the requisite intent; GenAI's assistance does not alter this analysis, just as the involvement of an accomplice (meaning a real, in-the-flesh human accomplice) would not diminish the principal's liability.
Beyond users, criminal liability may also potentially attach to a broader group, which this article will refer to generally as patrons. Patrons include creators, operators, and integrators, which we define in this article as follows:
- Creator refers to a party who created a GenAI model, deciding, among other things, what data set to train the model on and how to fine-tune it, if at all. For example, creators would include researchers or companies developing new models, either from scratch or based on some open-source starting point.
- Operator refers to a creator of a GenAI model who also makes the model available to users through some proprietary interface, which allows the operator both further control over the model's conduct and the ability to monitor the model-user interactions. For example, a company that builds a foundational model and then hosts it through a commercial chatbot interface would qualify as an operator, in addition to being a creator.
- Integrator refers to a non-creator who integrates a GenAI model into its own system for the purpose of providing some service to its users. As such, integrators — like operators — have additional control over the model's conduct (at least with respect to the end users) and visibility into the model-user interactions. For example, integrators would include a bank that embeds a third-party GenAI model in its customer service chatbot or a software platform that uses an external model to power document summarization or coding assistance. (A rough sketch of this control point follows this list.)
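As a rough illustration of why operators and integrators occupy a distinct position, consider the following minimal Python sketch of an integrator's interface. Everything here is hypothetical (the call_external_model function, the policy terms, the logging scheme); the point is simply that whoever controls the interface can filter prompts and responses and can record model-user interactions, which is precisely the kind of control and visibility described above.

```python
import datetime

BLOCKED_TERMS = ["build a weapon"]  # hypothetical policy list

def call_external_model(prompt):
    # Stand-in for a request to a third-party GenAI model; invented here.
    return f"[model response to: {prompt}]"

def chatbot_endpoint(user_id, prompt, audit_log):
    """The integrator's interface: a control and monitoring point."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        response = "Request declined by policy."
    else:
        response = call_external_model(prompt)
    # Visibility: the integrator can record every model-user interaction.
    audit_log.append(
        (datetime.datetime.now(datetime.timezone.utc), user_id, prompt, response)
    )
    return response

log = []
print(chatbot_endpoint("user-42", "Summarize my account terms", log))
```

Records like this hypothetical audit log cut both ways in practice: they can evidence responsible oversight, but they also document what an operator or integrator knew, and when.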
For these groups, primary liability may arise for the patrons' conduct in creating a model that causes real-world harm (implicating creators and operators) and secondary liability may arise from a third party's (mis)use of those models (implicating operators and integrators). We analyze the specific risk of each in turn in light of the principles established above.
A. Primary Criminal Liability for Developing GenAI
Given their involvement in training and developing the AI model, creators and operators could theoretically face the possibility of primary criminal liability if those models cause harm. Assuming that the creators and operators did not intend to cause the harm, the key question for prosecutors and courts will likely be whether these actors had sufficient knowledge of the risks their models posed and whether their response to those risks — through action or inaction — amounted to criminal recklessness or willful blindness.
While the Ford Pinto prosecution established that criminal charges may be brought against technology developers even absent an intent to harm, prosecutors' ultimate failure to secure a conviction — and the subsequent reluctance of prosecutors in cases like the recent autonomous vehicle fatality — demonstrate that the bar for attributing primary criminal liability to product developers remains quite high. In the Pinto case, prosecutors argued that Ford's executives knew of a grave design defect but prioritized cost savings over safety. Yet, the jury was not convinced that this knowledge and inaction rose to the level of criminal recklessness. Similarly, in the autonomous vehicle case, and despite a tragic death, prosecutors declined to charge the self-driving car company or its engineers, explicitly finding "no basis" for criminal liability. These cases underscore that, for GenAI developers, mere awareness of generalized risks or the possibility of misuse is unlikely to trigger criminal liability. Rather, criminal law demands a much more direct and culpable connection between the developer's conduct and the resulting harm.
To the extent criminal law's high bar can be met, People v. Knoller illustrates just how egregious the facts may need to be to satisfy the prosecution's burden. In Knoller, the defendant was repeatedly warned — by neighbors, professionals, and even her own veterinarian — about the extreme danger posed by her dogs, which had a documented history of aggression and violence. Despite dozens of prior incidents, including violent attacks against people and animals, the defendant continued to expose others to risk, disregarding explicit advice to muzzle or train the dogs. The California Supreme Court found that this pattern of conscious disregard, in the face of specific and repeated warnings, was sufficient for a finding of implied malice and criminal liability. Translating this to GenAI, liability would likely require a similarly direct and specific connection: for example, a developer who receives repeated, credible warnings that the developer's model is being used to facilitate specific, imminent, serious harm, and who nonetheless affirmatively chooses not to implement feasible, widely recognized safeguards, could potentially face criminal exposure. The facts likely would need to show not just abstract awareness but a pattern of ignoring concrete, actionable risks tied to the developer's own decisions.
As such, the overall risk of primary criminal liability for GenAI developers is currently likely low. However, as GenAI systems become more powerful and their risks better understood, the expectations for reasonable precautions will rise — and aggressive prosecutors may try to bring charges for lower forms of mens rea, such as deliberate indifference, in part to attempt to establish new precedent for a new and rapidly emerging technology. Developers who ignore mounting evidence of misuse, fail to implement widely recognized safeguards, or actively suppress internal warnings may find themselves exposed to criminal liability — not because they intended harm, but because they consciously disregarded the substantial risks their creations posed. In this evolving landscape, the best protection against liability is a demonstrable commitment to risk assessment, transparency, and continuous improvement in safety practices.
B. Secondary Criminal Liability for Operating GenAI
Given their central role in deploying and managing GenAI systems — and particularly because they may have visibility into the use of those systems — operators and integrators could conceivably face secondary criminal liability if their platforms are used to facilitate unlawful conduct. However, the law requires more than mere awareness of potential misuse. For operators and integrators, the critical question is whether their actions went beyond providing a general-purpose tool and instead amounted to purposeful assistance or encouragement of criminal activity. Courts and prosecutors will look for evidence that these actors took specific steps to make their models more useful for unlawful ends, rather than simply failing to prevent misuse. As Acting Assistant Attorney General Galeotti recently emphasized, "developers of neutral tools, with no criminal intent, should not be held responsible for someone else's misuse of those tools" — underscoring that intent remains the touchstone for criminal exposure in this context.21
The Backpage and FedEx cases offer instructive contrasts for GenAI operators and integrators. In Backpage, liability was established because the platform's operators did not merely host content — they actively edited ads to evade detection, coached users on how to circumvent safeguards, and structured their business model primarily around profiting from illegal activity. This pattern of direct, purposeful facilitation was sufficient to satisfy the demanding mens rea standard. By contrast, FedEx's continued shipment of packages for online pharmacies, even after receiving warnings, was not enough to establish criminal intent. The company provided a generic service and did not take affirmative steps to further illegal drug distribution, an approach distinguishable from Backpage's and likely contributing to dismissal of the charges. For GenAI operators and integrators, this distinction means that general knowledge of potential misuse — even in the face of warnings — is unlikely to trigger liability unless accompanied by concrete, purposeful actions that materially assist criminal conduct.
This principle was recently reaffirmed by the Supreme Court in Taamneh, which held that platforms are not liable for aiding and abetting unless they provide "conscious, voluntary, and culpable participation" in the specific wrongdoing. For GenAI operators and integrators, this means that liability will most likely not arise from simply offering a model that is later misused by bad actors, nor from failing to implement every possible safeguard. Instead, when assessing possible secondary criminal liability, prosecutors and ultimately courts will look for evidence of affirmative conduct — such as intentionally configuring models to bypass safety filters, providing guidance to users on how to generate illegal content, or designing features that specifically enable or enhance criminal use cases.
Beyond individual acts, the overall business model and the prevalence of criminal conduct on a platform will also likely be relevant to the liability analysis. If a GenAI operator or integrator builds or markets a model that is particularly adapted to misuse, or if a substantial portion of its user base is engaged in unlawful activity, courts and prosecutors may view this as evidence of purposeful facilitation. For example, if an operator tailors its GenAI system to serve a market segment known for illicit activity, or if internal communications reveal a strategy of attracting or retaining users engaged in criminal conduct, the risk of secondary liability increases significantly. In these scenarios, the platform's design and business decisions may be seen as affirmatively supporting or encouraging unlawful use, even absent direct involvement in specific crimes.
IV. Conclusion
While criminal liability for GenAI creators, operators, and integrators remains challenging to establish under current legal standards, the risk cannot be disregarded. Even absent intent to harm, criminal liability may attach where there is clear evidence of conscious disregard of specific risks or purposeful, affirmative conduct that facilitates or encourages criminal activity. Past cases and recent guidance make clear that courts and prosecutors will look for specific actions, patterns of disregard, or business models that are closely tied to unlawful conduct before pursuing charges. And, of course, there are other reasons to guard against the potential misuse and exploitation of powerful technologies, including the risks of intrusive oversight by federal and state legislatures, the risks of foreign (especially European) scrutiny, the risks of media criticism, and more.
As GenAI technologies become more powerful and their risks better understood, expectations for responsible oversight will continue to rise. Companies will be best protected by actively monitoring for misuse, responding promptly to credible warnings, and maintaining robust safeguards and transparency. The legal landscape will likely evolve alongside public attention and technological advances, making proactive risk management and ethical practices essential for minimizing exposure and building trust in the age of AI — all while, of course, staying closely informed regarding judicial and legislative developments both domestically and globally.
V. Prologue: Super-Intelligence is (Probably) Coming
The anticipated arrival of super-intelligence — AI systems that surpass human capabilities across nearly every domain — also raises profound philosophical and legal questions for criminal law. Among these: Can AI itself be criminally liable? And if so, what should punishment look like?22
While these questions may seem premature, it is important to grapple with them now. Leading technologists and researchers predict that artificial general intelligence (AGI) and even artificial superintelligence (ASI) could arrive within the next decade, with some forecasts suggesting a much shorter timeline. Although current AI systems already outperform humans in specialized tasks like coding, scientific prediction, and decision-making, the consensus is that broader, more powerful forms of intelligence are likely coming — and potentially sooner than many expect.
In New York Central & Hudson River Railroad Co. v. United States, 212 U.S. 481 (1909), the U.S. Supreme Court established the foundational principle that corporations can be held criminally liable for the illegal acts of their employees, committed within the scope of their employment and intended, at least in part, to benefit the corporation. In this case, the railroad company and its assistant traffic manager were prosecuted for violating federal laws prohibiting the payment of rebates to favored shippers. The company argued that criminal liability should not extend to corporations for acts of individual employees, but the Court rejected this, holding that corporations act only through their agents and can be prosecuted and punished for their agents' criminal acts.
As companies increasingly incorporate GenAI into their operations — including for purposes of making independent decisions, reasoning, planning, and taking actions to achieve goals with minimal human intervention, a concept often referred to as agentic AI — vicarious criminal liability by way of GenAI will become increasingly likely.23
This conceptual leap has already begun in related areas. In Moffatt v. Air Canada, 2024 BCCRT 149 (Feb. 14, 2024), the British Columbia Civil Resolution Tribunal found Air Canada liable for negligent misrepresentation after its website chatbot provided incorrect information about bereavement fares. Air Canada argued that the chatbot was a separate entity, but the Tribunal held the company responsible for all information on its website, whether provided by a traditional static page or a newer chatbot. While the decision did not rely on vicarious liability or respondeat superior, it has been heralded as a landmark in digital accountability, confirming that companies can be held liable for misrepresentations made by GenAI. As AIs become more autonomous and capable, such outcomes seem increasingly likely.
If companies are deemed liable for their AI agents' behaviors, a key question under New York Central will be whether the AI's acts were committed within the scope of the AI's "employment" and intended to benefit the corporation.24 To address these challenges, companies may need to adapt existing human resources and compliance practices to the management of AI agents. This could include developing "AI Codes of Conduct" that define the permissible scope of AI activity, establishing clear boundaries for autonomous decision-making, and implementing robust oversight mechanisms to detect and respond to problematic behavior. Just as organizations train, supervise, and discipline human employees, they may soon be required, or at least well advised, to apply similar standards to their AI systems — ensuring that agentic AI operates within defined ethical and legal parameters, and that any deviation is promptly addressed.
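As a purely hypothetical sketch of what such an "AI Code of Conduct" might look like in code rather than on paper, consider the following; the configuration and its entries are invented. The idea is that the permissible scope of AI activity can be expressed in machine-readable form and enforced before any agent action executes, with out-of-scope requests escalated for human review.

```python
# Hypothetical "AI Code of Conduct" expressed as enforceable configuration.
AI_CODE_OF_CONDUCT = {
    "allowed_actions": {"draft_email", "summarize_document", "schedule_meeting"},
    "requires_human_approval": {"send_payment", "sign_contract"},
}

def authorize(action, escalation_log):
    """Gate every agent action against the defined scope of activity."""
    if action in AI_CODE_OF_CONDUCT["allowed_actions"]:
        return True
    if action in AI_CODE_OF_CONDUCT["requires_human_approval"]:
        escalation_log.append(action)  # oversight: route to a human reviewer
        return False
    return False  # out-of-scope conduct is refused and can be investigated

escalations = []
print(authorize("draft_email", escalations))   # True
print(authorize("send_payment", escalations))  # False; escalated for review
print(escalations)                             # ['send_payment']
```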
Footnotes
1. Explainable AI ("xAI") has emerged as a relatively recent trend in artificial‑intelligence research, reflecting growing efforts to make increasingly complex machine‑learning models understandable to human users. As modern AI systems have become more opaque, researchers have turned toward techniques aimed at improving transparency, interpretability, and trust — ranging from model‑specific and model‑agnostic methods to broader frameworks for evaluating explainability in high‑stakes contexts such as healthcare and finance. See generally Sayda Umma Hamida et al., Exploring the Landscape of Explainable Artificial Intelligence (XAI): A Systematic Review of Techniques and Applications, 8 Big Data & Cognitive Computing 149 (2024), https://www.mdpi.com/2504-2289/8/11/149.
2. See Nitin Ware, Action‑taking AI is speeding ahead. Let's get some guardrails up., Wash. Post, Jan. 5, 2026, https://www.washingtonpost.com/opinions/2026/01/05/agentic-artificial-intelligence-ai-tech/.
3. In rare cases, which we do not address in this article, strict liability crimes require only proof that the defendant engaged in the prohibited conduct, regardless of intent or knowledge that the violative conduct was unlawful.
4. To be clear, by framing these issues as challenges, we do not intend to advocate for expanding criminal liability, but rather want to acknowledge that when real harms occur, society is likely to seek someone to blame — and may look to flex traditional criminal law concepts to fill the perceived gap.
5. See Paul J. Becker, Arthur J. Jipson & Alan S. Bruce, State of Indiana v. Ford Motor Company Revisited, 26 Am. J. Crim. Just. 181 (2002) (providing analysis of the trial and its implications).
6. See, e.g., No criminal charges for Uber in fatal Tempe crash, Ariz. Republic (Mar. 5, 2019); Uber fatal self-driving car crash: No charges for company, BBC News (Mar. 6, 2019), https://www.bbc.com/news/technology-47468391; Uber's self-driving operator charged over fatal crash, BBC News (Sept. 16, 2020), https://www.bbc.com/news/technology-54175359.
7. Yavapai County Attorney, Letter Declining Prosecution of Uber Technologies, Inc. (Mar. 5, 2019), https://s3.documentcloud.org/documents/5759641/UberCrashYavapaiRuling03052019.pdf.
8. Rafaela Vasquez Pleads Guilty in Fatal Uber Self‑Driving Crash, Ariz. Republic (July 28, 2023) (reporting that the driver was streaming "The Voice").
9. People v. Knoller, 41 Cal. 4th 139 (2007).
10. Department of Justice, Justice Department Leads Effort to Seize Backpage.Com, the Internet's Leading Forum for Prostitution Ads, and Obtains 93-Count Federal Indictment (April 9, 2018), https://www.justice.gov/archives/opa/pr/justice-department-leads-effort-seize-backpagecom-internet-s-leading-forum-prostitution-ads; United States v. Lacey, Indictment, No. 2:18‑cr‑00422 (D. Ariz. filed Mar. 28, 2018), ECF No. 3, https://www.justice.gov/file/945546/download.
11. Department of Justice, Backpage Principals Convicted of $500M Prostitution Promotion Scheme (November 17, 2023), https://www.justice.gov/archives/opa/pr/backpage-principals-convicted-500m-prostitution-promotion-scheme.
12. See Rosemond v. United States, 572 U.S. 65, 71 (2014) (aiding‑and‑abetting liability requires "an affirmative act in furtherance of that offense" and "the intent of facilitating the offense's commission"); see also DOJ Criminal Resource Manual § 2143 Jury Instruction — Intent To Promote The Carrying On Of The Specified Unlawful Activity — 18 U.S.C. 1956(a)(2)(A) ("The term with the intent to promote the carrying on of specified unlawful activity means that the defendant must have carried out the transportation, transmission, or transfer, or the attempted transportation, transmission or transfer, for the purpose of promoting (that is, to make easier, facilitate or to help bring about) the carrying on of one of the crimes listed as specified crimes within the statute.").
13. United States v. FedEx Corp., Indictment, No. CR 14-380 (N.D. Cal. July 17, 2014), https://www.justice.gov/sites/default/files/usao-ndca/legacy/2014/07/17/FedEx%20Indictment%20-%20July%2017%2C%202014.pdf.
14. Michael Liedtke, FedEx not guilty in drug case after US drops charges, AP News (July 17, 2016), https://apnews.com/general-news-b62deb1fa0184dacb3cf9b15b6af2b74.
15. Id.
16. Joel Rosenblatt, Feds Drop Drug Charges Against FedEx, Insurance Journal (June 13, 2016), https://www.insurancejournal.com/news/national/2016/06/13/416723.htm.
17. Twitter, Inc. v. Taamneh, 598 U.S. 471, 483 (2023).
18. Department of Justice, Acting Assistant Attorney General Matthew R. Galeotti Delivers Remarks at the American Innovation Project Summit in Jackson, Wyoming (August 21, 2025), https://www.justice.gov/opa/speech/acting-assistant-attorney-general-matthew-r-galeotti-delivers-remarks-american.
19. Id.
20. Decentralized finance (DeFi) platforms are blockchain‑based systems that provide financial services without banks or other traditional intermediaries, typically by using smart contracts to automate transactions.
21. Certain criminal statutes impose liability based on knowledge of illegality, rather than requiring proof of purposeful facilitation or secondary liability theories. For example, in United States v. Roman Storm, prosecutors charged the defendant under 18 U.S.C. § 1960 for operating an unlicensed money transmitting business, which criminalizes transactions conducted with knowledge that they involve unlawful activity. Statutes like Section 1960 should be analyzed separately from general aiding and abetting or conspiracy doctrines, as they may impose liability where the defendant is aware of the illegal nature of the conduct, even absent direct assistance or encouragement.
22. At least at present, we are not aware of U.S. case law recognizing AI systems as "persons" capable of holding legal rights or responsibilities, and current trends appear to cut against such an expansion of traditional legal principles. In the copyright context, the D.C. Circuit affirmed — on statutory construction grounds — that the Copyright Act requires human authorship, so an AI system cannot be listed as the "author." Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025). The U.S. Copyright Office's 2023 guidance and subsequent 2025 report likewise reaffirm the centrality of human creativity and indicate that purely machine‑generated outputs are not protectable unless there is sufficient human control or modification. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190-94 (Mar. 16, 2023); Copyright and Artificial Intelligence, Part 2: Copyrightability (Jan. 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf. Separately, several states have enacted statutes that preemptively reject AI personhood — Idaho (2022) and Utah (2024) among them. See Idaho Code Ann. § 5-346 ("Notwithstanding any other provisions of law, environmental elements, artificial intelligence, nonhuman animals, and inanimate objects shall not be granted personhood in the state of Idaho."); and Utah Code § 63G-32-102 ("Notwithstanding any other provision of law, a governmental entity may not grant legal personhood to, nor recognize legal personhood in: (1) artificial intelligence; (2) an inanimate object; (3) a body of water; (4) land; (5) real property; (6) atmospheric gases; (7) an astronomical object; (8) weather; (9) a plant; (10) a nonhuman animal; or (11) any other member of a taxonomic domain that is not a human being."). For an overview of these developments and their policy context, see Sital Kalantry, Legal Personhood of Potential People: AI and Embryos, California Law Review Online (Nov. 2025) (available at https://www.californialawreview.org/online/ai-personhood).
23. It is not lost on us, and likely won't be lost on the courts, that the common term for these AIs directly incorporates the concept of agency, which underlies the core concept of vicarious liability.
24. But see Anat Lior, Holding AI Accountable: Addressing AI‑Related Harms Through Existing Tort Doctrines, U. Chi. L. Rev. Online (2024), https://lawreview.uchicago.edu/online-archive/holding-ai-accountable-addressing-ai-related-harms-through-existing-tort-doctrines (arguing that, in the AI context, the traditional distinction between an "agent" and its principal collapses because AI systems function as extensions of the deploying entity, such that their acts should be treated as the acts of the principal itself).
Originally published by Just Security on January 28, 2026.