On December 11, 2025, the President of the United States issued an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which seeks to deter what it describes as onerous state AI regulation and to move the United States toward a single national AI standard.
The order follows a failed congressional effort to secure a temporary moratorium on state AI laws, an idea floated twice last year. The executive order relies on different legal mechanisms than the proposed moratorium did: it is an exercise of the President's authority to direct the actions of federal agencies, and it coordinates the federal government's response to state AI laws.
The order will likely lead to case-by-case litigation, and state enforcement will continue unless and until a court enjoins any specific law or provision. As such, the executive order does not provide immediate relief from state laws applicable to AI systems.
The order articulates a federal policy to sustain and enhance the United States' AI dominance through a minimally burdensome national AI framework. To operationalize that policy, it directs the Attorney General to establish an AI Litigation Task Force within 30 days.
The Task Force's mandate is to bring suits challenging state AI laws viewed as inconsistent with federal policy, including on Dormant Commerce Clause grounds and preemption by existing federal regulations, among other theories.
The Secretary of Commerce must, within 90 days, publish an evaluation of existing state AI laws that identifies "onerous" laws in conflict with the stated federal policy and other state laws that should be referred to the Task Force for potential challenge.
The order specifically states that the evaluation must, at a minimum, flag state laws that require AI models to "alter their truthful outputs" and laws that compel disclosures or reporting in ways that could violate the First Amendment or other constitutional guarantees. The evaluation may also recognize state measures that the administration believes promote AI innovation.
The order also uses federal funding to shape AI policy outcomes. The Department of Commerce (DOC) is directed to issue a policy notice specifying conditions for receiving remaining non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) program.
That notice will deem states with "onerous" AI laws identified in the DOC's evaluation ineligible for those non-deployment funds and explain how a fragmented AI regulatory landscape could undermine BEAD's mission. Executive agencies must also assess whether discretionary grants can be conditioned on states refraining from enacting or enforcing AI laws that conflict with the federal policy.
The Federal Communications Commission (FCC) Chair is called on to begin a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that would preempt conflicting state requirements.
Separately, the Federal Trade Commission (FTC) Chair is asked to issue a policy statement explaining how the FTC Act's prohibition on unfair or deceptive acts or practices applies to AI models, including the circumstances in which state laws that require alterations to truthful AI outputs would be treated as preempted by that federal prohibition.
Finally, the order tasks the White House Special Advisor for AI and Crypto and Assistant to the President for Science and Technology with developing legislative recommendations for a uniform federal AI framework that would preempt state AI laws that conflict with the proposed federal policy.
The order expressly contemplates carve-outs in the legislative proposal for child safety protections, AI compute and data center infrastructure, and state procurement and use of AI, with room for additional topics to be determined.
What the order does and does not do
The order organizes the executive branch and signals litigation priorities, but it does not directly preempt any state law. Preemption is governed by the Supremacy Clause and arises from federal statutes enacted by Congress or from federal rules with the force of law promulgated by agencies pursuant to a clear congressional delegation.
An executive order, standing alone, neither creates new statutory obligations nor displaces contrary state statutes. State AI statutes remain in force and enforceable by state officials unless and until a court enjoins them on a challenge brought by the Department of Justice or private parties.
The order directs the DOJ to bring challenges to "onerous" AI laws and asks agencies to take steps that, if they result in binding federal rules, could bolster narrower conflict preemption arguments.
Even those agency actions face threshold questions. The FCC's jurisdiction is subject-matter specific, and any AI-reporting standard would need a clear nexus to communications services and an actual conflict with identified state rules to sustain a preemption claim. The FTC is called upon to issue a policy statement, which would not in and of itself be a binding rule capable of forming the basis for preemption of state laws.
The order's funding condition provisions, likewise, face limits. Under the Spending Clause, federal conditions on grants to states must be, among other things, unambiguous, related to the federal interest in the program, and not so coercive as to compel state policy choices.
Whether BEAD non-deployment funding conditions tied to state AI laws satisfy those standards will depend on how the DOC crafts the notice and connects the conditions to BEAD's statutory objectives. Legal challenges by the states, grounded in the anticommandeering principle, are likely if the federal conditions are seen as excessive or unrelated to the underlying program.
Inevitably, state AI laws that conflict with the administration's AI policy will be subject to litigation. States are also likely to challenge the executive order and the resulting agency actions. Consequently, differing interpretations from various courts could add another layer of complexity to the existing patchwork of state regulation.
Gray areas and likely early targets
Much in the executive order is open to interpretation. The order does not define "onerous," nor is it clear whether only laws that regulate AI specifically are in scope or whether more general laws that affect AI will also be considered. For example, new regulations under the California Consumer Privacy Act (CCPA) regulate the use of automated decision-making technology (ADMT) for "significant decisions."
Given the executive order's focus on laws that seek to address possible bias, as discussed below, the CCPA ADMT regulations may be a target as well, especially as California is called out in the fact sheet accompanying the executive order for "considering requiring AI companies to censor outputs."
Even if the administration chooses to focus solely on AI-specific laws, over a thousand AI bills have been introduced and dozens of AI laws have already been passed at the state level, making it difficult to predict the scope of the administration's focus at this stage.
The DOC's initial list is likely to reflect both the administration's legal theories for challenge and policy priorities. It will also be interesting to see which laws are not included in the DOC list, as this may indicate to the states that they can safely model their laws after those not targeted.
A notable feature of the executive order is its repeated reference to state AI laws that require AI models to alter their truthful outputs.
Commentators have consistently read that phrase as a critique of algorithmic discrimination and bias mitigation provisions that use disparate-impact concepts, consistent with the earlier Executive Order 14319, titled "Preventing Woke AI in the Federal Government," and with this executive order's language flagging the Colorado law's ban on "algorithmic discrimination."
These bias mitigation provisions typically require developers or deployers of high-risk AI systems to implement risk management programs, conduct impact assessments, monitor for discriminatory outcomes, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.
The administration's position appears to be that state AI laws requiring bias mitigation may pressure designers or users to adjust models or outputs in ways that depart from what the order characterizes as "truthful model behavior."
Furthermore, the administration is likely to argue that certain disclosure mandates for AI models operate as compelled speech in violation of the First Amendment. The order explicitly instructs the DOC to identify state laws that compel developers or deployers to disclose or report information in ways that would violate constitutional protections.
That instruction appears to point to transparency and reporting obligations that require publication of governance frameworks, incident reports, or other specific content. Courts have upheld many commercial disclosure regimes, but compelled speech challenges can succeed where requirements are not purely factual or are unduly burdensome relative to a legitimate governmental interest.
The order calls out Colorado's AI Act, set to take effect on February 1, 2026, which imposes a duty of reasonable care on developers and deployers of high-risk AI systems to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination in consequential decisions, such as employment, lending, housing, education, and government services.
It requires deployers to implement risk management programs and conduct impact assessments, and it requires developers to supply documentation to support those assessments and disclose known risks.
The Colorado law is widely expected to be the first target of a DOJ suit and offers the administration an opportunity to test whether courts accept the premise that disparate-impact-based mitigation obligations unconstitutionally distort model outputs. Other state AI laws that focus on bias and discrimination, or on consumer deception, are likely targets as well.
California and New York laws are not specifically mentioned in the final executive order but are widely acknowledged as other possible targets. Reportedly, California's Transparency in Frontier Artificial Intelligence Act (TFAIA), effective January 1, 2026, was mentioned in an earlier version of the executive order made available to the public in November 2025.
TFAIA focuses on frontier models at very high compute thresholds and requires developers, particularly large frontier developers, to publish governance frameworks that describe how catastrophic risks are identified and mitigated, to publish transparency reports before deployment, to report critical safety incidents to the state Office of Emergency Services on specified timelines, and to implement whistleblower protections.
New York's Responsible AI Safety and Education (RAISE) Act, signed into law December 19, 2025, establishes transparency and safety obligations for large developers of frontier models and "builds on California's recently-adopted framework," according to the statement on the New York governor's website.
The most plausible federal challenges would likely target compelled disclosure and reporting obligations as infringing free speech, attack alleged extraterritorial effects, or, if the FCC proceeds, assert conflicts with any federal AI reporting standard the FCC adopts within its jurisdiction.
At the same time, California and New York tailored their regimes to catastrophic risk and limited their scope to frontier developers, which may make sweeping federal challenges harder to sustain.
There are other plausible targets. For example, several states have advanced rules addressing use of synthetic media in political ads or before elections, requiring specific disclaimers. The DOC evaluation will be a key early signal for where the administration intends to concentrate its resources.
Against that backdrop, the order's most immediate practical impact will be to funnel selected test cases into federal court and to accelerate agency processes that could, over time, provide additional preemption arguments.
Absent a federal law, however, the fate of such litigation remains uncertain, not least because most of these laws have yet to be challenged based on existing federal frameworks. State enforcement will proceed while those cases wind through district courts and appeals.
Conclusion
The December 11 executive order signals a federal preference for a minimally burdensome national standard and represents an ambitious bid to shape national AI policy through coordinated litigation, funding conditions, and agency agenda-setting.
Given the constraints of federal preemption doctrine and the time horizons of litigation and rulemaking, rapid, sweeping victories that significantly alter the state AI-law landscape appear unlikely in the near term. The most reliable path to durable national regulatory uniformity remains federal legislation, which has proven elusive to date.
Until Congress acts, businesses operating in multiple jurisdictions should continue to build compliance programs around the state statutes on the books, even as they monitor federal developments.
Originally published by Thomson Reuters
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.