ARTICLE
9 December 2025

AI Is A Hot Topic Among Lawmakers, But Who Will Have The Final Say?

BakerHostetler
Recognized as one of the top firms for client service, BakerHostetler is a leading national law firm that helps clients around the world address their most complex and critical business and regulatory issues. With five core national practice groups — Business, Labor and Employment, Intellectual Property, Litigation, and Tax — the firm has more than 970 lawyers located in 14 offices coast to coast. BakerHostetler is widely regarded as having one of the country’s top 10 tax practices, a nationally recognized litigation practice, an award-winning data privacy practice and an industry-leading business practice. The firm is also recognized internationally for its groundbreaking work recovering more than $13 billion in the Madoff Recovery Initiative, representing the SIPA Trustee for the liquidation of Bernard L. Madoff Investment Securities LLC. Visit bakerlaw.com

A summary of the current US landscape

I. Introduction

The rapid rise of generative artificial intelligence (AI) has pushed the issue of AI regulation to the forefront, but U.S. policy remains fragmented. The federal government has not passed comprehensive AI legislation, and current policy initiatives do not align with previous guidance from the National Institute of Standards and Technology (NIST). In the absence of federal action, states are developing their own frameworks for AI regulations, with early actors having the opportunity to become trendsetters. But these AI frameworks may clash with the new administration's pro-innovation stance, potentially leading to federal preemption or intervention in the enforcement of state AI laws.

The federal government has hinted at preemption but has not taken any formal steps toward it. As a result, the U.S. AI regulatory landscape mirrors that of privacy law: a patchwork of state statutes supplemented with nonbinding federal guidance. This presents challenging compliance issues for entities subject to these regulations. The potential for federal preemption or challenges to specific state laws increases the uncertainty for companies developing AI compliance programs. Until the federal government takes formal action, the NIST AI Risk Management Framework (RMF) is the only nationally applicable guidance these entities can rely on.

In this blog post, we explore the current landscape of AI regulation:

  • Executive orders (EOs)
  • State laws
  • NIST AI RMF

II. EO 14179 formalizes the federal government's pro-innovation approach

President Donald Trump signed Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, on January 23, 2025, revoking the AI policies and directives previously in place and affirming a commitment to American AI innovation. The order directs officials to develop an action plan to achieve its goal of "enhanc[ing] America's global AI dominance" for economic and security reasons. Unlike the NIST RMF, which focuses on mitigating risks to AI users, the EO and related materials emphasize accelerating AI innovation.

a. Office of Management and Budget publishes Memorandum M-25-21 directing agency adoption of AI

Memorandum M-25-21, published on April 3, 2025, rescinds and replaces the prior directive on advancing agency AI use, Memorandum M-24-10. It directs executive departments and independent regulatory agencies to improve public services through AI, providing guidance on documentation and public engagement. Intelligence agencies are either exempt or subject to specialized requirements.

The memorandum outlines a framework to "drive AI innovation," requiring agencies to publish AI strategies; prioritize impact, infrastructure and workforce development; and share AI assets. Agencies1 must support American-made technologies and establish transparent data governance structures. The memorandum also mandates the appointment of chief AI officers and governance boards to oversee AI integration and workforce transformation. These officers coordinate with interagency bodies and ensure compliance with federal standards and policies.

To build public trust in federal AI use, agencies must document and report high-impact AI use cases, conduct pre-deployment testing, perform impact assessments and ensure human oversight. The memorandum also defines presumed high-impact use cases, outlines risk factors and recommends public feedback mechanisms.

b. The White House reveals America's AI Action Plan

As required by EO 14179, the White House released America's AI Action Plan on July 23, 2025. The plan is built around three strategic pillars designed to position the U.S. as a global leader in AI: Innovation, Infrastructure, and International Diplomacy and Security.

Pillar 1: Innovation. This pillar aims to accelerate AI development by removing regulatory barriers and promoting open-source models. It supports cross-sector AI adoption, workforce reskilling and strengthening the manufacturing supply chain. The plan also emphasizes advancing AI science, establishing data standards, and improving model transparency and reliability. Government adoption of AI is encouraged to enhance efficiency and drive growth while maintaining vigilance around security risks and legal concerns such as deepfakes.

Pillar 2: Infrastructure. This pillar addresses the energy and hardware demands of AI. It proposes streamlining permits for data centers and semiconductor facilities, modernizing the electric grid, and expanding domestic chip manufacturing. Workforce training and infrastructure security are also prioritized to support sustainable AI development.

Pillar 3: International Diplomacy and Security. This pillar focuses on exporting American AI technologies to allies, harmonizing international standards and countering adversarial influence. It emphasizes enforcing export controls, securing the flow of AI components and information, and assigning national security agencies to assess cyber and biosecurity risks.

Notably, Pillar 1 targets regulatory barriers that hinder innovation. It calls for their removal and warns that the "Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds." This signals a potential federal intervention strategy, such as preemption, to prevent any state from enforcing so-called "burdensome AI regulations."

c. Federal moratorium has new life after EO draft leak

On November 20, 2025, the International Association of Privacy Professionals published an article describing a leaked White House draft EO outlining the executive branch's intention to target burdensome state AI laws. The leak came as Congress reportedly considers adding AI moratorium language to the National Defense Authorization Act. The draft EO would charge various executive departments with identifying state AI laws susceptible to constitutional challenges. Trump has recently expressed public support for efforts to address the patchwork of state AI laws, describing them as "cumbersome" and stemming from a "fear-based regulatory capture." The report has renewed discussions of federal preemption and other legal challenges to state AI laws perceived as overly burdensome.

III. Early-adopter states are the primary regulators of AI – for now

In the absence of federal regulation, four states (Utah, Colorado, Texas and California) have taken the lead in enacting AI laws. The divergent approaches of these states create a fragmented patchwork of AI regulations, similar to today's privacy law landscape, posing compliance challenges for businesses and policymakers. The executive branch appears eager to deregulate this area, as demonstrated by the AI Action Plan. The leaked draft EO may have a chilling effect on states currently considering AI legislation as they seek to avoid provisions that might draw federal scrutiny.

a. Overview of state AI legislation

Utah took the lead in enacting AI legislation with its Artificial Intelligence Policy Act (AIPA) in March 2024. AIPA imposes disclosure requirements on entities using generative AI tools to simulate human conversations with consumers. On May 17, 2024, Colorado passed its comprehensive Artificial Intelligence Act (CAIA), which establishes a broad compliance framework for developers and deployers of "high-risk" AI systems. CAIA requires developers and deployers to mitigate foreseeable risks of algorithmic discrimination, which broadly covers differential treatment resulting from AI systems used to make consequential decisions (e.g., education enrollment or employment opportunities).

On June 22, 2025, Texas followed with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which focuses primarily on prohibiting harmful uses of AI by both government entities and private actors, though private actors must act with knowledge or intent to be held liable. Lastly, California's Transparency in Frontier Artificial Intelligence Act (TFAIA), adopted on September 29, 2025, specifically targets powerful AI models known as "frontier models." TFAIA requires the developers of these frontier models to maintain a detailed framework of best practices, mitigation strategies, cybersecurity protocols and assessment processes.

b. Divergent approaches and emerging patchwork

So far, states with AI regulations have adopted distinct approaches. If AI legislation follows the trend of privacy legislation, many or most of the remaining states will adopt one of these approaches, and the key differences among them will determine which becomes the prevailing model.

One key difference among AI laws is whether they target the developer or the deployer of AI. This distinction shapes the restrictions and obligations imposed. For example, deployer-focused laws, like TRAIGA, prohibit certain harmful AI uses, whereas developer-focused laws require transparency reporting, comprehensive documentation and risk management systems.

Another notable variation among existing AI regulations is whether they address broad risks or specific uses. Some states have primarily adopted general outcome frameworks, which regulate broad categories of high-risk systems, or have amended existing privacy laws to include general AI-related consumer protections. Other states have implemented laws targeting specific uses of AI, such as AI-generated deepfakes. California supplemented its general outcome framework with specific outcome laws, including prohibiting AI chatbots from presenting as medical professionals.

Narrowly tailored laws and amendments may be less politically risky because they are incremental and less comprehensive, which could make TRAIGA a model for other states. States seeking a broader approach, however, may look to the comprehensive CAIA instead. Each state's choice will depend on a variety of factors, including its dominant industries, regulatory philosophy and existing legal landscape.

c. Federal preemption risks

The federal administration's approach to AI governance is distinctly pro-innovation. As discussed earlier, the AI Action Plan seeks to deregulate at both the federal and state levels. The AI Action Plan threatened to cut federal funding to states with "burdensome regulations" on AI, and the Senate attempted to pass a 10-year moratorium on state AI laws. That effort failed when the Senate voted 99-1 to strip the provision, but the reports of the leaked draft EO and public support from the president may have shifted the momentum on a federal moratorium. While no such bill is pending, the draft EO shows the executive branch still considers promoting AI innovation to be a top priority.

If the federal government were to attempt preemption, state laws focused on consumer protection and risk mitigation – such as those requiring additional documentation, transparency reporting and incident disclosures – could be at risk. These provisions, although designed to safeguard consumers, also increase operational costs and slow innovation, thereby creating tension with the federal pro-innovation goals. Ultimately, the scope of any federal preemption, whether partial or comprehensive, will determine which state-level measures endure and how states regulate AI in the future. Beyond direct preemption, the leaked draft EO signals that the executive branch would like the federal government to target state AI laws that it views as susceptible to constitutional challenge.

IV. NIST AI RMF focuses on risk management and user protection

NIST developed the AI Risk Management Framework, a voluntary tool designed to protect AI users by helping organizations identify, assess and manage the risks associated with AI systems. Though developed by a federal government entity, the RMF does not form the direct basis for any formal regulations or regulatory enforcement action. Its broad accessibility allows organizations using AI to adopt the RMF at any point in the AI life cycle (development, deployment, etc.). Although the RMF states that it is pro-innovation, its granular approach to AI risk management differs from the approach outlined in America's AI Action Plan, discussed in Part II.

a. The RMF is ultimately about measuring and evaluating risk

Risk management is a foundational principle of the RMF. Effective risk management leads to more trustworthy AI systems that benefit people, organizations and societal systems, while ineffective risk management can increase both the probability and magnitude of AI-related harms. The RMF emphasizes that effective AI risk management begins with measuring risk based on the probability of an event and the magnitude of its consequences. By using the RMF, organizations can more effectively understand the inherent limitations of AI models, identify the risks arising from these limitations and develop safeguards to address these risks.
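For illustration only – the RMF does not prescribe a scoring formula – the sketch below shows one common way to operationalize this probability-and-magnitude framing as a simple risk score. The 1-5 scales, example risks and ranking logic are hypothetical assumptions, not NIST requirements.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One identified AI risk, scored on hypothetical 1-5 scales."""
    name: str
    probability: int  # likelihood the event occurs (1 = rare, 5 = almost certain)
    magnitude: int    # severity of the consequences (1 = negligible, 5 = severe)

    @property
    def score(self) -> int:
        # Risk as a function of the probability of an event and the
        # magnitude of its consequences, per the RMF's framing
        return self.probability * self.magnitude

# Hypothetical risk inventory for a generative AI deployment
risks = [
    AIRisk("Hallucinated output reaches customers", probability=4, magnitude=3),
    AIRisk("Training data leaks personal information", probability=2, magnitude=5),
    AIRisk("Model drift degrades accuracy over time", probability=3, magnitude=2),
]

# Rank risks so mitigation effort goes to the highest scores first
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score}")
```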

b. Promoting trustworthy AI is the RMF's primary goal

The RMF also describes six key characteristics that define trustworthy AI: (1) valid and reliable, (2) safe, (3) secure and resilient, (4) accountable and transparent, (5) explainable and interpretable, and (6) privacy-enhanced. These characteristics are defined by their ideal outcomes or goals. For example, "secure and resilient" describes an AI system that "[p]rotects against unauthorized access and cyber threats" and is "resilient enough to maintain functionality and integrity even in the face of adverse conditions." Different AI systems can satisfy these characteristics, or become trustworthy, in various ways.

c. The RMF breaks down into a clear organizational framework

The RMF core provides outcomes and actions to help organizations discuss, understand and manage AI risks. The core functions are (1) govern, (2) map, (3) measure and (4) manage.

The four core functions are divided into categories and subcategories that outline specific outcomes supporting broader objectives. For example, Category 2 of the map function states that "categorization of the AI system is performed." Subcategory 2.2 of Category 2 elaborates that "information about the AI system's knowledge limits and how system output may be utilized and overseen by humans is documented" so that organizations have sufficient information to make decisions and take action. The RMF's clear and flexible design lets organizations tailor its use to their own AI use cases and desired outcomes.
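To make the govern/map/measure/manage structure concrete, the sketch below models the core as nested functions, categories and subcategories and flags outcomes that lack supporting documentation. The Map 2 and Map 2.2 language is quoted from the paragraph above; the evidence tracker and gap check are hypothetical assumptions for illustration, not an official NIST artifact.

```python
# A minimal, hypothetical representation of the RMF core: functions contain
# categories, and categories contain subcategories describing specific outcomes.
rmf_core = {
    "map": {
        "Map 2": {
            "description": "Categorization of the AI system is performed.",
            "subcategories": {
                "Map 2.2": (
                    "Information about the AI system's knowledge limits and how "
                    "system output may be utilized and overseen by humans is documented."
                ),
            },
        },
    },
    # "govern", "measure" and "manage" would be filled out the same way.
}

# Hypothetical tracker of the evidence an organization has for each outcome.
evidence = {
    "Map 2.2": "Model card, section 4 (knowledge limits and human oversight).",
}

# Walk the core and flag any subcategory outcome that is not yet documented.
for function, categories in rmf_core.items():
    for category_id, category in categories.items():
        for sub_id, outcome in category["subcategories"].items():
            status = "documented" if sub_id in evidence else "GAP"
            print(f"[{function}] {sub_id} ({status}): {outcome}")
```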

d. The RMF could serve as the national compliance template – for now

The federal government's pro-innovation approach does not align with the RMF's focus on preventing consumer harm. In many ways, the RMF more closely aligns with the generally applicable AI regulations in states like Colorado. Even for targeted AI laws focusing on specific uses (e.g., deepfakes), the RMF provides a universal framework for measuring and evaluating AI risk. Going forward, entities subject to multiple AI regulations may be able to use the RMF as the basis for many of their compliance decisions, especially considering the deference states give to NIST guidance. That deference makes the RMF the closest thing to a nationally applicable AI regulation, even though it has no force of law. Absent further action from the federal government, the RMF provides the best framework for a comprehensive AI compliance program.

Footnote

1. Excerpt from the section: (1) the term "agency" means any executive department, military department, Government corporation, Government controlled corporation, or other establishment in the executive branch of the Government (including the Executive Office of the President), or any independent regulatory agency, but does not include-

(A) the Government Accountability Office;

(B) Federal Election Commission;

(C) the governments of the District of Columbia and of the territories and possessions of the United States, and their various subdivisions; or

(D) Government-owned contractor-operated facilities, including laboratories engaged in national defense research and production activities.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

