Congress Considers Moratorium on State-Level AI Regulation
The U.S. House of Representatives has passed, as part of a larger omnibus budget bill, a 10-year moratorium on state regulation of AI. If passed by the Senate, this provision would be one of the most significant federal technology policy actions in years, sharply limiting the ability of states to regulate "artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years after enactment. Since the bill's passage in the House—and following commentary from various stakeholders on the adoption of a comprehensive moratorium—the Senate has proposed further revisions to the moratorium's language. Specifically, rather than outright banning AI regulation by the states, the Senate's version would condition access to $500 million in federal AI development funds on a state's agreement to refrain from imposing such regulations.
Though significant on its face, the provision (if passed in its current form) would include a number of exceptions that would still allow for certain, limited regulation at the state level. In particular, the moratorium would not apply to any laws or regulations that:
- Remove legal impediments to deployment or operation of AI;
- Streamline licensing and other regulatory burdens in order to facilitate the adoption of AI;
- Don't impose any substantive design, civil liability, or "other requirements" on AI, unless the requirement is imposed under federal law OR under a more generally applicable law that is applied in the same manner to systems providing "comparable functions" to AI (that is, generally applicable laws that impose the same requirements on AI and non-AI systems alike); and
- Don't impose a fee or bond.
Application of the moratorium (whether in the original House version or the revised Senate version) will likely pose a legislative challenge to states that have begun to consider regulation of AI models. That said, the exceptions to the moratorium are expected to leave states a fair amount of leeway in enforcement, as states could still apply meaningful regulations to AI under broader, more generally applicable statutes.
Lawmakers have put forth various justifications for the moratorium, which will likely fall under closer scrutiny during the Senate's consideration of the bill. For example, lawmakers have pointed to the patchwork of comprehensive data privacy laws among the states, which provide overlapping but not necessarily coextensive scopes of protection, as an impediment to innovation—one that may inhibit broader adoption and development of AI systems in the consumer space. Relatedly, the moratorium may signal lawmakers' desire for a larger federal response to the growing ubiquity of AI throughout the country. The moratorium, which does not ban federal regulation in this space, could be read as an indication that the federal government intends to put forth its own AI regulations, which would preempt individual state laws. Alternatively, the moratorium may act to encourage further AI adoption, not unlike the Internet Tax Freedom Act of 1998, which placed a moratorium on taxation of certain internet and electronic commerce.
Regardless of the underlying motivation for the moratorium, if either version is adopted, it will certainly cause a stir in both the AI and privacy spaces. It may push states to double down on adoption and enforcement of existing data privacy (or other) regulations, as further AI-specific regulations would be heavily restricted. Even if not passed, consideration of the moratorium indicates a likely intent by Congress to become more involved in AI regulation in the future. Our team will continue to closely monitor both federal and state responses as they develop further.
New AI Diffusion Rule on the Horizon: Commerce Secretary Lutnick Spells Out U.S. Strategy
During a June 4, 2025, Senate Appropriations Committee hearing on the Department of Commerce's FY26 budget request, Secretary Howard Lutnick outlined the Trump administration's evolving strategy for AI technology diffusion and export controls. Notably, Lutnick announced that the Commerce Department is actively drafting a replacement for the recently repealed Biden-era AI diffusion rule, which he candidly described as a "very confusing" and "illogical" tiered export restriction system that had been "hastily rushed through" at the end of the previous administration. He specifically cited an instance where the Prime Minister of Poland expressed frustration over their country's categorization under the old rule, which would have heavily restricted Poland from accessing advanced U.S. AI compute capabilities and GPU access and effectively kept it at least a generation behind the frontier of AI development. According to Lutnick, the forthcoming rule will still facilitate U.S. chip exports to allies under stringent conditions, notably requiring that AI chips be run by approved U.S. data center operators with associated cloud infrastructure also managed by U.S. entities, thereby ensuring continued U.S. control over the advanced technology. He anticipates the release of this new rule "pretty soon," though he could not provide more specificity at this time.
Lutnick also defended current U.S. export controls, asserting that they are "protecting cutting-edge technologies and finally protecting critical industries and American innovation" by aggressively targeting illegal technology exports. This stance comes amidst ongoing industry concerns that current U.S. export controls risk weakening U.S. leadership and empowering Chinese AI firms by potentially driving talent to rivals. Lutnick also addressed senators' concerns regarding a new AI deal with the United Arab Emirates, clarifying that the agreement requires the UAE to invest in building data centers in the U.S. if it purchases a significant quantity of chips. He underscored the administration's commitment to ensuring that "more than 50% of compute must be on our shores."
There's a New Sheriff in Town: The Texas Responsible Artificial Intelligence Governance Act
On June 2, 2025, the Texas Legislature passed the Texas Responsible Artificial Intelligence Governance Act, positioning Texas as a leader among states in regulating AI technologies. The Act is currently awaiting the governor's signature by June 22 and is set to take effect January 1, 2026. It seeks to put guardrails in place around AI systems to address growing concerns around the ethical use, transparency, and accountability of this fast-growing technology.
Key Provisions
If enacted, the Act will impose the most comprehensive state framework to date for the responsible deployment of AI. Among its core provisions, the Act requires entities deploying AI systems to conduct thorough risk management programs to identify potential biases and discriminatory outcomes, and it sets boundaries on the use of biometric data. The Act also bans AI tools designed to manipulate human behavior, bans government use of AI to carry out "social scoring" (which classifies people based on certain behaviors or characteristics), and prohibits AI systems from censoring or limiting access to political content or infringing on freedom of expression or association.
The legislation also creates an AI "sandbox" program, a controlled environment where businesses can test AI systems for up to 36 months without being subject to full regulatory compliance or penalties. Additionally, the Act establishes the Texas Artificial Intelligence Council, a 10-member advisory body within the Department of Information Resources tasked with issuing guidance, monitoring compliance, and enforcing penalties for violations.
Takeaways for Texas Businesses
- Texas is poised to become one of the first states with such a sweeping AI regulatory framework, which will likely serve as a model for other states considering similar legislation.
- Businesses operating in Texas should begin preparing for the Act's requirements by evaluating their current AI systems, implementing risk management and transparency measures, and developing internal policies to ensure compliance.
- The Act's broad scope means that organizations across a wide range of industries—not just technology companies—will be affected, particularly those utilizing AI in high-impact areas.
- Early engagement with legal, compliance, and technical teams will be essential to navigate the new obligations and avoid potential enforcement actions once the Act takes effect.
In summary, the Texas Responsible Artificial Intelligence Governance Act represents a significant piece of AI regulation that private and public entities should monitor. The Act underscores that a commitment to responsible AI governance will be critical for organizations as the regulatory environment develops.
Quick Links
For additional insights on AI, check out Baker Botts' thought leadership in this area:
- AI Counsel Code: In the latest episode, "AI Legal Challenges in Fair Use and Model Training," Maggie Welsh and Joe Cahill discuss the latest report from the Copyright Office on fair use in AI model training. The report delves into the use of copyrighted works during AI development and considerations around copyright infringement and fair use. Key topics include data collection, training processes, and the complexities of licensing in AI. The nuanced legal landscape requires careful analysis of specific cases, emphasizing the importance of understanding both the technology and legal precedents.
- Navigating Global Approaches to Artificial Intelligence Regulation: Associate Joe Cahill provides an overview of how artificial intelligence regulation is being approached in the European Union, the United States, and the United Kingdom in this article published in the July-August 2025 issue of The Global Regulatory Developments Journal, Volume 2, Number 4.
- Forging Clarity: A Framework for Navigating AI Regulation: This article by associate Joe Cahill and Partner Rich Harper discusses the AI regulation landscape in the United States, highlighting the emerging regulatory patchwork from various governing bodies, such as Colorado's AI Act and the EU's AI Act, creating uncertainty for businesses.
- Baker Weekend AI Learning Lab: At this year's Baker Weekend, our summer associates dove into the future of law with a hands-on AI Learning Lab. In just 90 minutes, they worked through a full litigation task chain using GenAI tools—learning where AI helps, where it falls short, and why human judgment remains essential. See full post here.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.