Last week, Congress voted to pass the "One Big Beautiful Bill Act," codifying President Trump's domestic agenda, but one important measure did not make it across the finish line: the AI regulatory moratorium originally included in the bill. (See our coverage of the demise of the AI moratorium here and of the now-defunct measure here.) The moratorium would have stopped in their tracks the more than 1,000 AI regulatory bills that have been making their way through the legislative processes in state capitals since January, a legislative tsunami amounting to half a dozen new AI-related proposals every single day.
The frenzied legislative activity in 2025 follows a year in which 45 states considered nearly 700 AI bills, with about 20% becoming law.
With the AI moratorium in the rearview mirror, attention now turns to understanding and preparing to implement these new AI laws. The regulatory gold rush represents an unprecedented response to an emerging technology, dwarfing the policymaking pace that greeted previous innovations such as the internet, cloud computing, or social media. The sheer volume signals both lawmakers' urgency to address AI's growing influence and their uncertainty about how to do so effectively. Who could blame them? To be sure, everyone's feeling the hype and urgency around the US maintaining its competitive advantage in this transformative technology on the global stage. But at the same time, any technology whose purported risks include the end of humanity will surely draw the scrutiny of regulators.
A Patchwork of Approaches
The thousand-bill milestone reflects not coordination but chaos. States are pursuing widely different approaches, creating what critics warn could become a regulatory nightmare for businesses operating across state lines.
The proposals fall into several broad categories, each reflecting different concerns about AI's rapid advancement. To help you make sense of the deluge, we offer the following taxonomy:
Transparency requirements dominate the legislative landscape. California's AB 2013, taking effect in January 2026, requires developers of generative AI systems to publish detailed documentation about their training data, a model being copied by numerous other states. Other transparency bills mandate watermarking AI output or disclosing when users are interacting with artificial intelligence rather than humans.
Algorithmic fairness laws represent another major category, largely modeled after Colorado's pioneering AI Act. Effective February 2026, Colorado's law requires AI developers to mitigate bias risks in "consequential decisions" in which AI is a "substantial factor." Texas is the latest state to adopt such a bill, the Texas Responsible AI Governance Act ("TRAIGA"), which will become effective a month before Colorado's much-heralded act. While narrower in ambition than the 2024 draft proposed by Republican state representative Capriglione before that year's national election, TRAIGA still erects formidable guardrails for AI, prohibiting the intentional development or deployment of AI to discriminate, impair constitutional rights, or incite harmful or criminal acts. It also requires businesses to provide the Texas Attorney General, upon request, with "high-level" information regarding their AI systems, including descriptions of training data, performance metrics, and post-deployment monitoring and user safeguards. Similar bills are advancing in Connecticut, Illinois, and other states, though Virginia's governor recently vetoed comparable legislation.
Safety regulations have proved the most contentious. California's notorious SB 1047, vetoed by Governor Gavin Newsom but now being reconsidered, would have required developers of powerful AI models to adopt safety protocols, including an emergency "kill switch": if in doubt, just turn it off. The bill sparked fierce debate between innovation advocates, including the major AI labs, and safety proponents.
Accountability measures round out the major categories, with bills requiring human oversight, governance structures, and liability frameworks for AI systems' outputs.
Key Battlegrounds
Beyond these broad approaches, states are crafting sector-specific rules that reflect AI's expanding reach into every area of daily life.
Employment has emerged as a key battleground, building on New York City's Local Law 144, which, having turned two years old this month, feels to some like a regulatory dinosaur, albeit a prescient one in an area that moves a mile a minute. NYC144, as it's known in the industry, requires automated hiring tools to undergo bias audits. Texas, Connecticut, and Illinois are among the states pursuing similar measures, while some proposals would regulate AI's role in wage-setting and performance evaluation. Notably, the algorithmic fairness laws discussed above, including Colorado's AI Act, typically categorize AI systems deployed in employment settings as high-risk, adding another layer of regulation in an area where federal regulators, such as the Equal Employment Opportunity Commission, have also weighed in.
Healthcare regulations focus on preventing AI systems from impersonating licensed professionals and on requiring safety audits for medical chatbots. Texas and New York are leading efforts to establish licensing requirements for health-related AI applications. Several states, including Kentucky, Mississippi, New York, and Utah, have already passed laws imposing transparency requirements as well as state oversight and enforcement mechanisms. Utah's HB 452, which went into effect in May, is setting an example for other states interested in regulating AI mental health chatbots. These chatbots include not only generative AI used to hold the kinds of conversations a patient would typically have with a licensed therapist but also any generative AI that a reasonable person would believe can provide mental health therapy or help manage or treat mental health conditions.
Education proposals, exemplified by California's AB 1064, seek to create regulatory frameworks specifically for AI products that interact with children, a response to concerns about data privacy and developmental impacts. California's bill, titled the Leading Ethical AI Development (LEAD) for Kids Act, aims to ensure the ethical development and deployment of AI technologies and to protect children's well-being. In some instances, the bill would outright prohibit the development of certain AI systems for kids, including companion chatbots. New York has also passed several recent bills, such as the SAFE For Kids Act, intended to protect children who use addictive technology.
Financial sector businesses are traditionally highly regulated, and the general AI governance laws add to that burden by categorizing AI-driven actions in credit or insurance as "consequential decisions," triggering risk assessments, bias mitigation, and transparency safeguards. Industry-specific laws, such as New York's Assembly Bill A773B, address the use of automated lending decision-making tools by banks, allowing loan applicants to consent to or opt out of such use.
The result is a regulatory landscape that varies dramatically from state to state, with potentially serious implications for businesses trying to deploy AI solutions nationally, not to mention globally.
The Federal Vacuum
This state-level frenzy reflects Congress's failure to establish federal AI standards, leaving individual states to fill the regulatory void. The Commerce Clause was designed to prevent exactly this kind of balkanized approach to interstate commerce, legal experts note. "Fifty different AI regulatory regimes will undermine America's ability to compete with China and other adversaries in the global AI race," warned Kevin Frazier, an AI Innovation and Law Fellow at UT Austin School of Law. He and others advocate for federal preemption legislation to create uniform national standards.
The Trump administration has signaled interest in boosting AI development and maintaining American technological leadership, potentially setting up conflicts with restrictive state regulations.
Innovation vs. Precaution
The legislative surge reflects a fundamental tension between promoting innovation and preventing harm. Critics argue that, given how nascent the technology is, many bills target hypothetical risks rather than documented problems, potentially constraining beneficial AI development before its full potential is realized.
Supporters counter that waiting for harm to occur before regulating is irresponsible given AI's potential for widespread impact. They point to documented issues such as algorithmic bias in hiring and lending as evidence that regulation is already overdue. Moreover, they note that the wave of AI legislation plays out against a backdrop of existing federal and state laws that already govern AI, spanning IP, privacy, fair housing, employment, credit, education, tort law, product liability, and more. The new AI laws, they assert, adapt already-applicable regulatory frameworks to a new nomenclature that includes terms such as "model weights" and "RAG."
The Business Software Alliance, which tracks AI legislation, expects the pace to accelerate further in 2025, with particular focus on high-risk AI applications in Connecticut, Texas, and California.
Looking Ahead
As state legislatures conclude their 2025 sessions, the AI regulatory race shows no signs of slowing. The thousand-bill milestone, reached back in April after just four months, suggests this year could see even more legislative activity than 2024's record-breaking pace.
For businesses, the message is clear: the Wild West era of AI development is rapidly ending, replaced by a complex patchwork of state regulations that will require careful navigation. Whether federal intervention will bring coherence to this chaos — or whether the current fragmented approach will persist — remains one of the biggest unanswered questions in American technology policy.
The stakes couldn't be higher. Get the balance wrong, and America risks either stifling the next great technological revolution or failing to prevent its potential harms. With dozens of new AI bills introduced weekly, lawmakers are writing the rules for a technology that could reshape society — sometimes without fully understanding how these rules would apply to a technology that changes daily before our eyes.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.