The Situation: Rapid advances in generative artificial intelligence ("AI") have galvanized bipartisan support for a new U.S. legal framework to regulate AI, potentially including creation of a new federal agency.
The Result: In recent Senate committee hearings, key members demonstrated a determination to act quickly, citing what they described as lessons learned from the government's relatively hands-off approach to the early days of social media. The Biden administration has also shown a strong interest in encouraging AI innovation while regulating potential risks.
Looking Ahead: Companies that develop and deploy AI should be prepared for new government attention, potentially including Congressional inquiries, a licensing regime, training and validation standards, and new rules and standards on anti-discrimination.
On May 16, 2023, the Senate Committees on the Judiciary and Homeland Security held hearings with industry experts on the best way to confront the benefits and risks of AI and, in particular, generative AI systems. Generative AI systems generate text, images, or other media in response to prompts and have been widely adopted in recent months due, in part, to their ease of use. Senators praised the innovations sweeping the field and pledged to support the U.S. technological edge—while warning that, without regulation, the proliferation of such systems could pose significant risks, including by spreading false but seemingly genuine text, sound, and images.
Discussion at the hearings focused on three main models of potential AI regulation (distinct from current regulations on the export of AI software and technology—regulations that already exist and are expanding). The first proposed model was an "AI Council," wherein each federal agency would maintain a "Chief AI Officer" ("CAO") who would oversee the agency's internal use of AI tools and the impact of external AI on the agency's functions and duties. Each CAO would communicate with the CAOs of other agencies, creating a regulatory board within the federal government.
The second proposed model was a new monitoring agency that would analyze the state of the industry and set a series of standards and safeguards. Some likened this to an expert-led "independent" agency such as the National Highway Traffic Safety Administration or the Federal Motor Carrier Safety Administration. This notional new agency would focus on setting standards for the uses of AI rather than regulating its creation or substantive capabilities.
The third proposed model would involve the creation of an agency with the ability to license AI broadly and revoke that license if deemed necessary. Witnesses likened this mandate approvingly to the FDA's ability to test and license drugs before they reach the market. This third option drew bipartisan support, including from members who have found themselves far apart on other issues.
Sen. Lindsey Graham (R-SC) weighed in with support for a new licensing regime, noting: "I just don't understand how you can say that you don't need an agency to deal with the most transformative technology ever...." In Sen. Peter Welch's (D-VT) view, "You don't build a nuclear reactor without getting a license. You don't build an AI system [with potentially destructive capabilities] without getting a license that gets tested independently." Sen. Welch concluded that the rapid advances in AI technologies required a decisive response from Congress—which, Privacy, Technology and the Law Subcommittee Chair Sen. Richard Blumenthal (D-CT) averred, had not occurred during the early days of social media, permitting unforeseen harms to occur. Sen. Josh Hawley (R-MO) warned of generative AI's capacity to misinform the public during elections or cause other harmful second-order effects, while Sen. Chris Coons (D-DE) asked: "The fundamental question ... is how [do] you decide whether or not [an AI model] is safe enough to deploy and safe enough to have been built, and then let go into the wild[?]."
This rare degree of bipartisan backing for a new form of regulation suggests that Congress may act. Innovators should recognize that today's relatively laissez-faire environment for assembling data sets and training and deploying AI may not endure, and they should be prepared to engage thoughtfully as a potential new regulatory regime takes shape.
If the proposed AI licensing agency ultimately materializes, companies could be required to prove to a new regulator that the benefits of their AI outweigh any potential harms and that they have implemented appropriate safety precautions and guardrails. These requirements could entail additional costs in developing AI systems, the testing of AI systems by independent experts, and a potentially lengthy approval process to ensure both pre-launch and continued compliance with regulatory standards. With AI advancing rapidly and technical expertise currently in high demand, the creation and operation of this new agency could be a complex endeavor, and its rules could have a transformative effect on some companies and projects.
As the federal government inches toward making this new approach a reality, industry should be prepared to comment on a potential new agency's proposed mandate and rules before they go into effect.
Three Key Takeaways
- Key senators have signaled that they want to act decisively to regulate AI and not leave its development to industry self-regulation.
- This movement toward greater regulatory oversight is bipartisan, increasing the likelihood that there will be successful Congressional action.
- Senators have shown an interest in establishing a new federal agency that would issue (and potentially withhold) licenses for companies to develop and deploy AI systems. Innovators should be prepared to engage with a potential new regulator, comment on its proposed rules, and navigate a potential new licensing process.
Originally published May 2023.