On Thursday, April 13, Senate Majority Leader Chuck Schumer (D-NY) announced that he is working with stakeholders on a new legislative framework to regulate artificial intelligence (AI), paired with bolstered oversight efforts. The Majority Leader has spent recent months soliciting feedback on the proposal from industry experts, and the push comes as lawmakers grow increasingly concerned about the Chinese government's recent release of its own approach to AI regulation.

The effort, expected to span multiple congressional committees, is centered on four guardrails: "Who," "Where," "How" and "Protect." The first three guardrails aim to "inform users, give the government the data needed to properly regulate AI technology, and reduce potential harm," while the final guardrail "will focus on aligning these systems with American values and ensuring that AI developers deliver on their promise to create a better world."

Staff from the Majority Leader's office have compared the push to last Congress's effort to pass the CHIPS and Science Act (P.L. 117–167). In the coming weeks, Leader Schumer expects to refine the AI framework through conversations with industry, government officials, academics and advocacy groups.

Lawmakers continue to explore other tailored legislative reforms for AI, including those with a defense focus. Over the last month, the Senate Armed Services Committee's (SASC) Cybersecurity Subcommittee convened two hearings addressing AI, including one examining the state of AI and machine learning applications to improve U.S. Department of Defense (DoD) operations. During that hearing, Chair Joe Manchin (D-WV) and Sen. Mike Rounds (R-SD) outlined the need to examine legislative solutions that would ensure cybersecurity protections in AI platforms and set guidelines for how DoD uses AI. To inform future legislation, the lawmakers asked the testifying witnesses, including representatives from Palantir, Shift5 and RAND Corporation, to share related recommendations as soon as possible.

Further, Reps. Jay Obernolte (R-CA) and Jimmy Panetta (D-CA) have reintroduced the AI for National Security Act (H.R. 1718), which clarifies and codifies DoD's authority to procure AI-based endpoint security tools to improve the cyber defenses of its systems.

Outside of congressional efforts, the Federal Trade Commission (FTC) continues to signal growing interest in regulating AI systems, bringing AI-focused cases and engaging in related rulemaking and market studies. In August 2022, the FTC announced an advance notice of proposed rulemaking (ANPRM) requesting public comment on the prevalence of commercial surveillance and data security practices that harm consumers. In particular, the ANPRM contains a section on automated systems that inquires about the prevalence of algorithmic error. Since the public comment period closed in November 2022, staff have continued to review the comments received.

Most recently, the FTC issued guidance cautioning companies against making false or misleading claims to the public regarding the capabilities of their AI products. Further, at the FTC's 2023 Annual Antitrust Enforcers Summit, Chair Lina Khan outlined the need to protect competition in the emerging market for AI tools, and the agency has also announced the creation of an Office of Technology, whose staff bring deep expertise across a range of specialized fields, including AI.

The U.S. Department of Commerce has also homed in on these issues, recently releasing a formal public request for comment on accountability measures for AI, including whether potentially risky new AI models should go through a certification process before they are released. The request "focuses on self-regulatory, regulatory, and other measures and policies designed to provide reliable evidence to external stakeholders" or provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy." Written comments must be received on or before June 12, 2023.

Federal regulators and Congress will continue to closely scrutinize AI and other automated tools across web and mobile platforms and to take action to ensure their responsible use. The Akin cross-practice AI team continues to monitor forthcoming congressional, administrative, private-stakeholder and international initiatives in this area.