Senate Releases Bipartisan AI Roadmap

Holland & Knight




  • The Bipartisan Senate Artificial Intelligence (AI) Working Group on May 15, 2024, released "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate" (Roadmap).
  • The Roadmap comes at the end of months of high-profile forums, classified briefings and other information-gathering efforts with experts and stakeholders and sets the stage for upcoming AI legislation in the Senate.
  • Each of the Roadmap's eight sections contains multiple policy recommendations as detailed in this Holland & Knight alert.

A group of four bipartisan senators known as the Bipartisan Senate Artificial Intelligence (AI) Working Group (Working Group) on May 15, 2024, released "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate"1 (Roadmap). The members of the Working Group are Senate Majority Leader Chuck Schumer (D-N.Y.) and Sens. Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.) and Todd Young (R-Ind.). The Roadmap is the culmination of months of high-profile forums, classified all-senators briefings, listening sessions and behind-the-scenes discussions with experts and stakeholders, and it will set the stage for upcoming AI legislation in the Senate.

Majority Leader Schumer organized the Working Group to identify bipartisan solutions to address the potentially profound economic and national security effects AI will have on the U.S. economy. The Working Group complements the Senate's standing committees, recognizing that AI will impact policy issues across a large number of committee jurisdictions. The Roadmap is designed to identify areas of policy consensus that merit consideration by committees in the remainder of the 118th Congress and beyond.

The Roadmap is divided into eight sections that mirror the topics of the forums the Working Group held:

  • Supporting U.S. Innovation in AI
  • AI and the Workforce
  • High Impact Uses of AI
  • Elections and Democracy
  • Privacy and Liability
  • Transparency, Explainability, Intellectual Property, and Copyright
  • Safeguarding Against AI Risks
  • National Security

Each of the sections of the Roadmap contains multiple policy recommendations, which are detailed below. From advancing a national autonomous vehicle framework to significant new federal investments in AI, the Roadmap identifies key areas that will allow the U.S. to maximize the benefits of AI and minimize the risks, while keeping up with its strategic competitors such as China.

In press releases, a press conference and stakeholder calls, members of the Working Group discussed their hope and expectation that the Senate's standing committees would use the Roadmap as a tool for crafting legislation that could attract broad bipartisan support. Though there may not be time to pass a comprehensive AI bill in the remainder of the 118th Congress, which ends on Jan. 3, 2025, Working Group members expect progress on many of the recommendations and hope to enact several "base hits" – bills from the Roadmap targeting discrete issues signed into law – this year. A number of AI bills are already progressing through the standing committees and approaching markup this summer. Given Majority Leader Schumer's role in organizing the Working Group, it is expected that he will ensure that Congress starts to approve AI legislation in 2024.

Supporting U.S. Innovation in AI

The Roadmap recommends robust federal investment in AI, including ramping up nondefense AI R&D funding to the $32 billion per year proposed by the National Security Commission on Artificial Intelligence (NSCAI) in its final report.2 Within that overall figure, the Roadmap recommends emergency appropriations language to fill the gap, including:

  • a cross-government AI R&D effort, including infrastructure, encompassing the U.S. Department of Energy (DOE), U.S. Department of Commerce (DOC), National Science Foundation (NSF), National Institute for Standards and Technology (NIST), National Institutes of Health (NIH), NASA and other relevant agencies
  • funding CHIPS and Science Act accounts that have not yet been fully funded:
    • NSF Directorate for Technology, Innovation, and Partnerships
    • DOC Regional Technology and Innovation Hubs (Tech Hubs)
    • DOE National Labs through the Advanced Scientific Computing Research Program in the DOE Office of Science
    • DOE Microelectronics Programs
    • NSF Education and Workforce Programs, including the Advanced Technological Education (ATE) Program
  • funding for DOC, DOE, NSF and U.S. Department of Defense (DOD) semiconductor R&D for future AI chips
  • authorizing the National AI Research Resource (NAIRR) by passing and funding the CREATE AI Act (S. 2714),3 as well as expanding programs such as the NAIRR and the National AI Research Institutes
  • funding AI Grand Challenge programs, such as those described in the Future of AI Innovation Act (S. 4178)4 and the AI Grand Challenges Act (S. 4236)5
  • funding NIST infrastructure, including AI testing and evaluation infrastructure and the U.S. AI Safety Institute
  • funding the Bureau of Industry and Security (BIS) to update information technology (IT), hire staff and enhance support for monitoring efforts to ensure compliance with export control regulations
  • funding R&D activities at the intersection of AI and robotics
  • funding a DOD-NIST testbed to develop new materials for advanced manufacturing related to AI, including integration with other technologies such as quantum computing and robotics
  • funding election assistance through Help America Vote Act grants
  • funding modernization of federal service delivery and federal IT infrastructure
  • funding R&D and interagency coordination at the intersection of AI and critical infrastructure, including smart cities and intelligent transportation systems

The Roadmap recommends emergency appropriations priorities in the area of national security, including:

  • funding for National Nuclear Security Administration (NNSA) testbeds and model evaluation tools
  • funding to assess and mitigate AI-enhanced Chemical, Biological, Radiological, and Nuclear (CBRN) threats by the DOD, U.S. Department of Homeland Security (DHS), DOE and other agencies
  • funding for AI-augmented chemical and biological synthesis R&D, with safeguards to reduce the risk of dangerous synthetic materials and pathogens
  • increased funding for AI-related R&D at the Defense Advanced Research Projects Agency (DARPA)
  • development of secure and trustworthy algorithms for autonomy in DOD platforms
  • funding to develop and deploy Combined Joint All-Domain Command and Control (CJADC2) and similar capabilities by the DOD
  • development of AI tools to improve the operation of weapons platforms
  • funding to enhance the storage and use of data from sensors and other sources, including Special Access Programs
  • funding to expand supercomputing and AI capacity within the DOD, including infrastructure, staff and training
  • use of AUKUS authorities for co-development of integrated AI capabilities
  • development of AI-integrated tools to more efficiently implement federal acquisition regulations
  • funding to use AI to optimize DOD logistics, including workflows and predictive maintenance

The Roadmap promotes public-private partnerships, encourages further federal study of AI through federally funded R&D centers (FFRDCs), encourages attention to the needs of startup companies and supports a Comptroller General report to identify federal laws and regulations that affect AI and the ability of companies of all sizes to compete in the space.

The Roadmap encourages standing committees to work with the DOC to increase access to tools for AI companies to use for testing, encourage DOC and other relevant agencies such as the U.S. Small Business Administration (SBA) to meet small businesses' AI-related needs, and clarify that business software and cloud computing services are allowable expenses under the SBA's 7(a) loan program.

AI and the Workforce

The Roadmap notes broad concern about the potential for AI to impact jobs throughout the economy, including displacement of workers. The Roadmap encourages:

  • efforts to ensure consultation with stakeholders as AI tools are developed and deployed
  • specific consultation efforts related to the procurement and use of AI systems by federal agencies
  • development of legislation related to training, retraining and upskilling the private sector workforce to successfully participate in an AI-enabled economy, including incentives for integration, reskilling and retraining through community colleges and universities
  • exploration of the impact of AI on the future of work
  • legislation to improve the U.S. immigration system for highly skilled science, technology, engineering, and mathematics (STEM) workers in support of national security and to foster advances in AI generally

The Working Group recognizes opportunities for AI to improve government service delivery and recommends that committees look for ways to leverage federal recruitment programs to attract AI talent to federal service. The report notes with approval the Workforce Data for Analyzing and Tracking Automation Act (S. 2138),6 which would authorize the Bureau of Labor Statistics (BLS) to record the effect of automation on the workforce and measure those trends over time.

High Impact Uses of AI

The Working Group flags several concerns about AI in the Roadmap, including the possibility that as a "black box," an AI system might not enable a company using it to appropriately abide by existing laws. The Working Group asks standing committees to examine this issue and legislate where gaps may exist, including in areas such as transparency, explainability and testing and evaluation.

The Working Group focuses on the risk of AI directly or accidentally infringing on constitutional rights, imperiling public safety or violating antidiscrimination laws, including the potential for disparate impact and unintended harmful bias, and encourages committees to explore these issues when legislating.

The Working Group encourages standing committees to:

  • review forthcoming agency guidance and determine when an explainability requirement may be needed
  • develop legislation supporting standards for use of AI in critical infrastructure
  • support the Energy Information Administration in including data center and supercomputing cluster use in its voluntary surveys
  • develop legislation ensuring that financial service providers are using accurate and representative data in their AI models and that financial regulators have appropriate enforcement tools
  • investigate AI opportunities and risks in the housing sector, focusing on transparency and accountability
  • ensure appropriate testing and evaluation of AI systems in the federal procurement process and support streamlining the process for AI systems and other software that have met federal standards
  • examine the impact of AI on professional content creators and publishers, particularly local news
  • develop legislation to address online child sexual abuse material (CSAM), including specifically AI-generated CSAM, alongside nonconsensual distribution of intimate images and other harmful deepfakes
  • consider legislation to ensure companies take reasonable steps in product design and operation to protect children from potential AI-powered harms online in the broader context of the mental health impact of social media
  • explore fraud-deterrence mechanisms, including public-private partnerships, with a focus on vulnerable populations including the elderly and veterans
  • continue developing a federal framework for testing and deploying autonomous vehicles across all modes of transportation to stay ahead of strategic competitors such as the Chinese Communist Party (CCP)
  • consider legislation to ban the use of AI for social scoring, noting the need to protect fundamental freedoms in contrast to the technology's extensive use by the CCP
  • review whether other potential uses for AI should be either extremely limited or banned

Within the healthcare sector, the Working Group encourages standing committees to:

  • consider legislation that supports further deployment of AI in healthcare with appropriate safety measures to protect patients, including ensuring consumer protection, preventing fraud and abuse, and promoting the usage of accurate and representative data
  • support NIH in developing and improving AI technologies, focusing on data governance and on making healthcare and biomedical data available for machine learning and data science research while protecting privacy
  • enable the U.S. Department of Health and Human Services (HHS), including the U.S. Food and Drug Administration (FDA) and Office of the National Coordinator for Health Information Technology, to provide a predictable regulatory structure for product developers
  • consider legislation to provide transparency for providers and the public about the use of AI in medical products and clinical support services, including model training data
  • consider policies to promote AI innovation to improve health outcomes and efficiencies, including examining the Centers for Medicare & Medicaid Services' reimbursement mechanisms, as well as guardrails to ensure accountability, appropriate use and inclusive application of AI

Elections and Democracy

In a shorter section that reflects both an absence of consensus and the difficulty of the underlying challenge, the Working Group encourages standing committees and AI developers and deployers to advance effective watermarking and digital content provenance for AI-related election content. The Working Group also encourages AI deployers and content providers to find ways to balance protections from objectively false AI-generated content on one hand and First Amendment rights on the other. The Working Group encourages states to review the U.S. Election Assistance Commission's AI Toolkit for Election Officials7 and the Cybersecurity and Infrastructure Security Agency's Cybersecurity Toolkit and Resources to Protect Elections.8 Members of Congress remain significantly concerned that AI could create false narratives or images in the immediate lead-up to a presidential, congressional or state election and thereby tip the results. Additional scrutiny is therefore expected in this area, both before and, in particular, after any use of AI-generated content aimed at influencing U.S. elections.

Privacy and Liability

Recognizing the rapid pace of technological development in the field and the resulting difficulty of assigning liability to AI companies and users, the Working Group encourages standing committees to assess the need for additional standards to hold AI developers and deployers accountable for harm caused by their products – or to hold users liable for harm caused by their actions. The Working Group also encourages committees to consider enforcement mechanisms. Additionally, the Working Group proposes that the standing committees look for ways to minimize the use of nonpublic personal information being stored in or used by AI systems. In general – and somewhat beyond the scope of an AI Roadmap – the Working Group supports a comprehensive federal data privacy law that would address data minimization, data security, consumer data rights, consent and disclosure, and data brokers.

Transparency, Explainability, Intellectual Property, and Copyright

The Working Group encourages standing committees to:

  • consider developing legislation requiring transparency for AI systems, including best practices for disclosure that a product uses AI
  • evaluate the need for best practices for the level of automation appropriate for a type of task and when a "human in the loop" may be necessary
  • review federal agency requirements for when their employees must be told about the development and deployment of AI in the workplace
  • consider the potential need for transparency related to data sets used to train AI models, including data sets that might contain sensitive personal data or copyrighted material
  • review forthcoming executive branch reports related to digital provenance
  • consider developing legislation incentivizing software and hardware providers to provide content provenance information and consider the need for legislation that requires or incentivizes online platforms to maintain access to that content provenance information, in addition to voluntary efforts, which the Working Group also supports
  • consider the potential need for legislation protecting against the unauthorized use of one's name, image, likeness and voice as it relates to AI, as well as considering the impacts on professional digital content creators, victims of nonconsensual distribution of intimate images, victims of fraud and other stakeholders
  • review reports from the U.S. Copyright Office and U.S. Patent and Trademark Office on how AI impacts intellectual property law and take appropriate action to preserve U.S. leadership
  • consider legislation to create a national AI literacy campaign

Safeguarding Against AI Risks

The Working Group encourages companies to perform detailed testing and evaluation to understand potential harms and not to release AI systems that cannot meet industry standards.

The Roadmap encourages standing committees to:

  • consider a resilient regime that focuses on system capabilities, protects proprietary information and allows for continued innovation; the regime should tie governance efforts to the latest capabilities and respond regularly to changes in the AI landscape
  • support related efforts, particularly the development and standardization of risk testing and evaluation, including red-teaming, sandboxes and testbeds, commercial AI auditing standards, bug bounty programs, and physical and cybersecurity standards
  • investigate the policy implications of different ways to release AI system products, including the spectrum between closed and fully open-source models
  • develop a framework for when pre-deployment evaluation of AI models should be required
  • explore the potential need for an AI-focused Information Sharing and Analysis Center (ISAC) as an interface between commercial AI entities and the federal government to monitor AI risks
  • consider short-, medium- and long-term risks, recognizing that model capabilities and testing and evaluation capabilities will change and grow over time; where testing and evaluation are insufficient to directly measure capabilities, explore proxy metrics
  • develop legislation to advance R&D efforts addressing the risks posed by AI system capabilities, including by equipping AI developers, deployers and users to identify, assess and manage those risks

National Security

The Working Group encourages the Office of the Director of National Intelligence, DOD and DOE to work with commercial AI developers to prevent large language models and other frontier AI models from inadvertently leaking or reconstructing sensitive or classified information.

The Roadmap encourages standing committees to:

  • develop legislation bolstering the use of AI in U.S. cyber capabilities
  • work with the DOD and intelligence community to address federal national security workforce issues, including by developing AI-related career paths and training programs, providing resources and oversight for a digital workforce within the armed services, swiftly handling security clearances with a priority on AI-related clearances, and improving lateral and senior placement opportunities to expand the AI talent pathway
  • assess whether the DOD's policy on fully autonomous lethal weapons systems should be codified or if other measures such as notification requirements are needed
  • consider legislation to bolster efforts to monitor the development of AI and artificial general intelligence (AGI) by adversaries
  • better define AGI, characterize the likelihood of AGI development and the magnitude of risks posed by AGI development, and develop a policy framework based on that analysis
  • explore opportunities for leveraging AI to improve the management and risk mitigation of space debris
  • address, and mitigate where possible, the rising energy demand of AI systems
  • consider the recommendations of the National Security Commission on Emerging Biotechnology and the NSCAI regarding CBRN threats, including as they relate to preventing adversaries from developing an AI-enhanced bioweapon program
  • ensure BIS proactively manages export controls for critical technologies, such as semiconductors, biotechnology and quantum computing, and investigate whether there is a need for new authorities in this area
  • develop a framework for determining when, or if, export controls should be placed on powerful AI systems
  • develop a framework for determining when an AI system should be classified on the grounds that, if it were acquired by an adversary, it would be powerful enough that it would pose a grave risk to national security
  • ensure agencies are able to work with allies and international partners to advance bilateral and multilateral agreements on AI
  • develop legislation to create or participate in international AI research institutes and partnerships
  • develop legislation to expand the use of modern data analytics and supply chain platforms by law enforcement agencies to combat the flow of illicit drugs
  • work with the executive branch to support the free flow of information across borders, protect against the forced transfer of American technology and promote open markets for digital goods exported by American creators and businesses through agreements that also allow countries to address concerns regarding security, privacy, surveillance and competition


1. Bipartisan Senate AI Working Group.

2. National Security Commission on Artificial Intelligence, Final Report.

3. S. 2714.

4. S. 4178, Section 202.

5. S. 4236.

6. S. 2138.

7. AI Toolkit for Election Officials.

8. Cybersecurity Toolkit and Resources to Protect Elections.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
