ARTICLE
4 November 2024

Biden Administration Issues First-Ever National Security Memorandum On Artificial Intelligence

Mayer Brown


On October 24, 2024, President Biden issued the first-ever National Security Memorandum (NSM) on artificial intelligence (AI), fulfilling another directive (subsection 4.8) set forth in the Administration's Executive Order on AI and outlining how the federal government intends to approach AI national security policy. The NSM also includes a classified annex, which addresses sensitive national security issues. The release of the NSM follows the Biden Administration's other recent national security-focused actions on AI, including the Department of Commerce's proposed rule to institute mandatory reporting requirements for developers of powerful AI models (see our Legal Update on the proposal), and its interim final rule issuing new export controls on advanced semiconductor manufacturing equipment, among other technologies (see our Legal Update on the final rule).

The development of the NSM is based on the fundamental premise that "advances at the frontier of AI will have significant implications for national security and foreign policy in the near future."1 With that in mind, the NSM directs several actions to be taken by the federal government to: (1) ensure that the United States leads the world's development of safe, secure, and trustworthy AI; (2) harness cutting-edge AI technologies to advance the United States' national security mission; and (3) advance international consensus and governance around AI. While the NSM focuses on actions to be taken by the federal government, it promises to have significant implications for private sector entities as they develop and deploy powerful AI models.

In this Legal Update, we summarize key provisions and directives of the NSM.

Summary of the National Security Memorandum

The NSM provides three primary objectives and corresponding directives with respect to AI and national security.

1. Lead the world's development of safe, secure, and trustworthy AI: To maintain and expand US leadership in AI development, the NSM identifies key policies, including: promoting progress and competition in AI development; protecting industry, civil society, academia, and related infrastructure from foreign intelligence threats; and developing technical and policy tools to address potential security, safety, and trustworthiness risks posed by AI. Key directives in this area include:

  • The Department of State (DOS), Department of Defense (DOD), and Department of Homeland Security (DHS) shall use all available legal authorities to attract and facilitate the immigration of foreign persons with relevant technical expertise who would improve US competitiveness in AI and related fields.
  • Several agencies—including the Department of Commerce (DOC), DOD, and Department of Energy (DOE)—shall coordinate their efforts, plans, investments, and policies to facilitate and encourage the development of sophisticated AI semiconductors, AI-dedicated computational infrastructure, and other AI-enabling infrastructure (e.g., clean energy, power transmission, and fiber data links).
  • The Office of the Director of National Intelligence (ODNI), in coordination with other agencies, shall identify critical nodes in the AI supply chain and develop a list of ways in which these nodes could be disrupted or compromised by foreign actors. These agencies shall take steps to reduce such risks.
  • The Committee on Foreign Investment in the United States (CFIUS) "shall, as appropriate, consider whether a covered transaction involves foreign actor access to proprietary information on AI training techniques, algorithmic improvements, hardware advances, critical technical artifacts (CTAs), or other proprietary insights that shed light on how to create and effectively use powerful AI systems."
  • DOC, acting through the AI Safety Institute (AISI) within the National Institute of Standards and Technology (NIST), shall serve as the primary federal government point of contact with private sector AI developers to facilitate voluntary testing of dual-use foundation models. DOC shall establish a capability to lead this testing and issue guidance and benchmarks for AI developers on how to test, evaluate, and manage risks arising from these models. AISI shall submit a report to the President summarizing the findings of its voluntary testing and share results with the developers of such models.
  • The National Security Agency (NSA) "shall develop the capability to perform rapid systematic classified testing of AI models' capacity to detect, generate, and/or exacerbate offensive cyber threats[,]" and DOE shall do the same with regard to "nuclear and radiological risks."
  • DOE, DHS, and AISI shall coordinate to develop a roadmap for evaluations of AI models' "capacity to generate or exacerbate deliberate chemical and biological threats[.]" DOE shall develop a pilot program to establish the capability to conduct classified testing in this area, and other agencies shall support efforts to use AI to enhance biosafety and biosecurity.
  • DOD, DHS, the Federal Bureau of Investigation, and NSA "shall publish unclassified guidance concerning known AI cybersecurity vulnerabilities and threats; best practices for avoiding, detecting, and mitigating such issues during model training and deployment; and the integration of AI into other software systems."

2. Responsibly harness AI to achieve national security objectives: To further integrate AI into US national security functions, the NSM identifies key policies, including adapting partnerships, policies, and infrastructure to enable effective and responsible use of AI; and developing robust AI governance and risk management policies. Key directives in this area include:

  • DOD and ODNI shall establish a working group to address issues involving procurement of AI by DOD and Intelligence Community (IC) elements. The working group shall provide recommendations to the Federal Acquisition Regulatory Council (FARC) regarding changes to existing regulations and guidance, in order to accelerate and simplify the AI procurement process.
  • DOD and ODNI shall engage with private sector stakeholders, including AI technology and defense companies, to identify and understand emerging AI capabilities.
  • Heads of agencies shall monitor, assess, and mitigate risks directly tied to their agency's development and use of AI, including risks tied to physical safety, privacy, discrimination and bias, transparency, accountability, and performance.
  • Heads of agencies that use AI as part of a national security system (NSS) shall issue or update guidance on AI governance and risk management for NSS.

3. Foster a stable, responsible, and globally beneficial international AI governance landscape: US international engagement on AI "shall support and facilitate improvements to the safety, security, and trustworthiness of AI systems worldwide; promote democratic values, including respect for human rights, civil rights, civil liberties, privacy, and safety; prevent the misuse of AI in national security contexts; and promote equitable access to AI's benefits." To that end:

  • DOS, in coordination with other agencies, shall "produce a strategy for the advancement of international AI governance norms in line with safe, secure, and trustworthy AI, and democratic values, including human rights, civil rights, civil liberties, and privacy."

Conclusion

The scope of the NSM is not limited to the implementation of AI in the national security context. It also treats an expansive AI supply chain—including not just semiconductors and computing equipment, but also energy and power generation—and the commercial development and use of AI as vital to US national security. With that framing in mind, the NSM has significant implications not just for AI developers and defense contractors, but also for other sectors, such as energy and infrastructure. In addition, the NSM makes clear that federal national security policy for AI is likely to implicate a broad range of issues in the years ahead, including such diverse topics as immigration, foreign investment, federal research, public-private collaboration, government contracting, and supply chain security.

Footnote

1 See the White House Fact Sheet on the NSM.

Visit us at mayerbrown.com

Mayer Brown is a global services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown (a Hong Kong partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) and non-legal service providers, which provide consultancy services (collectively, the "Mayer Brown Practices"). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC ("PKWN") is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Details of the individual Mayer Brown Practices and PKWN can be found in the Legal Notices section of our website. "Mayer Brown" and the Mayer Brown logo are the trademarks of Mayer Brown.

© Copyright 2024. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.
