Key Takeaways:

  • The Office of Management and Budget (OMB) published a memorandum for federal agencies to provide guidance in implementing the recent Artificial Intelligence (AI) executive order.
  • The memorandum is focused on three categories: (1) strengthening AI governance; (2) advancing responsible AI innovation; and (3) managing risks from the use of AI.
  • The memorandum outlines rights-impacting and safety-impacting situations where minimum practices for AI must be followed by federal agencies.
  • Interested parties can comment on the draft memorandum through December 5, 2023.

Following the landmark October 30, 2023 executive order on artificial intelligence (AI), the White House Office of Management and Budget (OMB) issued a draft memorandum to federal agency heads on the use of AI within their departments. The memorandum directs agencies to establish new requirements and guidance for AI governance, innovation, and risk management, particularly as they relate to safety and privacy rights.

The memorandum is organized into three categories to help federal agencies begin operationalizing the executive order and these new requirements. The framework is designed to direct agencies in how they use AI and manage its risks across their departments.

Strengthening AI Governance

To improve coordination, oversight, and AI stewardship, federal departments are directed to designate a Chief AI Officer within 60 days of the issuance of the memorandum. This could be an existing official, such as a Chief Technology Officer or Chief Data Officer, or a new position with a primary role in the coordination, innovation, and risk management of an agency's use of AI. Because AI cuts across various technical and policy areas, this individual must maintain awareness of all agency AI activities, identify and remove barriers to the responsible use of AI, and advocate within the agency and to the public on the opportunities and benefits of AI to the agency's mission. The memorandum also directs each agency to convene AI governance bodies and draft compliance plans consistent with federal requirements.

Advancing Responsible AI Innovation

To improve the responsible application of AI, federal departments are required to develop, within one year, a department-wide AI strategy. This strategy will articulate the agency's plan to improve its existing AI infrastructure, AI workforce, its capacity to develop and use AI successfully, and its ability to govern AI and manage risks. The memorandum also provides recommendations to reduce barriers to the responsible use of AI, including barriers related to IT infrastructure and cybersecurity.

Managing Risks from the Use of AI

The memorandum outlines risk management protocols critical to the safe and successful implementation of AI across agencies. OMB's memo requires agencies to follow minimum practices when using AI that could impact the rights or safety of the public.

The memorandum describes various safety-impacting situations where minimum practices must be followed with regard to AI. These include controlling physical movements, such as in human-robot teaming, within a school, medical, or law enforcement setting; applying kinetic force, delivering biological or chemical agents, or delivering potentially damaging electromagnetic impulses; controlling access to or security of government facilities; and the transport, safety, design, or development of hazardous chemicals or biological entities or pathways, among other scenarios.

Rights-impacting situations include fundamental issues like decisions to block, remove, hide, or limit the reach of protected speech, and law enforcement or surveillance-related risk assessments about individuals. More health-specific rights-impacting situations include: decisions regarding medical devices, medical diagnostic tools, clinical diagnosis and determination of treatment, medical or insurance health-risk assessments, drug-addiction risk assessments and associated access systems, mental-health status detection or prevention, systems that flag patients for interventions, public insurance care-allocation systems, and health-insurance cost and underwriting processes. The memorandum also covers decisions regarding access to, eligibility for, and revocation of government benefits or services, from biometrics to the IT systems used to access benefits or detect fraud.

To manage these risks, beginning in August 2024, agencies must document the implementation of minimum practices and follow certain practices before using new or existing covered safety-impacting or rights-impacting AI. These steps include completing an AI impact assessment that documents the intended purpose and potential risks of using such AI, testing the AI's performance in a real-world context, and independently evaluating the AI.

Agencies must also follow certain practices while using new or existing covered AI, which include ongoing monitoring and human review thresholds; taking steps to mitigate emerging risks to rights and safety; ensuring adequate training and oversight of the AI's human operators; and providing public notice and plain-language documentation through the AI use case inventory, which makes documentation of a system's functionality generally accessible to the AI's users and the public.

Next Steps

This memorandum marks another key step in operationalizing the government's approach to AI and rapidly putting it into action. These recent steps from the Biden administration recognize how far AI adoption has advanced across many industries, and the need for the federal government to scale up its understanding and capacity to use AI responsibly and manage its risks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.