On March 28, 2024, the Office of Management and Budget (OMB) released Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (Memo), updating and finalizing OMB's November 2023 proposed memorandum of the same name. The Memo directs agencies "to advance AI governance and innovation while managing risks from the use of AI in the Federal Government." In the Memo, OMB focuses on three major areas: strengthening AI governance, advancing responsible AI innovation, and managing risks from the use of AI.

Scope

The Memo addresses only a subset of AI risks that are directly tied to agencies' use of AI products—those that threaten the safety and rights of the public due to reliance on AI outputs in agency decision-making or actions. The Memo does not address issues that are present in any automated or software systems regardless of whether AI is used (enterprise risk management, information resources management, privacy, accessibility, federal statistical activities, IT, or cybersecurity), and it does not supersede any more general policies on these matters that may also apply to AI. The Memo also does not apply to AI in National Security Systems.

Strengthening Governance

The Memo outlines how agencies will be responsible for managing their use of AI. All agencies will be required to designate a Chief AI Officer (CAIO) and convene senior officials to coordinate and govern issues raised by the use of AI. The CAIO will be responsible for coordinating agency use of AI, promoting AI innovation, managing risks from the use of AI, and carrying out the agency's other AI responsibilities. Agencies must also submit a compliance plan and an AI use case inventory.

Advancing Responsible AI Innovation

The Memo encourages responsible advancement of AI innovation within federal agencies. Each agency will be responsible for identifying and removing barriers to the responsible use of AI and for maturing AI integration throughout the agency. This will include improving IT infrastructure so that it can handle AI training and inference; developing adequate infrastructure and capacity to share, curate, and govern agency data for use in AI modeling; updating cybersecurity practices to account for AI; and integrating the "potential beneficial uses of generative AI in their missions."

Agencies are also instructed to prioritize recruiting, hiring, developing, and retaining talent in AI and AI-enabling roles. This should include designating an AI Talent lead and providing employees with resources for AI training and development.

The Memo notes the importance of AI sharing and collaboration in advancing innovation. Agencies are required to proactively share their custom-developed code and models as open source software on public repositories when possible, or to share the portions of their code and models that can be released if the whole cannot. They will also be required to release the data used to develop and test their AI products, where permissible. When procuring custom AI code, training data, or enrichment of existing data, agencies are also encouraged to obtain the rights necessary for sharing and public release of the procured products and services.

Finally, agencies are instructed to harmonize AI management requirements across agencies to create efficiencies and opportunities for sharing resources. At a minimum, this will include sharing templates and formats, best practices, and technical resources, and highlighting examples of successful AI use within the agency.

Managing Risks from the Use of AI

The Memo's third area of focus is improving AI risk management in government agencies, with particular attention to so-called "safety-impacting" and "rights-impacting" uses of AI. The Memo requires all agencies that use safety- and rights-impacting AI products to implement required risk management practices, and to terminate noncompliant uses of AI, by December 1, 2024. The Memo provides limited exclusions and allows extensions for agencies that cannot meet the December deadline.

Required Practices for All Safety- and Rights-Impacting AI

Under the Memo's risk management requirements, before any federal agency can use safety- or rights-impacting AI, it must complete an AI impact assessment. The assessment must:

  1. state the intended purpose of the AI and its expected benefit;
  2. identify the potential risks of using the AI and any mitigation measures beyond the minimum practices outlined in the Memo; and
  3. evaluate the quality of the data used in the AI design and development.

Agencies must also test the AI's real-world performance to ensure that it will work for its intended purpose. If the AI's expected benefits do not outweigh its risks, even after attempts to mitigate those risks, OMB is clear that agencies should not use the AI.

After an agency begins using a safety- or rights-impacting AI product, the agency must conduct ongoing monitoring (including human reviews), regularly evaluate risks, and mitigate any emerging risks. The Memo mandates that agencies ensure their staff are adequately trained to assess and oversee the AI, and that they provide additional human oversight and accountability when the AI cannot be permitted to act because a rights- or safety-impacting issue requires human mitigation. Agencies are required to provide timely public notice and plain-language documentation for any safety- or rights-impacting AI in use, preferably before the AI takes an action that affects an individual.

Additional Practices for Rights-Impacting AI

The use of any rights-impacting AI will require additional safeguards. Before implementing any rights-impacting AI, agencies must first identify and assess the AI's impact on equity and fairness and mitigate any algorithmic discrimination that is present. Specifically, the Memo mandates that agencies:

  1. identify in the AI impact assessment when the AI is using data that contains information about federally protected classes (e.g., race, age, sex);
  2. analyze whether the AI, in its real-world context, results in significant disparities in the program's performance across demographic groups;
  3. mitigate the disparities that perpetuate discrimination; and
  4. cease use of the AI for agency decision-making if the agency cannot mitigate the risk of discrimination against protected classes.

The Memo also requires that agencies consult affected communities and the public and incorporate their feedback on the use of the AI. OMB is clear that if, in assessing this feedback, an agency determines that the use of the AI causes more harm than good, the agency should stop using the AI.

After implementing any rights-impacting AI, the Memo directs agencies to conduct ongoing monitoring and mitigation of discrimination. If mitigation is not possible, agencies are required to safely discontinue use of the AI functionality. Agencies must also notify individuals when the use of AI results in an adverse decision against them. In such cases, the agency is required to provide timely human review and, where appropriate, a remedy for individuals who wish to appeal or contest the AI's negative impact. Agencies must also allow individuals to opt out of AI-enabled decisions in favor of human review, and must provide an appeal process for individuals negatively affected by the use of AI.

Managing Risks in Federal Procurement of AI

The Memo includes additional guidance for the responsible procurement of AI. First, agencies are encouraged to ensure that procured AI complies with all laws and regulations concerning privacy, intellectual property, cybersecurity, and civil rights and liberties. Agencies are also expected to ensure transparent and adequate AI performance from any vendor. In support of this requirement, the Memo recommends that agencies obtain adequate documentation to assess the AI's capabilities and known limitations; obtain information regarding the data used to train, fine-tune, and operate the AI; regularly evaluate federal contractors' claims regarding the effectiveness of their AI offerings; consider contract provisions that incentivize continuous improvement of AI; and require post-award monitoring of AI.

The Memo also encourages agencies to promote competition in federal AI procurement. Agencies are encouraged to obtain adequate data rights, including rights to any improvements to the data, to allow the agency to continue design, development, testing, and operation of the AI system. The Memo requires agencies to ensure that AI developers and their vendors are not relying on test data to train AI systems.

When procuring generative AI, the Memo encourages agencies to include risk management requirements. As generally required when procuring goods or services, the Memo encourages agencies to consider an AI system's impact on the environment, including carbon emissions and resource consumption from supporting data centers.

Definitions and Examples of Safety- and Rights-Impacting AI

The Memo defines safety-impacting AI as AI "whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety" of (1) human life, (2) the climate or environment, (3) critical infrastructure, or (4) strategic assets or resources. Appendix I of the Memo further describes AI purposes that are presumed to be safety-impacting, including:

  • Safety-critical functions of dams, electrical grids, traffic control, fire safety systems, nuclear reactors
  • Physical movements of robots or robotic systems
  • Autonomously or semi-autonomously moving vehicles
  • Controlling industrial emissions and environmental impacts
  • Carrying out the medically relevant functions of medical devices
  • Controlling access to or security of government facilities

The Memo defines rights-impacting AI as AI "whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect" on (1) civil rights, (2) equal opportunity, or (3) access to critical government resources or services. Appendix I further describes AI purposes that are presumed to be rights-impacting, including:

  • Blocking, removing, hiding, or limiting protected speech
  • Law enforcement contexts including risk assessment, identification, tracking, or monitoring of individuals
  • Education contexts including plagiarism detection, admissions, or disciplinary decisions
  • Replicating a person's likeness or voice without express consent
  • Housing contexts including tenant screening, home valuations, mortgage underwriting, or insurance
  • Determining the terms or conditions of employment, including screening, hiring, promotion, performance management, or termination

Key Takeaways

OMB's final Memo continues the trend toward increased AI accountability and the implementation of risk-based frameworks for AI assessment and governance. The Memo is a significant step forward, marking an increase in the sophistication of the government's approach to managing its use of AI systems. It can be expected to influence new regulations around the development, procurement, and use of AI more generally, at both the state and federal levels.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.