10 April 2024

What The New Federal AI Rules Mean For Govt. Contractors




What You Need To Know

  • The Office of Management and Budget has issued long-awaited guidance on how federal agencies can use AI.
  • For companies delivering AI solutions to the government, that guidance raises new considerations including compliance and reporting, minimum practices to address certain risks, interoperability and usage rights, and code and data-sharing.
  • In many cases, agencies are required to adhere to these new guidelines by Dec. 1, 2024.

The Biden Administration recently issued its clearest articulation yet of how federal agencies can use AI—and no surprise: There are big implications for companies delivering AI solutions to federal government customers.

On March 28, the Office of Management and Budget issued memorandum M-24-10 ("OMB AI Memo") mandating responsible AI development and deployment across federal agencies, with a focus on understanding and tackling large societal challenges, such as food insecurity, climate change, and threats to democracy. The OMB AI Memo gives effect to several provisions in Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from October 30, 2023 ("AI EO"), especially Section 10 Advancing Federal Government Use of AI.

Many sections of the OMB AI Memo mandate requirements that only apply to agencies, such as designating a Chief AI Officer ("CAIO"). However, companies delivering or intending to deliver AI-based solutions to federal agencies can learn about—and begin preparing for—requirements that are eventually going to apply to them.

What's New in the Final OMB AI Memo?

Our prior article outlined the OMB's December draft of the memorandum, but the final version issued March 28 puts a stronger focus on using AI for the public good and includes a new section directing agencies to share their AI code, models, and data in a manner that facilitates re-use and collaboration, including with the public.

In the final OMB AI Memo, agencies must often provide a convenient mechanism for individuals to opt out of AI functionality in favor of a human alternative—so long as it doesn't impose discriminatory burdens on access to a government service.

Another addition urges agencies to consider carbon emissions and resource consumption from data centers that support power-intensive AI.

How Contractors Can Prepare for the Coming Requirements

The following summary of key provisions includes expected requirements and suggestions for taking immediate action.

Pay attention to the AI use case. The OMB AI Memo focuses on minimum practices to help manage the associated risk for "rights-impacting AI" and "safety-impacting AI," for which OMB has issued new definitions.

"Rights-impacting AI" refers to AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on any of that individual's or entity's:

  • Civil rights, civil liberties, or privacy, including but not limited to freedom of speech, voting, human autonomy, and protections from discrimination, excessive punishment, and unlawful surveillance
  • Equal opportunities, including equitable access to education, housing, insurance, credit, employment, and other programs where civil rights and equal opportunity protections apply
  • Access to or the ability to apply for critical government resources or services, including healthcare, financial services, public housing, social services, transportation, and essential goods and services

"Safety-impacting AI" refers to AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of any of the following:

  • Human life or well-being on the individual or community level, including loss of life, serious injury, bodily harm, biological or chemical harms, occupational hazards, harassment or abuse, or mental health
  • Climate or environment, including irreversible or significant environmental damage
  • Critical infrastructure, including the critical infrastructure sectors defined in Presidential Policy Directive 21 or any successor directive and the infrastructure for voting and protecting the integrity of elections
  • Strategic assets or resources, including high-value property and information marked as sensitive or classified by the federal government

Some use cases are automatically presumed to fall within these definitions. These are listed in the OMB AI Memo in Appendix I.

Prepare to make your case. Agencies are required to assess whether the AI meets the definitions of safety-impacting AI or rights-impacting AI. For AI that is automatically presumed to be safety- or rights-impacting, the OMB AI Memo permits the agency's CAIO to decide that the AI does not match either definition and is therefore not subject to the minimum practices. That determination can also be reversed, however.

Anticipating those assessments and possible determinations, companies can immediately start working on a range of strategies related to product documentation, descriptions, and acceptable use policies. Agencies are expected to release information about AI use case determinations and waivers no later than Dec. 1, 2024, and companies should monitor for these disclosures.

Prepare to comply with the minimum practices. The OMB AI Memo outlines the minimum practices that agencies must undertake when using applications or components that are deemed safety-impacting AI and rights-impacting AI. Most agencies will be required to adhere to these minimum practices by Dec. 1, 2024. At that time, agencies must stop using any non-compliant AI in their operations and, going forward, must ensure these practices are in place prior to using AI. There are provisions for limited exemptions, exclusions, and waivers.

These minimum practices involve:

  • Completing an AI impact assessment
  • Testing the AI for performance in a real-world context
  • Independently evaluating the AI
  • Conducting ongoing monitoring while using the AI
  • Regularly evaluating risks from using AI
  • Mitigating emerging risks to rights and safety
  • Ensuring adequate human training and assessment
  • Providing additional human oversight, intervention, and accountability as part of decisions or actions that could result in a significant impact on rights or safety
  • Providing public notice and plain-language documentation

For rights-impacting AI, the following are additional minimum practices:

  • Identifying and assessing the AI's impact on equity and fairness, and mitigating algorithmic discrimination when it is present
  • Consulting and incorporating feedback from affected communities and the public
  • Conducting ongoing monitoring and mitigation for AI-enabled discrimination
  • Notifying negatively affected individuals
  • Maintaining human consideration and remedy processes
  • Maintaining options to opt out of AI-enabled decisions

Facilitate agency assessment of the AI solution. Based on the OMB AI Memo's guidance to agencies for procuring AI solutions, here are some tips for preparing for evaluations:

  • Obtain adequate documentation of the AI solution's capabilities
  • Produce guidelines on how the AI solution is intended to be used, including known limitations
  • Have adequate information about the provenance of the data used to train, fine-tune, or operate the AI
  • Be ready to back up claims about the AI offering's effectiveness and the risk-management measures in place
  • Anticipate post-award monitoring and take advantage of incentives to continuously improve the AI solution

Plan for interoperability and multi-cloud. Consistent with federal procurement policy and law, the OMB AI Memo supports promoting competition and avoiding vendor lock-in by encouraging agencies to promote interoperability, such as requiring an AI solution to work across multiple cloud environments.

Expect to grant data use rights. The AI EO and the OMB AI Memo emphasize that data is a critical asset for federal agencies, and the OMB Memo tasks agencies with ensuring contracts for AI:

  • Grant the government sufficient rights to data and improvements to data, including rights to make certain custom code and data for training publicly available
  • Protect federal information used by companies in the development and operation of AI for the federal government such that it "cannot be subsequently used to train or improve the functionality of the vendor's commercial offerings without express permission from the agency"

Expect Agencies to Share AI Code and Data

The final OMB AI Memo adds an emphasis on agencies sharing and collaborating on use of AI. For example:

  • Agencies are encouraged to share and release AI code and models and maintain that code as open-source software on a public repository.
  • Data used to develop and test AI is likely to constitute a "data asset" under the Open, Public, Electronic, and Necessary Government Data Act, potentially requiring those data assets to be released publicly.
  • As noted above, agencies procuring custom-developed AI code or data for training and testing AI should ensure they have rights to share and release that code and data to the public.

Don't forget about other applicable requirements. Over and above the OMB AI Memo requirements, federal agencies are likely to be early adopters of the requirements set forth in the AI EO, such as those related to dual-use foundation models and synthetic content.

Despite the OMB AI Memo calling for harmonization across the agencies and the use of standardized templates and forms, expect agency-specific requirements, especially from those with defense and national security missions. (Use of AI on a national security system is specifically excluded from the OMB AI Memo.)

The OMB AI Memo reminds agencies to procure AI solutions consistently with the Constitution and applicable laws, regulations, and policies, including those addressing privacy, confidentiality, intellectual property, cybersecurity, human and civil rights, and civil liberties.

Companies should plan ahead by mapping existing legal requirements to their AI solution while monitoring for additional requirements, especially those governing generative AI and dual-use foundation models.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
