The National Institute of Standards and Technology (“NIST”) issued its initial draft of the “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on this draft via email at AIframework@nist.gov through April 29, 2022.  Feedback received on this draft will be incorporated into the second draft of the framework, which will be issued this summer or fall.

In particular, NIST has requested feedback on the following questions:

  • Whether the AI RMF appropriately covers and addresses AI risks, including with the right level of specificity for various use cases;
  • Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape;
  • Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks;
  • Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated;
  • Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42;
  • Whether the AI RMF is in alignment with existing practices and broader risk management policies;
  • What might be missing from the AI RMF; and
  • Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.

The current draft of the AI RMF notes that “AI trustworthiness and risk are inversely related,” and as such, organizations should aspire to develop and deploy AI systems with characteristics of “trustworthiness.”  The AI RMF uses a “three-class taxonomy” to define the characteristics of a “trustworthy” AI system:

  • Technical characteristics refer to factors that are “under the direct control of AI system designers and developers” and are generally measurable through statistical methods.
  • Socio-technical characteristics refer to factors relating to how AI systems are perceived in society.  As such, these characteristics are not quantifiable through an automated process and require “human judgment” to measure.
  • Guiding principles refer to broader, qualitative social norms that should inform the way that AI systems are developed and deployed.

Looking ahead, after incorporating comments, the final version of the AI RMF will include three sections that highlight specific actions organizations can take to manage AI risks:

  • The “Core” section, which is included in this draft, describes a broad series of actions that all organizations can take to manage AI risks.
  • The “Profiles” section, which is not included in this draft, will highlight case studies of managing AI risk in specific contexts.  NIST is actively seeking “contributions of AI RMF profiles” during this comment period that it could include in the next draft of the AI RMF.
  • The “Practice Guide,” which is not yet published and will be posted separately online, will include additional risk management examples and practices.  NIST is currently seeking comments on the types of practices and standards that should be included in this Practice Guide.
