The U.S. Department of Commerce's National Institute of Standards and Technology ("NIST") last week released its Artificial Intelligence Risk Management Framework ("AI RMF 1.0"). NIST describes the framework as a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems; it can be used to contextualize and manage the potential risks of harm posed by AI systems, technologies, and practices in all areas where they may be used.

AI-related risk management is an increasingly important issue. Documented harms traceable to AI technologies have been widely reported and threaten to undermine people's trust in AI. Companies that make AI systems, and those that use AI to automate decisions across their organizations or enterprises, may already have policies and procedures for evaluating general corporate risks from AI. But several states and localities are implementing laws requiring data-centric risk assessments, data privacy impact assessments, and bias audits around data-based technologies like AI (including New York City's Local Law 144, which requires audits by those who use automated employment decision tools), and Congress is poised to consider national data privacy legislation containing economy-wide risk provisions. Companies and organizations that make or use AI should therefore review their approaches to AI risk management to ensure they are comprehensive and comply with applicable laws and regulations.

The AI RMF has been in development by NIST for over a year, following the passage of the National AI Initiative Act of 2020 (part of the National Defense Authorization Act for Fiscal Year 2021). In remarks introducing the AI RMF on January 26, 2023, Don Graves, Deputy Secretary of Commerce, said development of the RMF had been an urgent priority for NIST. Dr. Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and Director of NIST, echoed his sentiments. She said cultivating public trust in AI is key to driving innovation, and AI risk management can reinforce positive practices by helping those who make and use AI systems think critically about the potential impacts of those systems. The RMF, she said, converts AI-specific principles, such as transparency, accountability, and explainability, into practice by providing a consensus-driven methodology for conducting AI risk assessments and a shared lexicon to help communicate risks to others. The AI RMF, it is hoped, will help companies and others operationalize AI governance. Rep. Frank Lucas, Chairman of the House Science, Space, and Technology Committee, along with ranking member Rep. Zoe Lofgren, and Dr. Alondra Nelson of the White House Office of Science and Technology Policy, also addressed the need for the AI RMF at the launch event.

The AI RMF's succinct Govern, Map, Measure, and Manage approach to AI self-governance makes it one of the most useful risk management protocols produced to date. Its flexible approach should appeal to both small and large businesses and others looking for ways to purposefully identify and mitigate the potential risks of harm from AI before harm occurs.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.