The Board Member's Oversight Of AI Risk – Moving From Middle To Modern English

By AlixPartners

"Whan that Aprill with his shoures soote
The droghte of March hath perced to the roote,
And bathed every veyne in swich licour
Of which vertu engendred is the flour,
Whan Zephirus eek with his sweete breeth
Inspired hath in every holt and heeth
The tendre croppes, and the yonge sonne
Hath in the Ram his halve cours yronne,
And smale foweles maken melodye,
That slepen al the nyght with open ye..."

-Geoffrey Chaucer, The Canterbury Tales, 1.1 General Prologue

GenAI Prompt: "In Chaucer's Middle English, please summarize the sentiment on Capitol Hill this spring as it relates to the proliferation of GenAI ..."

GenAI Response: "There are too many press releases to summarize and I have not yet been properly trained in Middle English."

Artificial Intelligence, including advanced forms of Generative Artificial Intelligence ("GenAI"), has been a headline-grabbing topic for upwards of a year and has arguably been the driving force keeping the NASDAQ Composite trading near all-time highs. GenAI can be thought of as "a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on." 1 It is a revolutionary technology akin to the release of mobile telephones in the 1970s or, better still, the introduction of automobiles a mere 500 years after Chaucer penned his literary classic. The applications of GenAI plainly span well beyond helping struggling high school English students achieve their 'Gentleman's C' (or is it now a 'Gentleman's B' with the advent of these wildly powerful technologies?).

Regulators and lawmakers alike have begun to take notice. On the heels of FTC Chair Lina Khan's joint statement on AI with other federal enforcers, 2 the White House signaled its own interest in the summer of 2023, when the Biden-Harris Administration secured voluntary commitments from some of the world's leading technology companies to manage the risks posed by artificial intelligence. Companies like Amazon, Google, Nvidia, Microsoft, and Salesforce committed to ensuring the safety of their artificial intelligence products before public introduction, building systems that put security first, and earning the public's trust before, during, and after the development and release of those products. 3

Fast forward several months, and one might quickly come across the Biden-Harris Administration's Executive Order on artificial intelligence systems (Google's AI-augmented search capabilities might find it even faster). Released on October 30, 2023, the Executive Order:

"...establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more." 4

This represented a natural progression from the voluntary commitments secured months earlier, but it in no way signaled that the regulatory landscape had settled enough for companies to fully grasp how best to mitigate the risks associated with their own use of GenAI technologies. Like the technology itself, we expect the regulatory landscape to continue to evolve.

And on March 7, 2024, U.S. Deputy Attorney General Lisa Monaco addressed attendees at the American Bar Association's 39th National Institute on White Collar Crime and explained that:

"Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike. And compliance officers should take note. When our prosecutors assess a company's compliance program — as they do in all corporate resolutions — they consider how well the program mitigates the company's most significant risks. And for a growing number of businesses, that now includes the risk of misusing AI." 5

Business executives, legal teams, compliance officers, and even corporate board directors will need to be methodical in navigating the risks brought about by their companies' inevitable choice to leverage GenAI in the coming months, years, and presumably decades.

GenAI Prompt: "There are views that companies will be liable for the things their AIs say and do. 6 I'm an independent director at a Fortune 500 company that is seriously considering rolling out multiple GenAI programs in parallel. What types of questions should I be asking our risk management and IT leadership to reassure my fellow board members that we are 'doing enough' to reasonably avoid a worst-case scenario?"

GenAI Response: "There certainly are...the data that I was trained on would suggest conducting a risk assessment that is designed to answer several critical questions...please hold while I synthesize these questions from my training data..."

As the GenAI model so aptly laid out, developing a framework to periodically assess risk levels from the inception of a company's GenAI product development program represents a tangible means of cutting through the complexity that so often clouds risk-based decision making. A risk assessment also provides evidence that the company is focused on measuring and managing its risk.

Risk assessments take many forms, but three critical components, when consistently applied, help compartmentalize a company's often highly complex risk environment and measure progress. A risk assessment requires (1) identification of the inherent risks present in a company's operations, in this case its GenAI program and use cases, (2) an evaluation of how effectively the company's existing safeguards address those inherent risks, and (3) a measurement of the residual risk that remains after those safeguards are applied. For example, a hotel that installs a swimming pool creates inherent risk of drowning; lifeguards, locks, and safety equipment are safeguards against that risk; but some risk will remain regardless of the safeguards applied. More specifically:

  • Inherent risk is the assessed level of raw, untreated risk: the natural level of risk present in a GenAI program before anything is done to reduce the likelihood, or mitigate the severity, of a mishap. Inherent risk is determined by obtaining detailed knowledge of the GenAI program and the broader regulatory environment while assuming that no controls are in place.
  • Safeguard effectiveness is a measure of how well the inherent risk of a company's GenAI program is mitigated by the company's existing safeguards, processes, and procedures.
  • Most importantly from a risk tracking and measurement perspective, residual risk is the amount of risk that remains in a company's GenAI program after inherent risk has been mitigated by the company's existing safeguards. A simple scoring sketch follows this list.
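To make the relationship among these three components concrete, the sketch below scores residual risk as inherent risk discounted by safeguard effectiveness. The 1-to-5 inherent risk scale, the 0-to-1 effectiveness scale, and the multiplicative formula are illustrative assumptions of ours, not a prescribed methodology; a company should calibrate its scales and weightings to its own risk appetite.

```python
# Illustrative sketch only: the scales and the discounting formula are
# assumptions, not a prescribed risk assessment methodology.

def residual_risk(inherent: float, safeguard_effectiveness: float) -> float:
    """Discount an inherent risk score (1-5 scale) by safeguard
    effectiveness (0.0 = no mitigation, 1.0 = fully mitigated)."""
    if not 0.0 <= safeguard_effectiveness <= 1.0:
        raise ValueError("safeguard_effectiveness must be between 0 and 1")
    return inherent * (1.0 - safeguard_effectiveness)

# Example: a use case with high inherent data privacy risk (4 of 5) and
# safeguards judged 60% effective carries a residual risk score of 1.6.
print(residual_risk(inherent=4.0, safeguard_effectiveness=0.6))  # 1.6
```

A multiplicative discount is only one convention; other frameworks subtract weighted control scores instead. What matters for board reporting is that the same convention is applied consistently from one assessment to the next.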

From a risk assessment perspective, breaking risks down into categories and then articulating the ways those risks could materialize provides a way to identify the specific risk events that require management. For example, instead of analyzing the risk of total company dissolution at the hands of a botched GenAI-augmented product rollout, one might instead study specific risk events involving data sourcing, advertising and marketing practices, or licensing that could cumulatively impact a company's products and services risk. Mapping existing and aspirational safeguards to specific risks will help everyone understand how those risks could be managed. Safeguards in a GenAI program setting may relate to content moderation, data privacy and security, usage monitoring and output testing, resourcing, vendor management, and user access rights, among other topics.
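As a simple illustration of such a mapping, the snippet below pairs a few hypothetical risk events with candidate safeguards drawn from the categories named above; the entries are ours and are far less granular than a real risk register would be.

```python
# Hypothetical mapping of GenAI risk events to candidate safeguards;
# the entries are illustrative, not a recommended taxonomy.
safeguard_map: dict[str, list[str]] = {
    "data sourcing": ["vendor management", "data privacy and security"],
    "advertising and marketing practices": ["content moderation",
                                            "usage monitoring and output testing"],
    "licensing": ["vendor management", "user access rights"],
    "prompt misuse": [],  # aspirational: no safeguard mapped yet
}

# Risk events with no mapped safeguard are candidates for remediation.
uncovered = [event for event, safeguards in safeguard_map.items()
             if not safeguards]
print(uncovered)  # ['prompt misuse']
```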

Combining inherent risk category scores with safeguard effectiveness scores yields a residual risk score that can be reviewed in isolation, rolled up into a specific workstream, business unit, or geography, and then compared against previous risk assessment results to identify pockets requiring immediate remediation or, more sanguinely, areas of the company's GenAI risk management program that have improved over time. This also allows for more specific discussions with the board of directors, including around categories of risk such as legal and regulatory environment risk, products and services risk, client and third-party risk, and enterprise consumer risk, among others.
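A minimal sketch of that roll-up and period-over-period comparison follows; the business units, risk categories, and scores shown are hypothetical.

```python
# Hypothetical residual risk scores keyed by (business unit, risk category).
from statistics import mean

current = {
    ("Consumer", "legal and regulatory environment"): 1.6,
    ("Consumer", "products and services"): 2.4,
    ("Enterprise", "legal and regulatory environment"): 0.8,
}
prior = {
    ("Consumer", "legal and regulatory environment"): 2.5,
    ("Consumer", "products and services"): 2.2,
    ("Enterprise", "legal and regulatory environment"): 1.1,
}

# Roll residual scores up to an average per business unit.
units = {unit for unit, _ in current}
rollup = {u: mean(s for (unit, _), s in current.items() if unit == u)
          for u in units}

# Compare with the prior assessment: negative deltas indicate improvement.
deltas = {key: round(current[key] - prior[key], 2) for key in current}
print(rollup)  # e.g. {'Consumer': 2.0, 'Enterprise': 0.8}
print(deltas)
```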

After a rather lengthy hold time, the GenAI model was able to piece together several valuable considerations that should be top of mind for any risk-centric GenAI program discussion involving a company's board members, risk management, and IT executives. We supplemented the questions that the GenAI model provided with our own thoughts, and the combined list is available as an appendix. It can also serve as a set of questions that independent directors may ask of the company's management team. 7

There is a reason the GenAI model was unable to succinctly describe the exact risk assessment approach that is right for measuring a company's GenAI program. But adhering to the principles above when executing a risk assessment will at least ensure that your company is analyzing risk in a methodical and defensible manner, one that also happens to align with the United States Department of Justice's Evaluation of Corporate Compliance Programs. 8

Appendix – Questions an Independent Director May Ask about Managing GenAI Risk

The example questions below were produced by a GenAI model and supplemented with human intervention. Thankfully, the model leveraged for this output was not trained in Chaucer's Middle English and instead answered the prompt in a vintage of the English language that required very little translating by the authors.

  • Was a formal GenAI risk assessment completed?
  • How are we ensuring that the safeguards currently in place are operating as designed? What more is there to do to mitigate risk?
  • What processes are we following to validate GenAI-generated results against established controls? What is the risk of hallucinations occurring?
  • How are we limiting system and user inputs to specific, well-defined pieces of information?
  • What mechanisms do we have in place to prevent unintended or malicious inputs that could impact GenAI outcomes?
  • How are we enhancing GenAI prompts by incorporating our corporate data?
  • What measures do we have in place to safeguard sensitive data used by GenAI?
  • How are we addressing privacy concerns related to data collection, storage, and access?
  • Have we encountered any data poisoning, data leakage, or data integrity attacks at any stage of our GenAI program development or deployment? What was done to rectify the issue and how do we continue to protect the integrity of our model(s)?

Footnotes

1 Massachusetts Institute of Technology, "Explained: Generative AI," November 9, 2023.

2 Federal Trade Commission Press Release, April 25, 2023, and United States Department of Justice Press Release, April 4, 2024.

3 White House Press Release, July 21, 2023, and White House Press Release, September 12, 2023.

4 White House Press Release, October 30, 2023.

5 United States Department of Justice, Office of Public Affairs Press Release, March 7, 2024.

6 Wall Street Journal, "The AI Industry Is Steaming Toward a Legal Iceberg," March 29, 2024.

7 See the appendix for a compendium of questions that independent directors might use to better understand their company's GenAI program risk mitigation efforts.

8 Evaluation of Corporate Compliance Programs – United States Department of Justice Criminal Division, March 2023.

Originally published by Harvard Law School Forum on Corporate Governance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
