Generative Artificial Intelligence and Corporate Boards: Cautions and Considerations

Executive Summary

Generative AI (i.e., AI creating original content using machine learning and neural networks) has captivated people everywhere, eliciting a range of responses, from doomsday warnings of machines rendering humans extinct to rosy dreams where machines possess magical properties. In corporate boardrooms, however, a more sober conversation is occurring. It seeks a practical understanding of how boards might evaluate this powerful, but error-prone, new tool, and comes with both cautions about its downsides and considerations for potential upsides.

Companies are racing to harness the benefits of generative AI while developing policies that protect against reputational and regulatory risks and create a clearer role for boards of directors. The generative AI industry continues to debate and refine its offerings as well, which have become more effective with each iteration. Policymakers are weighing in with a flurry of regulatory initiatives and recommendations amid concern about the ethical implications and other risks of widespread adoption of this new tool.

In this Legal Update, we offer corporate boards insight about generative AI along with practical cautions, noting both its perils and promise. We also touch on current regulatory initiatives and legal issues for directors.

Regulatory Initiatives in the United States and Across the Globe

Generative AI has been the subject of multiple regulatory and political initiatives worldwide, focused on the potential risks of AI use and on striking a balance among innovation, accountability, and transparency. While the United States does not yet have a comprehensive legal framework for the regulation and oversight of AI, legislative efforts around AI indicate an increasing drive for Washington to assume a significant role in regulating AI. For example, in the United States:

  • the White House issued a fact sheet outlining a series of executive actions addressing generative AI, including a blueprint for an AI "bill of rights," and its Office of Science and Technology Policy issued a request for information on oversight of generative AI systems
  • the Department of Commerce's National Institute of Standards and Technology (NIST) released a framework for voluntary use that promotes the incorporation of trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, notably suggesting that companies "establish policies that define the artificial intelligence risk management roles and responsibilities . . . including board of directors . . ."
  • the Federal Trade Commission (FTC), the Justice Department's Civil Rights Division and the Equal Employment Opportunity Commission issued a joint statement focusing on generative AI's risks of bias
  • the FTC has also separately warned that certain generative AI usage could violate federal laws the FTC enforces
  • widely publicized hearings on generative AI were recently held before the Senate Judiciary Committee and the House Judiciary Subcommittee on the Courts, Intellectual Property and the Internet and Subcommittee on Cybersecurity, Information Technology and Innovation, providing an opportunity to discuss trends, implications, and risks associated with AI and potential regulatory and oversight frameworks
  • state and local government initiatives are underway nationwide, including in California, Colorado, Illinois, Vermont, Washington, and New York City

Outside the United States, wide-ranging regulatory initiatives are also being considered.

This intense global focus on the potential uses and misuses of generative AI, and the related responsibilities and obligations, points toward the need for corporate boards to establish policies and processes for generative AI risk management while evaluating how generative AI may properly be used to gain strategic and competitive advantages.

Evolving Scope

Generative AI produces content based on natural language inputs, such as memos, queries, or prompts. Output varies in quality, accuracy, and objectivity.

The more widely available, popular generative AI tools tend to be designed for general audiences. At this point, many lack the technical specifications and precision that companies or professional groups will find desirable, from the relevant databases and guardrails to the depth of analysis, tone, diction, and references to authority.

Some industries are likely to be touched by the technology in more obvious ways than others: publishers and software firms, for instance, possibly more at the moment than building contractors or mining companies. Oversight will vary correspondingly, as will required training, supervision, and restrictions on permissible uses.

Many companies are developing policies and procedures specifically applicable to the use of generative AI by officers and employees. They are updating their corporate policies to address concerns about potential risks and harms in the context of generative AI, such as bias/discrimination, confidentiality, consumer protection, cybersecurity, data security, privacy, quality control, and trade secrets.

Director Duties and Recommended Precautions

Generative AI does not change the bedrock fiduciary duties of corporate directors, and using or otherwise incorporating AI into board decision-making is certainly no substitute for the traditional means of discharging them. For example, directors must, consistent with their duty of care, act in an informed manner, with requisite care, and in what they, in good faith, believe to be the best interests of the corporation and its shareholders. They must act loyally, including by protecting the confidentiality of corporate information.

If generative AI evolves into a tool that poses challenges to corporate policy or effectiveness or creates material risk, it is reasonable to assume that the related oversight function would fall within the fiduciary duties of corporate boards. That would require the board to exercise good faith and act with reasonable care in seeking to ensure that management maintains appropriate systems of control over generative AI.

For public companies using generative AI in financial reporting and securities filings, boards may need to confirm with management that the company appropriately uses generative AI's capabilities in connection with its internal control over financial reporting as well as disclosure controls and procedures.

As generative AI tools proliferate and are incorporated into search and data products already in wide use, directors should consider both (1) the degree to which information they receive from management, auditors, consultants, or others may have been produced using generative AI and (2) whether they can and should use generative AI tools as an opportunity to support their duties and activities as directors. For both purposes, directors must be mindful, like company officers and employees, of risks associated with the company's use and reliance on generative AI. Three of the key considerations are:

First, generative AI tools are machines, not people. They have no knowledge, expertise, experience, or qualifications in any field whatsoever, not least corporate governance or business administration. Unlike directors, generative AI owes no fiduciary duties and faces no liability for breach.

Second, generative AI results may be inaccurate, incomplete, or biased (with bogus AI information or output commonly called "hallucinations"). Generative AI can be a valuable tool to generate ideas, provide generally available factual information, spot issues, and create lists. But, at least at present, there are limits on these tools' capabilities. Accordingly, outputs must be scrutinized and tested for trustworthiness: for things like accuracy, completeness, lack of bias, and explainability (i.e., the ability to explain how and why the AI made a particular recommendation, prediction, or decision). Only then should the output be drawn upon or incorporated into the activity, discussion, or material of interest.

Third, generative AI processes and retains user interactions as training data, which is intended to improve the quality of its output in future versions but also implicates privacy and cybersecurity risks and considerations, including the unintended disclosure of confidential information and other data. Corporate directors must therefore take care to ensure that generative AI is not used in ways that could compromise such confidentiality or create legal exposure.

For example, in the case of confidential or sensitive company information, it is possible that input data or documents (and related output) might leak and be incorporated into the wider generative AI model, exposing them to being machine-read, trained on, or synthesized into future generative AI models. Accordingly, directors should consider some practical self-limitations, whether or not formalized in corporate policies. For example:

  • not mentioning the company name or other company-specific or identifying information in inputs or chats with generative AI
  • not mentioning any non-public or proprietary information or specific individual names or data in inputs or chats with generative AI
  • reviewing generative AI output for accuracy and completeness, and not simply passing it on without thorough review and any necessary modifications
  • using generative AI output internally rather than distributing it publicly
  • when appropriate, identifying the generative AI-produced component of any work product that involved the use of generative AI

Of course, this practical guidance for directors may evolve as market practices and company generative AI policies evolve.

For now, boards of companies that have not yet done so may want to ask management for a high-level initial report on generative AI and discuss the subject with management, preferably with a designated management point person for AI oversight, usage, and risk management. The goal would be to assess the extent to which generative AI tools create opportunities (competitive, innovative, or strategic) and/or present risks (operational, compliance, or financial).

To explore these possibilities, a board might begin by asking management to put the topic on an upcoming board meeting agenda and to present both management's views and perspectives from outside advisors. As part of the process, directors could learn about generative AI firsthand by posing a series of questions about these issues to a generative AI tool, consistent with the foregoing common-sense precautions; the responses may add to the framework for discussion.

For US companies that have made significant, or "mission-critical," investments in AI, boards should consider being able to demonstrate board-level oversight of AI risks. This is particularly important in light of potential claims under the standards of the Caremark case, which involve directors' failure to oversee corporate compliance risks. While Caremark claims have traditionally been difficult to bring, the ability of some recent claims to survive motions to dismiss highlights the ongoing significance of this theory for directors responsible for overseeing critical company compliance operations. Therefore, even if a company is not in breach of its regulatory obligations, directors could still face legal claims if they were not sufficiently attentive to important "mission-critical" risks at the board level.

As such, and without detracting from the suggestions above, for companies where AI is associated with mission-critical regulatory compliance or safety risk, boards might want to consider:

  • demonstrating board-level responsibility for managing AI risk (whether at the level of the full board or of existing or new committees), including making AI matters a regular board agenda item and reflecting their consideration in board minutes
  • the need for select board-member AI expertise or training (using external consultants or advisors as appropriate)
  • designating a senior management person with primary AI oversight and risk responsibility
  • relevant directors' familiarity with company-critical AI risks and the availability/allocation of resources to address AI risk
  • regular updates/reports to the board by management on significant AI incidents or investigations
  • proper systems to manage and monitor compliance/risk management, including formal and functioning policies and procedures (covering key areas like incident response, whistleblower processes, and AI-vendor risk) and training

Boards should use these management discussions and reports to help determine the appropriate frequency and level of board engagement and oversight, which may range from periodic board-only reviews to more regular discussions, including discussions involving one or more board committees.

Visit us at mayerbrown.com

Mayer Brown is a global services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown (a Hong Kong partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) and non-legal service providers, which provide consultancy services (collectively, the "Mayer Brown Practices"). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC ("PKWN") is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Details of the individual Mayer Brown Practices and PKWN can be found in the Legal Notices section of our website. "Mayer Brown" and the Mayer Brown logo are the trademarks of Mayer Brown.

© Copyright 2023. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.
