AI Governance - Specific Takeaways For Companies Regarding The US Senate Judiciary Hearings On May 16, 2023

Mayer Brown


On Tuesday, May 16, 2023, the US Senate Judiciary Subcommittee on Privacy & Technology held its first hearing on Artificial Intelligence.1 The hearing, called "Oversight of AI: Rules for Artificial Intelligence", featured witness testimony from Sam Altman, the CEO of OpenAI, Christina Montgomery, the chief privacy officer at IBM, and Gary Marcus, a professor emeritus at New York University. The hearings provided an opportunity for business and industry leaders to address trends, implications and risks associated with artificial intelligence (AI) with a view to assessing the nature and scope of potential regulatory and oversight frameworks.

The hearings come amid growing legislative and industry concern about AI and discussion of how best to inform and protect the public given the proliferation and evolving shape of AI technology, while recognizing its potential benefits and practical uses. They are also part of broader federal legislative efforts that signal an increasing drive for Washington to assume a significant role in regulating AI, with the aim of striking a balance between fostering innovation and ensuring accountability and transparency.

One key takeaway from the hearing was the emerging consensus among Senators on both sides of the aisle about the risks of uncontrolled AI and about the need for, and general direction of, future AI regulation. There was general recognition that AI technology is not "new," but that the commercialization of generative AI (i.e., AI that creates original content using machine learning and neural networks) represents a new milestone warranting greater oversight and legislation. One core concern was that the recent releases of powerful generative AI models have increased, by orders of magnitude, the number of users of AI and, thus, the risk to both users and society. The Senators repeatedly stated that Congress' previous failure to regulate social media in a timely manner has caused great harm, and that a similar failure with AI could cause still greater harm.

Testimony at the Senate Hearing

Key Witness Testimony:

The witness testimony is available online. Some of the notable areas are discussed below:

The CEO of OpenAI called for greater regulation of AI (striking a different tone from tech executives who might ordinarily be expected to resist federal regulation), including creating a new safety and licensing agency, developing compliance and safety standards, and requiring independent audits of companies. He said: "OpenAI believes that regulation of AI is essential, and we're eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology's benefits. . . . We are actively engaging with policymakers around the world to help them understand our tools and discuss regulatory options. For example, we appreciate the work National Institute of Standards and Technology has done on its risk management framework, and are currently researching how to specifically apply it to the type of models we develop . . . [It] is vital that AI companies—especially those working on the most powerful models—adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the US government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements."2

The IBM executive emphasized the need for targeted, incremental regulation based on specific AI uses rather than blanket regulation of AI technology (an approach that might be more consistent with some of the European proposals). While she argued for clearer regulatory guidance for AI developers, she did not think a separate AI agency needed to be created; rather, she pointed to the existing agencies that have already signaled their intention to enforce in this area. Finally, she emphasized that companies should act "now" and not wait for legislation before adopting the trustworthy AI practices that IBM has embraced, such as testing for accuracy, bias, transparency and explainability (i.e., the ability to explain why an AI system reached a particular decision, recommendation, or prediction). She acknowledged, however, that IBM's models are primarily oriented toward business-to-business (B2B) applications rather than consumer-facing ones, and she advocated for a "reasonable care" standard to establish accountability for AI systems. She also described IBM's internal governance framework, which includes a designated AI officer, an ethics board, impact assessments, disclosure of data sources, and user notification when engaging with AI.

The NYU professor focused on the need for the scientific community to be engaged in developing standards for testing AI, and proposed increased funding for research on AI safety, both in the near term and over the long run. He also called for a global approach and pointed out some of the harms that can occur absent AI regulation, such as cybersecurity risks and, in extreme cases, AI encouraging users to commit suicide. He argued that the current court system may not be equipped to effectively regulate AI technology and stressed the importance of robust oversight by a governing agency.

Observations from Senate Questioning:

A number of Senators and the witnesses made notable comments on how regulation and oversight of AI should proceed. Senator Durbin suggested that there should be a cabinet-level position for AI, while Senator Graham asked for guidance on what an AI-focused agency (if one is created) should undertake and have responsibility for. Senator Graham also suggested that there should be a "licensing requirement" for use of AI, at least under certain circumstances, and that the license could be revoked if standards are not met. Going one step further than the Senators, the NYU professor advocated for a safety-review body akin to the FDA, including the ability to "recall" AI products.

There also appeared to be consensus around the need for "testing" of AI, and general concern from Senator Coons, Senator Booker, and Senator Hawley around impersonation and manipulation as it relates to voting.

Senator Coons also talked about the development of an AI "ethics board" to determine whether AI is undermining democratic institutions or faith in democratic systems and values.

The NYU professor called for the development of an international body, similar to the United Nations Educational, Scientific and Cultural Organization (commonly referred to as UNESCO) or the Conseil Européen pour la Recherche Nucléaire (translated as "European Council for Nuclear Research" and referred to as CERN), to collaborate on and coordinate global governmental policy with regard to AI.

As an aside, it is worth noting that several days following the hearings, on May 19, 2023, the leaders at the Group of Seven (G7) nations agreed to establish a "Hiroshima Process" to evaluate and regulate AI's impact on society, complete with cabinet-level discussions and presentation of results by year's end.3 Like the Senators at the AI hearing days before, the G7 leaders expressed concern about potential harmful impacts in the absence of coordinated regulation, given the speed of AI development.

Aside from discussion of how oversight of AI should be managed, the Senate hearing also focused on specific risks and concerns associated with the use of AI. For example, Senator Blackburn questioned whether intellectual property (IP) owners would receive royalties or otherwise be paid for the use of their data in training models. While the OpenAI CEO indicated that a framework could be developed to compensate IP owners, it was noted that other developers and users of AI may not agree to such a framework. Senator Padilla focused on fairness, harms and equitable treatment of diverse groups, and also raised questions concerning the fairness of AI across different languages.

Senator Welch focused on the harms to be addressed, including those related to privacy, bias, IP and misinformation. Senators Blumenthal and Booker raised antitrust risks and the concentration of AI development capabilities in a relatively small number of companies.

Senator Blumenthal also focused on transparency and accountability, and urged caution regarding any limits on legal liability exposure that might be afforded to businesses. Senator Hawley asked whether companies should be subject to suit, and whether it would be simpler to create a private right of action for AI issues so that lawsuits could be brought. The IBM executive observed that the law does not distinguish between persons violating laws with or without tools, and that use of AI is not a shield under any law.

In addition to discussing the appropriateness of establishing a comprehensive national privacy law that would safeguard essential data protections for AI, the hearing touched on the potential regulation of social media platforms that benefit from exemptions under Section 230 of the Communications Decency Act of 1996, including in the context of protecting children. Senator Durbin questioned the OpenAI CEO about his prior statement (during a Kara Swisher podcast) that Section 230 does not apply to generative AI. The OpenAI CEO acknowledged that while he made that statement, he does believe some legal framework, similar to Section 230, will need to be developed to protect AI companies. Senator Graham returned to this point in his questioning to confirm whether the OpenAI CEO was indeed stating that Section 230 does not apply to generative AI; the OpenAI CEO said that he was not claiming Section 230 protection. Notably, as CEO of a non-profit, the CEO of OpenAI did not purport to represent all commercial developers and users of AI, and the position that Section 230 does not apply to generative AI is likely to be strongly contested by the commercial AI business community.

Companies should anticipate that the question of whether Section 230 applies to AI development will be the subject of litigation for many years to come. While the exchanges regarding Section 230 were relatively incidental, especially in light of the full hearing to be devoted to IP rights in June/July 2023, Section 230 may become particularly significant in terms of potential litigation risk. It also raises questions about how larger tech companies that are using large language models can rely on the same Section 230 framework they have used successfully in past litigation, i.e., lower court decisions that the US Supreme Court did not disturb in its May 18, 2023 decision declining to address the scope of Section 230.4 Companies should plan to keep apprised of the future subcommittee hearings (referenced at the end of this alert) for a more robust discussion of this issue.

To recap, broad areas with at least some bipartisan support included:

1. Regulatory Oversight

  • An independent agency/commission to oversee AI.5
  • An AI Ethics Board to ensure AI encourages faith in democratic values, similar to the way China has "embedded its values as part of its recent AI measures."6

2. AI Testing, Regulatory Licenses, and Audits

  • Pre-deployment testing.7
  • Post-deployment audits, monitoring and testing of AI for accuracy, children's safety, cyber-resilience, among other areas.8
  • Licensing of AI, and revocation of licenses where AI fails post-deployment testing and monitoring thresholds.9

3. Company Risk Assessments and Mitigation

  • Risk assessments to determine areas for risk mitigation.10
  • Focus areas for pre- and post-deployment risk assessments to include the following, among others:11
    • Equitable treatment of diverse groups/avoiding bias12
    • Privacy risks if training data includes personal information13
    • Misinformation/hallucination/inaccurate AI14
    • IP owner rights when protected data is used to train models15
    • Potential use of AI to impersonate voice, likeness16
    • Cybersecurity
    • National security17
  • Ability of AI to shape public opinion and/or influence elections/voting.18

4. Transparency to Citizens When Interacting With AI and Explainability of Models

  • Self-disclosed "nutrition labels" that would explain, based on risk assessments, how (or how not) to rely on AI in certain contexts.19
  • Notice to individuals when they are interacting with an AI.20
  • Need for trust and transparency pertaining to AI.21

5. International Leadership

  • Global coordination and leadership by the United States, to avoid circumstances where US companies are inhibited by regulation to a greater degree than non-US competitors, while non-US actors persist with unapproved practices.22

The Senators expressed a desire to regulate generative AI now and not be "too late" to provide guidance.23

Takeaways and Practical Considerations

Like the EU and many other jurisdictions around the world, the United States is zeroing in on regulation for generative AI. Businesses should take heed of the more than 1,000 pages of draft and existing legislation around the world, including in US states that have already published AI-specific laws and guidance.24

Based on the suggestions and themes that arose during the hearing, companies could consider the following steps as part of demonstrating the existence of a "trusted" AI program reflecting best practices and the direction of anticipated regulatory developments:

  • First, create AI leadership and document an AI governance program. As part of that program, understand whether and how your service providers are using AI and how that might impact your compliance burdens and IP rights.
  • Second, prior to deploying AI, understand the use cases for the business.
  • Third, inventory the training data and segregate it so you can document what data the AI is trained on (a minimal illustrative sketch of such an inventory appears after this list).
  • Fourth, consider whether synthetic data could be used in lieu of personal information.
  • Fifth, consider collaborating with IP owners for use of training data.
  • Sixth, consider "watermarking" IP and other protected information, such as personal information, in case opt-outs are received.
  • Seventh, conduct a risk assessment and determine whether the AI presents any risks of harm (such as inaccuracies or bias that would impact credit, health, children or otherwise adversely impact disadvantaged/vulnerable groups). If so, the company could consider baking in risk mitigation steps ahead of deployment of the AI.
  • Eighth, even after risk mitigation has been achieved, consider whether a "nutrition label" warning of possible harms makes sense. A statement that provides guidance on how the AI should (and should not) be relied upon may insulate the company from future claims that users were not aware of potential risks, such as hallucination or bias, when they knowingly used the AI.
  • Ninth, conduct bias assessments to determine whether protected groups are experiencing greater risks of unfair/disparate treatment, especially in the areas of credit lending, housing, employment, education and healthcare (see the illustrative bias-check sketch after this list).
  • Tenth, post-deployment, companies might want to continuously monitor the AI's behavior and determine whether new risks have been introduced into the algorithms through the iterative process of using the AI.
  • Eleventh, consider including notice to individuals when they are interacting with AI. Notice requirements already exist under California's chatbot disclosure law, which went into effect in 2019, and notice and transparency are hallmarks of the recommendations that came out during the hearing, as well as of various frameworks introduced around the world, including the draft Artificial Intelligence Act in Europe.
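
To make step three concrete, the following is a minimal sketch, in Python, of what a documented training-data inventory might look like. The directory layout (data/licensed, data/public_web) and field names such as may_contain_pii and license_status are hypothetical illustrations, not a prescribed format; any real inventory should reflect the company's own data sources, legal review categories and record-keeping systems.

```python
"""Illustrative sketch only: catalogue training files with hashes and
provenance flags so the training data can be documented and segregated."""

import csv
import hashlib
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class DataRecord:
    path: str              # location of the training file
    sha256: str            # content hash, so later audits can verify integrity
    source: str            # e.g., "licensed-vendor", "public-web", "internal"
    may_contain_pii: bool  # flag for privacy review / synthetic-data substitution
    license_status: str    # e.g., "licensed", "pending-review", "opt-out-received"


def inventory(data_dir: Path, source: str, may_contain_pii: bool,
              license_status: str) -> list:
    """Hash and catalogue every file under one segregated data directory."""
    records = []
    for file in sorted(data_dir.rglob("*")):
        if file.is_file():
            digest = hashlib.sha256(file.read_bytes()).hexdigest()
            records.append(DataRecord(str(file), digest, source,
                                      may_contain_pii, license_status))
    return records


if __name__ == "__main__":
    # Hypothetical segregated folders; adjust to the company's own layout.
    manifest = (inventory(Path("data/licensed"), "licensed-vendor", False, "licensed")
                + inventory(Path("data/public_web"), "public-web", True, "pending-review"))
    with open("training_data_manifest.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(DataRecord.__dataclass_fields__))
        writer.writeheader()
        writer.writerows(asdict(r) for r in manifest)
    print(f"Files catalogued: {len(manifest)}")
```

Segregating data this way also supports steps four through six, since records flagged for personal information or unresolved licenses can be substituted with synthetic data or removed if an opt-out is received.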
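
Similarly, for step nine, the snippet below is a rough sketch of a disparate-impact screen based on the "four-fifths rule" heuristic used in US employment contexts: a group is flagged for review if its favorable-outcome rate falls below 80% of the most favored group's rate. It assumes a pandas DataFrame of logged decisions with hypothetical column names (group, approved); the 0.8 threshold is a screening heuristic rather than a legal standard, and real assessments should be scoped with counsel.

```python
"""Illustrative sketch only: four-fifths-rule screen for disparate impact."""

import pandas as pd


def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     threshold: float = 0.8) -> pd.DataFrame:
    """Compare favorable-outcome rates across groups against the most favored group."""
    rates = df.groupby(group_col)[outcome_col].mean()   # selection rate per group
    ratios = rates / rates.max()                         # ratio vs. most favored group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "flag_for_review": ratios < threshold,           # potential disparate impact
    })


if __name__ == "__main__":
    # Hypothetical decision log: 1 = favorable outcome (e.g., loan approved).
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    print(disparate_impact(decisions, "group", "approved"))
```

Flagged groups would then feed into the risk-mitigation and documentation steps above rather than serve as a conclusion on their own.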

It is worth noting that even without specific regulatory obligations, US companies that have made significant—or "mission-critical"—investments in AI should consider board-level oversight of AI risks. This is particularly important due to potential claims similar to the Caremark case, which involve directors' failure to oversee corporate compliance risks.25 While bringing Caremark standard cases has traditionally not been easy, recent instances where such claims have survived motions to dismiss highlight the ongoing significance of this claim for directors responsible for overseeing critical company compliance operations. Therefore, even if a company fulfills its regulatory obligations, directors can still face legal claims if they were not sufficiently attentive to important "mission-critical" risks at the board level.

As such, and without detracting from the 11-step suggestions above, for companies where AI is associated with mission-critical regulatory compliance/safety risk, boards might want to consider: (a) showing board-level responsibility for managing AI risk (whether at the level of the full board or existing or new committees), including making AI matters a regular board agenda item shown as having been duly considered in board minutes; (b) the need for select board member AI expertise or training and/or a designated management person with primary AI risk responsibility; (c) relevant directors' familiarity with company-critical AI risks and the availability/allocation of resources to address AI risk; (d) regular updates/reports to the board by management on significant AI incidents or investigations; and (e) proper systems to manage and monitor compliance/risk management, including formal and functioning policies and procedures (covering key areas like incident response, whistleblower processes and AI-vendor risk) and training.

Conclusion

It is anticipated that companies worldwide will spend approximately $154 billion on AI this year.26 At the same time, companies must contend with growing regulatory and reputational risks due to concerns about facial recognition, credit algorithms, hiring and counterparty selection tools, and other AI systems, particularly around the subject of bias.

The potential mechanisms for regulation of AI that were discussed during the Senate hearing are synergistic with frameworks that have been contemplated outside of the United States. Companies should proactively take stock of these global trends and develop better governance processes now, as they build systems, rather than wait for these laws to go into effect. It may be difficult—and expensive—to play "catch-up" and retroactively document training data and risk mitigation techniques for AI tools once regulation and legislation are in place.

Companies should be aware that the Senators repeatedly emphasized the need to avoid repeating the perceived "mistake" of not regulating social media. They expressed a desire to take a different approach—by adopting early regulation—for AI.

More hearings and legislation will follow, including with respect to competition/antitrust concerns, intellectual property, national security and the possible creation of a new AI agency.

Footnotes

1. Dominique Shelton Leipzig, cybersecurity and data privacy partner at Mayer Brown, attended the Senate Judiciary hearing in person in Washington, DC on May 16, 2023.

2. OpenAI CEO's testimony at pages 11-12.

3. G-7 Leaders Agree to Set Up 'Hiroshima Process' to Govern AI (May 20, 2023).

4. See e.g., The United States Supreme Court's recent decision in Gonzalez v. Google (May 18, 2023) declining to narrow the scope of Section 230.

5. Senator Welch said he thought an independent agency/commission was "essential" and indicated that he would be reintroducing the Digital Commission Act in 2023 (first introduced in 2022). Senator Graham and Senator Coons suggested an independent agency was warranted. Senator Blumenthal thought any agency would need to be well-funded to support enforcement. Senator Hawley seemed to push for a private right of action to allow individuals to sue for AI violations.

6. See video of Senator Coons' remarks, recorded at the May 16, 2023 hearing, at 1:48:24-1:49:05.

7. Senator Graham.

8. The OpenAI CEO's testimony (at pages 5-6, and 8) was well-received by the Senators who often nodded approvingly.

9. Senator Graham.

10. Senator Kennedy.

11. Senator Welch in questioning the OpenAI CEO.

12. Senator Padilla, Senator Booker, Senator Welch.

13. Senator Hawley, Senator Welch and Senator Blackburn.

14. Senator Coons, Senator Welch.

15. Senator Hawley, Senator Blackburn, Senator Welch.

16. Senator Blackburn, Senator Coons.

17. Senator Blumenthal (closing the hearing by referencing future hearings regarding AI and National Security), Senator Graham (military application of generative AI needs to be explored), Senator Padilla (referencing DHS hearings on generative AI and impact on homeland security).

18. Senator Hawley, Senator Klobuchar (referencing Real Political Ads Act legislation that would require disclosure if ads were generated by AI); and Senator Coons.

19. Senator Blumenthal.

20. Senator Hawley.

21. IBM Chief Privacy & Trust Officer, Senator Blumenthal, Senator Hawley, Senator Klobuchar (referencing her and Senator Coons' proposed legislation titled the Platform Accountability and Transparency Act).

22. Senator Durbin, Senator Graham, Senator Coons.

23. Senator Durbin, Senator Graham.

24. See e.g., (1) California's BOT Act; (2) Colorado's (CO SB 113) law establishing a task force to consider whether AI should be studied; (3) Illinois' (IL HB 53) law which provides that employers that rely solely upon artificial intelligence to determine whether an applicant will qualify for an in-person interview must gather and report certain demographic information to the Department of Commerce and Economic Opportunity; (4) Vermont's VT H.B. 410, which creates an Artificial Intelligence Commission to support the ethical use and development of artificial intelligence in the State, relates to the use and oversight of artificial intelligence in State government; (5) Washington state's law (WA SB 5092) appropriating budget for the chief information office to convene a working group on how best to review automated decision making systems before they are deployed and periodic audits; and (6) Section 5-301 of the NY City municipal law prohibiting the use of AI for employment decisions without providing notice and conducting a bias audit. See also the European Parliament's Internal Market Committee and the Civil Liberties Committee's May 11, 2023 amendments to the European Council's December 2022 Draft EU AI Act, which were not made public until May 16, 2023 (the same day as the Senate Judiciary Committee hearing). See also, the French Data Protection Authority's "AI Action Plan" released on May 16, 2023, the same day as the Senate Judiciary Subcommittee Hearings in the United States.

25. In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).

26. Worldwide Spending on AI-Centric Systems Forecast to Reach $154 Billion in 2023, According to IDC (March 7, 2023).

Visit us at mayerbrown.com

Mayer Brown is a global services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown (a Hong Kong partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) and non-legal service providers, which provide consultancy services (collectively, the "Mayer Brown Practices"). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC ("PKWN") is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Details of the individual Mayer Brown Practices and PKWN can be found in the Legal Notices section of our website. "Mayer Brown" and the Mayer Brown logo are the trademarks of Mayer Brown.

© Copyright 2023. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.
