26 July 2023

Leading Technology Companies Agree To White House's AI Safeguards

Jones Day


Jones Day is a global law firm with more than 2,500 lawyers across five continents. The Firm is distinguished by a singular tradition of client service; the mutual commitment to, and the seamless collaboration of, a true partnership; formidable legal talent across multiple disciplines and jurisdictions; and shared professional values that focus on client needs.

Under the non-binding Voluntary AI Commitments on "Ensuring Safe, Secure, and Trustworthy AI," the companies pledged to adhere to a set of eight rules focused on ensuring that AI products are safe before introducing them to the public, building systems that put security first, and strengthening the public's trust in these products. Specifically, the companies committed to:

  1. Internal and external security testing of their AI systems before release. This testing will be carried out in part by independent experts and is intended to guard against AI risks in areas such as biosecurity and cybersecurity.
  2. Sharing information across the industry and with governments, civil society, and academia on managing the risks associated with AI, such as by identifying best practices.
  3. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, the core parameters of an AI system. The companies agreed that model weights should be released only when intended and only after security risks have been evaluated.
  4. Facilitating third-party discovery and reporting of vulnerabilities in the companies' AI systems. This commitment is focused on establishing robust reporting mechanisms so that issues can be identified and corrected promptly.
  5. Developing robust technical mechanisms, such as watermarking, to ensure that users know when content is AI-generated. This is intended to promote public trust in AI by reducing the risk of fraud and deception.
  6. Publicly reporting the capabilities, limitations, and areas of appropriate and inappropriate use of the companies' AI systems. This reporting will cover both security risks and societal risks.
  7. Prioritizing research on the societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy.
  8. Developing and deploying advanced AI systems to help address society's greatest challenges, such as cancer prevention and climate change mitigation.

The announcement described the commitments as "intend[ed]... to remain in effect until regulations covering substantially the same issues come into force." The White House also announced that it is working on an executive order and pursuing bipartisan legislation to further regulate AI. Companies should closely monitor these developments, as the Biden Administration has signaled that AI regulation is a key priority.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
