Judicial Conference Advisory Committee Releases Report On Evidence Rules For AI

Greenberg Traurig, LLP


Greenberg Traurig, LLP has more than 2750 attorneys in 47 locations in the United States, Europe and the Middle East, Latin America, and Asia. The firm is a 2022 BTI “Highly Recommended Law Firm” for superior client service and is consistently among the top firms on the Am Law Global 100 and NLJ 500. Greenberg Traurig is Mansfield Rule 6.0 Certified Plus by The Diversity Lab. The firm is recognized for powering its U.S. offices with 100% renewable energy as certified by the Center for Resource Solutions Green-e® Energy program and is a member of the U.S. EPA’s Green Power Partnership Program. The firm is known for its philanthropic giving, innovation, diversity, and pro bono work. Web: www.gtlaw.com.
On May 17, 2024, the Judicial Conference's Advisory Committee on Evidence Rules released its report on artificial intelligence, which discusses the potential need for modifications to the Federal Rules of Evidence. The committee issued this report after its April 2024 meeting, which featured testimony from several AI experts. While the committee did not identify any new or amended evidentiary rules addressing AI, it noted several areas it will continue monitoring.

During the committee's April 19, 2024, meeting, eight experts presented on AI and machine learning, including computer scientists from the National Institute of Standards and Technology, leaders in AI regulation, and law professors.

The resulting report included four takeaways based on the experts' presentations and the committee's consideration of the testimony.

  1. Whether to amend the Federal Rules of Evidence to ensure that machine learning output is reliable when it is not accompanied by an expert witness. The committee agreed that such a rule was worth considering, recognizing that reliability is a more pressing concern than authenticity for AI output. The committee did not consider or propose specific language, but noted that one potential solution would be a new rule applying Rule 702's reliability standards to the output. The committee also recognized the challenges in drafting such a rule.
  2. Whether a special rule is necessary to authenticate items in the age of "deepfakes."1 The committee did not recommend creating a special authentication rule, but noted that traditional means of authentication may need modification because of the difficulty of detecting a deepfake. Proponents of a new rule argued that current Rule 901(a) sets a low bar for what a party must provide to prove authenticity. The committee declined to recommend a new or modified rule at this time because courts have extensive experience dealing with forged evidence.
  3. Whether a new rule is necessary to address claims that an item is a deepfake. The committee noted that a party must make some initial showing that an item is a deepfake before triggering an inquiry into its authenticity. The committee again pointed to forgeries, noting that courts currently require some showing before inquiring into whether digital or social media evidence has been hacked. The committee declined one expert's proposal for a new procedural rule addressing the burden of proof for moving forward with a deepfake inquiry, finding that the proposed rule, in its present form, imposed too high an initial burden; it remains open, however, to considering a modified version.
  4. Whether validation studies are necessary to introduce machine learning evidence, so that courts and litigants would not need to analyze the underlying AI source code and algorithms. The committee found that additional thought is needed on how courts would conduct and review such validation studies.

The committee concluded its report by noting it would continue to consider whether new rules or amendments are necessary to deal with AI and machine learning evidence. As the committee recognized, rulemaking is a multiyear process, which can be a lifetime in a rapidly developing area of technology like AI.


1. A deepfake is a realistic but fake photograph, audio recording, or video created with AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
