The rules of evidence have long served as the gatekeeper for courtrooms, ensuring that juries and courts see reliable information before making decisions. But these rules weren't designed for the challenges presented by evidence generated by artificial intelligence.
A proposed amendment to the Federal Rules of Evidence, Rule 707, could change that.
In June 2025, a committee of the US Judicial Conference voted to publish Rule 707 for public comment—a critical step toward its potential adoption.
At its core, the proposed Rule 707 is designed to ensure that AI-generated evidence is trustworthy. It focuses on a specific scenario: when a party uses AI to perform the work traditionally done by a human expert witness, but presents the findings without one.
Think of an expert who analyzes stock market data to prove a company committed fraud, or one who compares software code to detect copyright infringement. Under the new rule, if a party uses an AI model or tool to perform that same analysis, the AI-generated evidence must meet the same rigorous reliability standards that a human expert's testimony would under the existing Rule 702.
This means the party introducing the AI evidence can't simply present the results. They must be prepared to prove:
- The AI model or tool's underlying data and methods are sound.
- The technology isn't based on biased or incomplete information.
- The AI model or tool's conclusions are accurate and have been validated.
The goal is to prevent parties from using an AI "black box" to generate favorable evidence without having to explain how it was produced. You can cross-examine a human expert on their methods and potential biases; you can't cross-examine an algorithm. Rule 707 attempts to solve this by forcing the user of the AI to "open the hood" of its technology and demonstrate its reliability.
The proposal isn't without its critics. The US Department of Justice, in a lone dissenting vote at a May 2025 committee meeting, argued that the existing rules for expert testimony are already sufficient to handle AI-generated evidence.
Other critics contend the proposed rule is too narrow. It only applies when AI evidence is offered without a human expert. They argue that the risks of an AI model or tool's hidden biases persist even when a human expert presents the findings, a scenario the current draft of Rule 707 doesn't address.
Despite these objections, if Rule 707 is adopted, it will signal a major shift in how AI-generated evidence is treated. Lawyers and their clients will need to be ready to scrutinize an AI model or tool's fundamental design. This will likely lead to high-stakes legal fights over access to proprietary source code and training data, pitting the need for courtroom transparency against corporate secrecy.
Other potential amendments to the Federal Rules of Evidence are under consideration as well, including a rule to address AI deepfakes, but those proposals remain at an earlier stage and haven't yet been voted on by the committee. The era of AI in the courtroom has begun, and the legal system is racing to write the rules.
Rule 707 hasn't been formally posted, so the public comment period hasn't started. But once it's published, those seeking to provide comments can do so through the US Courts' rulemaking process, as explained on its website.
Originally published by Bloomberg Law.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.