This post discusses the aspects of Jonathan Fisher KC's independent review of Disclosure and Fraud Offences that address the applications of advanced technology and AI in disclosure.
On 20 March 2025, the Home Office published the outcome of Part 1 of Jonathan Fisher KC's independent review of Disclosure and Fraud Offences in a report titled "Disclosure in the Digital Age" (the "Report") (please see our earlier briefing on the Report here). The Report includes 45 recommendations, designed to assist in the development of a modern disclosure regime that embraces technology in order to minimise needless administrative burdens on law enforcement agencies.
In this briefing, we focus on the sections of the Report that examine the role of advanced technology and artificial intelligence ("AI") in material management and disclosure, and the associated recommendations from the Report.
Applications of advanced technology and AI in disclosure
Document Review
Most law enforcement agencies use well-established material management and eDiscovery platforms to assist users with reviewing digital files. Standard functions include, for example, word searches and the application of date ranges to filter material. The Report notes that law enforcement agencies are capitalising on the recent development of AI powered tools to review investigative material more efficiently, particularly in the following ways:
- Material Prioritisation: used to search large data sets and produce a prioritised list of files, from most to least likely to be relevant (for example, OpenText Axcelerate, in use by the Serious Fraud Office ("SFO")). Following manual review of files in that priority order, the reviewing officer's decisions as to which files are relevant are fed back into the tool for continuous learning and increased accuracy in future prioritised lists (a simplified sketch of this feedback loop follows this list). The Report describes issues identified by the SFO in the process of configuring OpenText Axcelerate for its use, meaning that searches have not always returned all expected results. The software has since been reconfigured to address the issue, and changes to the programme have been made to safeguard against this in the future. The Report therefore underlines the importance of regular maintenance of AI tools and rigorous performance evaluation.
- Quality Assurance: used alongside existing quality assurance mechanisms, AI tools can pick up on discrepancies and errors. For example, where an officer has decided a file is not relevant but the tool suggests that it may be, the file is flagged for further manual review (this check also features in the sketch below).
- Concept Groups: machine learning functionality which can organise material into concept groups based on key themes (such as cash, money, fees etc.). Each concept group has keywords that give an indication as to why those documents have been grouped together (see the second sketch below). The Report does not offer detail as to the type of cases on which this may be used, though we note that concept groups can be used to help: (i) identify documents that are relevant to the case or contain privileged information; and (ii) during early case assessment, quickly assess the scope and nature of the documents, guiding the review strategy.
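To make the prioritisation and quality assurance mechanics concrete, the following is a minimal sketch of a continuous-learning review loop. It is not OpenText Axcelerate's implementation; the corpus, labels and model choice (TF-IDF features with logistic regression from scikit-learn) are our own illustrative assumptions.

```python
# Illustrative sketch only: a continuous-learning prioritisation loop.
# The documents, labels and model are hypothetical stand-ins, not the
# SFO's or OpenText Axcelerate's actual configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "invoice for consultancy fees paid offshore",    # seed: relevant
    "office party catering order",                   # seed: not relevant
    "transfer instruction to the offshore account",
    "minutes of the board meeting approving the fees",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

# Seed the model with an initial batch of human relevance decisions.
labelled = {0: 1, 1: 0}  # document index -> decision (1 = relevant)
model = LogisticRegression().fit(X[list(labelled)], list(labelled.values()))

# Prioritised list: unreviewed items ranked most-to-least likely relevant.
scores = model.predict_proba(X)[:, 1]
queue = sorted((i for i in range(len(documents)) if i not in labelled),
               key=lambda i: scores[i], reverse=True)

# Feedback loop: each new human decision is fed back and the model is
# refit, increasing the accuracy of future prioritised lists.
labelled[queue[0]] = 1
model = LogisticRegression().fit(X[list(labelled)], list(labelled.values()))

# Quality assurance: flag files marked not relevant by the officer but
# scored as likely relevant by the tool, for further manual review.
flagged = [i for i, decision in labelled.items()
           if decision == 0 and scores[i] > 0.5]
```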
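The concept-grouping functionality can be illustrated in the same spirit. Below is a hedged sketch using simple k-means clustering over TF-IDF features, with the highest-weighted centroid terms standing in for the explanatory keywords; commercial platforms will use considerably more sophisticated techniques.

```python
# Illustrative sketch of concept groups: cluster documents by theme and
# surface keywords explaining each group. Hypothetical data; real
# material management platforms use more sophisticated ML.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "cash payment of fees into the client account",
    "fees and money transferred as cash",
    "site visit scheduled for the warehouse premises",
    "warehouse premises inspection and site report",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vec.get_feature_names_out()

for group in range(km.n_clusters):
    # Top-weighted centroid terms act as the group's indicative keywords.
    top = km.cluster_centers_[group].argsort()[::-1][:3]
    members = [i for i, c in enumerate(km.labels_) if c == group]
    print(f"Concept group {group}: "
          f"keywords={[terms[t] for t in top]}, documents={members}")
```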
Scheduling
The scheduling process (the generation of schedules of unused material to be provided to the defence, and the scheduling of sensitive material) can be particularly burdensome in complex cases with a large volume of digital material. Two approaches are currently being trialled by law enforcement agencies to streamline it:
- Metadata schedules – These schedules use standard non-AI software functions to generate a schedule that includes only the metadata (i.e. data that describes other data, such as document author(s), sent date and recipients; a minimal sketch follows this list). These are already being used in large cases (as noted in the HM Crown Prosecution Service Inspectorate's report on SFO disclosure), and could bring vast efficiencies if implemented more widely in high-volume cases.
- AI-generated written schedules – These schedules use generative AI to 'read' documents and produce a schedule of written descriptions of each item (see the second sketch below). Software of this nature has been piloted on cases by HMRC. However, the Report highlights potential issues with AI-generated written schedules, including: (i) hallucinations; and (ii) the challenge for an AI model of extracting all salient pieces of information in the same way as a human with a wider knowledge of the case.
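As a minimal sketch of the first approach, the following shows metadata fields being pulled into a tabular schedule using standard, non-AI functions. The item records are hypothetical stand-ins for what an eDiscovery platform would extract from the underlying files.

```python
# Minimal sketch of a metadata-only schedule: no AI involved, just
# standard software functions writing descriptive fields to a table.
# The records below are hypothetical stand-ins for extracted metadata.
import csv

unused_material = [
    {"item": "UM-001", "author": "A. Smith", "sent": "2024-03-01",
     "recipients": "B. Jones"},
    {"item": "UM-002", "author": "C. Patel", "sent": "2024-03-04",
     "recipients": "A. Smith; B. Jones"},
]

with open("unused_material_schedule.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["item", "author", "sent", "recipients"])
    writer.writeheader()
    writer.writerows(unused_material)
```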
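The second approach can be sketched as follows, assuming an OpenAI-style chat completion API; this is not HMRC's piloted software, and the model name and prompt are purely illustrative. The instruction not to speculate gestures at, but does not solve, the hallucination risk the Report identifies.

```python
# Hedged sketch of an AI-generated written schedule entry. Assumes the
# openai Python client and an OPENAI_API_KEY in the environment; the
# model and prompt are illustrative, not HMRC's piloted configuration.
from openai import OpenAI

client = OpenAI()

def describe_item(item_text: str) -> str:
    """Draft a one-sentence schedule description of an item."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("Describe this document in one neutral sentence "
                         "for a schedule of unused material. Do not "
                         "speculate beyond the text provided.")},
            {"role": "user", "content": item_text},
        ],
    )
    return response.choices[0].message.content
```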
Redaction
There is ongoing work in (or funded by) the Home Office to develop tools that can perform redaction of both textual and audiovisual material. Time savings of up to 80% are estimated when compared with the tools currently in use. For textual material, the basic operation can be sketched as below.
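The following is a deliberately simplistic illustration of textual redaction, replacing pattern-matched personal data with a marker. The tools the Report refers to are far more capable; the patterns here are rough, hypothetical examples.

```python
# Simplistic sketch of textual redaction via pattern matching. Real
# tooling (including the Home Office-funded work) is far more capable;
# these patterns are rough, illustrative examples only.
import re

PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),        # UK phone numbers (rough)
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),          # dd/mm/yyyy dates
]

def redact(text: str) -> str:
    """Replace any span matching a sensitive-data pattern."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("Contact j.doe@example.com or 07123456789, DOB 01/02/1990."))
```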
Recommendations
The Report recommends the creation of a new Criminal Justice Digital Disclosure Working Group, with members from all relevant parties, including the judiciary, responsible for exploring off-the-shelf technological solutions. Given the current high cost of such AI tools, a stringent cost-benefit analysis will be required. As the technology evolves, the Report envisages the combination of material management software with law enforcement investigative tools, thereby minimising the total number of separate digital tools required to carry out disclosure. Central procurement would help ensure that the tools employed are of a similar standard across agencies, as well as delivering the financial and infrastructure benefits of economies of scale.
As well as examining the accuracy of the tools when managing and identifying material, it is recommended that the working group also consider the security of the tools. The Report notes the concern that large language models ("LLMs"), which represent the most popular mainstream AI tools available today, often do not keep a user's 'input' private. Where a public LLM is used to analyse data or information, that material may be ingested by the model and could be extracted or viewed by other users. Furthermore, many of the material management software tools on the market are cloud-based. The Report raises the concern that, without sufficient stress-testing of these tools and appropriate mitigations, law enforcement agencies will expose themselves to data breaches and data loss. These considerations will be familiar to commercial/private sector users, a number of whom will already be using strategies to overcome such privacy concerns, including private LLMs offering enhanced data security measures (such as zero-retention modes and robust encryption) and anonymisation of input data. How agencies will seek to apply these and other mitigations remains to be seen.
In addition, as many AI tools are heavily dependent on the initial human input or 'training data' set, the Report notes that it is essential that the law enforcement officers operating such software have sufficient technical training in order to ensure that these tools are used correctly. It is equally important that human accountability for the decisions made by law enforcement officers is retained. One of the Report's recommendations is that a cross-agency protocol should be created, covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. The protocol is intended to: (i) reduce the risk of disparate practices; (ii) ensure consistency across law enforcement agencies; and (iii) assist agencies as they seek to procure and utilise emerging AI technologies.
Commentary
The integration of AI and advanced technology into the criminal disclosure process has the potential to bring about a significant shift, considerably streamlining how disclosure is conducted.
The technology and AI use cases described in the Report will be very familiar to companies that have conducted or been involved in investigations in recent years. The use of document review software to prioritise the most relevant material for review is not yet, apparently, consistent across matters, and its use may depend upon the volume of documents to be reviewed and the company's/agency's resource constraints. It is noteworthy that the use of this tool described in the Report nonetheless still appears to envisage human review of every document, even those identified as much less likely to be material. Readers may be more familiar with taking this technology a step further and choosing to implement a cut-off point, below which the lowest-priority documents will not be subject to human review, further driving efficiency (a brief sketch of this approach follows). It remains to be seen whether the SFO and other investigating/prosecuting agencies will consider themselves able to adopt such an approach within the criminal disclosure regime.
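The cut-off approach can be illustrated in a few lines. The scores and threshold below are hypothetical; in practice the cut-off would be validated statistically, for example by sampling the excluded population to check that little relevant material is being missed.

```python
# Hedged sketch of a review cut-off: items scored below a threshold are
# excluded from human review. Scores and threshold are hypothetical;
# validation (e.g. sampling the excluded population) is omitted.
scores = {"doc1": 0.91, "doc2": 0.64, "doc3": 0.22, "doc4": 0.05}
CUT_OFF = 0.30  # hypothetical threshold

for_human_review = sorted(
    (d for d, s in scores.items() if s >= CUT_OFF),
    key=scores.get, reverse=True)
excluded = [d for d, s in scores.items() if s < CUT_OFF]

print("Review queue:", for_human_review)  # ['doc1', 'doc2']
print("Below cut-off:", excluded)         # ['doc3', 'doc4']
```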
There are many existing and emerging AI tools which can enhance the effectiveness of large-scale document review and production exercises, including for example tools which can generate chronologies and dramatis personae from large datasets or map complex corporate or payment structures. Whilst potentially less relevant to disclosure exercises, these are tools that will be used by defendants in the criminal process to assist defence teams in reviewing relevant material and building their cases. The Report rightly notes the risks inherent in the use of AI tools, and the need for proper training and understanding to ensure their appropriate use – how the proposed working group balances these risks with the potential advantages and the risk of a technically disadvantaged prosecution facing a defendant fully utilising these tools, will be a particular challenge.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.