29 October 2025

Arbitration and AI: From Data Processing to Deepfakes. Outlining the Potential–and Pitfalls–of AI in Arbitration

K&L Gates LLP


The following article was produced for and first presented at the 11th International Society of Construction Law Conference, 22-24 October 2025.

Abstract

For all forms of dispute resolution, it is a case of "adapt or die". Conventional domestic construction arbitration in the United Kingdom has all but vanished, with most construction disputes now resolved in adjudication. Over the course of the next ten years, global projects will contend with increased competition for resources against the backdrop of growing populations and escalating pressures of climate change, while companies and their lawyers grapple with political change and the opportunities (and risks) that artificial intelligence (AI) will bring. Whether or not you are a fan of international arbitration in its current format, it will inevitably change in the next decade. Our panel will therefore explore how arbitration can adapt and stay relevant for its users, against the backdrop of the social, political and technological changes and challenges that it will face between now and 2035. In particular, we will look at how arbitration might harness AI to enhance, economise and expedite proceedings while avoiding the generation of fictional data and deepfakes.

Introduction

AI is the topic of the moment, and rightly so. For many, the authors included, AI represents the fourth industrial revolution.1 If it has not started to do so already, AI will soon disrupt and change global economies and societies in a profound way. It is shaping energy policy,2 it is playing a role in warfare3 and it is already appearing in a courtroom near you.4 Arbitral tribunals will need to get comfortable, and quickly, with using AI in arbitration–harnessing its strengths while avoiding its pitfalls.

Huge advances in the capabilities of large language models (LLMs) are accelerating the pace of change within the AI industry. No one is immune to its impact. Even lawyers, often resistant to change, are scrambling to get to grips with the library of platforms that are now marketed as game-changers in our work. As of September 2024, LexisNexis reported that more than 80% of lawyers use or plan to use AI in their work,5 a figure set only to increase. Lawyers now need to invest in the right tech stack and must learn how to deploy it effectively. Firms which resist this change will find themselves losing out to their more innovative competitors.6

Nowhere is this pressure more obvious than in the world of disputes and in particular international arbitration, where rising fees provoke concerns for clients across the globe. The 2024 GAR-LCIA roundtable7 discussed at length the notion that international arbitration had "lost its way", with spiralling costs, delays and lengthy submissions being criticised. While the complexity of disputes and the volume of information required to decide them appears to be increasing,8 the search for procedural and cost efficiency requires parties and their counsel to seek solutions which achieve better results in a more proportionate way. AI will surely help to achieve that. The potential competitive rewards for those that push themselves to the cutting edge could be significant.9

It is not only lawyers who will need to contend with the advent of AI in arbitration, but also legislators and arbitrators. Only five weeks before this conference, and our discussion of this topic, the International Centre for Dispute Resolution (ICDR) announced its launch of an AI-based arbitrator for documents-only construction cases.10 How quickly parties and legislators adopt AI arbitration remains unclear, not least because doubts must exist as to the enforceability of AI-written awards, given legislation in several jurisdictions which expressly11 or impliedly12 13 requires an arbitrator to be a person, i.e., a human.

There are huge opportunities with AI; there is a lot it can help lawyers do better. As much as practitioners need to invest in the right tools, however, they must also invest in the people that will be using them, encouraging them to incorporate AI into their practices in a way that is not only appropriate, efficient and innovative, but that is also ethical and meets the high standards required by the legal profession. Any lawyer utilising AI must be conscious of the quality of both the input and the output, as well as the limitations of the platforms. The risks in AI can be enormous, whether the cause be negligent or malicious.

Here we discuss both the opportunities and risks for the international arbitration community as it embraces AI. The message is clear: while technology may revolutionise how we conduct disputes, the revolution only works if the people using the tools know what they are doing.

Opportunity

LLMs are ideal tools for the complex tasks required of them by international arbitration:

  • First, the training of the model gives it an incredibly powerful frame of reference to draw from when answering queries. When the generic training data is paired with legal-specific data, the resulting products can be extremely valuable.14
  • The ability to ingest and analyse large amounts of data quickly makes AI tools an incredibly powerful means of increasing efficiency, doing in minutes, and in a more predictable way, what would take a human reviewer hours or more.15

AI has the potential to revolutionise all parts of the international arbitration life cycle and the work of practitioners, experts and tribunals alike. It will allow participants to complete tasks at all stages of a case with greater efficiency and accuracy, finding added value whilst also reducing the cost of individual actions. Many of the issues that have been identified by the arbitration community can be in part addressed by AI. The following are a few of the areas where AI can support international arbitration.

Predictive analytics

When a dispute is contemplated, a client may wish to consider its position and its prospects of succeeding if the matter were to proceed to arbitration. Using AI, key documents can be analysed for an assessment of the likely strengths and weaknesses of various case strategies. Where parties use an AI platform trained on legal data and on the concept of legal precedent, the ability to use AI to stress-test legal arguments may be an invaluable tool in helping a party to decide whether it is worthwhile pursuing a matter to arbitration or whether it is better to seek a commercial solution through negotiation or alternative dispute resolution.

This model of predictive analytics has been embraced by the International Chamber of Commerce (ICC) and the Permanent Court of Arbitration as being of assistance to parties in coming up with the most effective legal strategies.16 17 18 In construction disputes, the American Society of Civil Engineers in 2023 tested the ability of AI to analyse and predict the outcomes of disputes that had already been decided in adjudication and concluded that the AI predicted the real result with 95% accuracy.19 Such a use for AI can help clients who wish to consider their position before approaching external counsel; however, there is a limit to the reliability of the (albeit evidence-based) output of AI given constraints in precedent ingestion and the unpredictability of opposing counsel's arguments. The authors therefore doubt whether AI can ever truly substitute the judgment and experience of expert counsel when evaluating the likelihood of success of a case.

Arbitrator selection is a natural extension of this use case for AI in arbitration. AI solutions may provide parties with the ability to research in depth the candidates for appointment in their disputes. AI may be able to collate information on an arbitrator's previous awards or decisions—if those can be fed into a database—and it will also be able to search online for any public comments or publications that an arbitrator may have made on a particular issue. Such intelligence will allow parties to consider their likely chance of success with an individual candidate, to identify the arguments that might be likely to hold sway with a particular arbitrator, and to assess the likelihood of achieving a successful damages award based on the available facts and information.20 21 22 23 24

The key limitation of AI-assisted predictive analytics is the volume and quality of the data that it is using to make its predictions. One obvious restriction is the limit on the number of awards that an LLM might be able to review for the purposes of populating its database as to the decision-making of an arbitrator. An attraction of commercial arbitration is its privacy and confidentiality, with many awards not being published. Of major institutions, only ICSID publishes full awards, with others including the ICC, ICDR, LCIA and SIAC publishing only limited, redacted or summarised awards.25 The disclosure of information regarding awards, including the names of arbitrators, experts and counsel, raises questions of data protection and confidentiality which may limit attempts to broaden the spectrum of disclosures.

As such, the utility of predictive analytics may be limited in a world where arbitration maintains its privacy. Further, the available dataset for AI is likely to be restricted largely to written material, thereby omitting a potential wealth of oral and nonverbal information about an arbitrator–and, in particular, about what that arbitrator finds persuasive. According to the well-known behavioural scientist Professor Albert Mehrabian,26 face-to-face communication is made up of three main elements: words, body language (nonverbal behaviour) and tone of voice, which he estimated account for 7%, 55% and 38% of effective communication, respectively.

Within that context, if AI is only able to review 7% of the available dataset around a person's communications, we must conclude that AI is not seeing the whole picture. The authors therefore do not consider that AI can supplant the experience of counsel who have sat face-to-face with an arbitrator and watched them listen to and evaluate evidence and submissions in a hearing room. AI is a tool to help human evaluation, not replace it.

Document review

Collecting and reviewing documents can be an expensive and time-consuming exercise, particularly in complex scientific, technology or construction disputes, where data volumes can be vast. Although technology-assisted review has been in use in e-discovery for many years, AI-enabled discovery tools are a hot topic in the legal world as vendors release their solutions to the market.

An obvious question arises as to whether AI can replace first-level human reviewers, allowing arbitration teams to focus on the substance of a dispute rather than the binary decision-making of whether a document is "relevant" or not. However, vendors offering e-discovery solutions are still learning about the true utility of AI tools. While e-discovery and AI review tools remain largely untested by the majority of practitioners, it may be difficult to justify to many clients–even those in the construction sector–the costs associated with licensing and deploying an AI review tool when lawyers will still need to review the output in any event. This is because arbitration rules, evidential rules and ethical rules regulating legal practitioners have not yet evolved to the point where a human is no longer required to attest to the nature of a documentary search undertaken.

The investment required to up-skill teams in the effective "prompt engineering" needed for AI-enabled e-discovery can be difficult to justify from a time and cost perspective. Whether clients are willing to pay for such up-skilling and offer up their documents to allow teams to be trained is unclear. Nevertheless, prompt engineering can be refined and improved by uploading small test batches of documents to an AI database, allowing the e-discovery approach to be tuned before it is applied more widely (a minimal sketch follows below). Doubts also remain as to whether AI tools currently available are capable of handling matters with a large number of issues in dispute. For example, if there are numerous claims for variations within a construction dispute, it may be that the number of issues exceeds the capabilities of the platform–with the result that old-fashioned keyword searches may become necessary.
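
To illustrate the test-batch refinement mentioned above, the sketch below scores a candidate review prompt against a small batch of documents that humans have already labelled as relevant or not. It is a minimal sketch under stated assumptions: `classify_relevance` is a hypothetical placeholder for whichever AI review tool is being trialled, and the precision and recall a team is prepared to accept is a matter of judgment, not something prescribed by any rule or vendor.

```python
# Illustrative sketch: scoring a candidate review prompt against a small,
# human-labelled test batch before the prompt is trusted on the wider data set.
# classify_relevance() is a placeholder for whichever AI review tool is trialled.
from typing import Callable

def evaluate_prompt(
    prompt: str,
    test_batch: list[dict],  # each item: {"text": "...", "relevant": True/False}
    classify_relevance: Callable[[str, str], bool],
) -> dict:
    """Compare the tool's relevance calls against the human labels."""
    tp = fp = fn = 0
    for doc in test_batch:
        predicted = classify_relevance(prompt, doc["text"])
        actual = doc["relevant"]
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall, "batch_size": len(test_batch)}

# The prompt is then reworded and re-scored until, for example, recall on the
# labelled batch is acceptable, before being run over the full document set.
```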

Within construction disputes, however, expert and professional advisory firms have been developing use cases for these AI-assisted e-discovery tools, which are capable of ingesting large amounts of data. These have been used to develop better ways to handle the complex and data-heavy claims often seen in construction projects, including using AI in the collation of data around delay and disruption,27 with the goal of reducing the time and cost of document review. Moreover, these tools seek to use the plain-language, context-based approach of LLMs to search on a more holistic basis for evidence relating to such claims, rather than relying on blunt keyword searches which may miss a particular nuance in the document set. For example, just searching for the word "delay" is not going to pick up an email chain where parties discuss needing "an extra day". The way AI tools review data means that an AI system is more likely to pick up both types of document when flagging for relevance.
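
To make the "delay" versus "an extra day" point concrete, the following is a minimal sketch of an embedding-based (semantic) search run alongside a plain keyword filter. It assumes the open-source sentence-transformers package and an illustrative model name; the example documents are invented, and this is not a description of any vendor's e-discovery product.

```python
# Minimal sketch: keyword search vs. embedding-based (semantic) search.
# Assumes the open-source sentence-transformers package; the model name
# and example documents are illustrative only.
from sentence_transformers import SentenceTransformer, util

documents = [
    "We will need an extra day on site before the slab can be poured.",
    "The delay to Section 3 was caused by late design information.",
    "Payment application 14 is attached for your review.",
]

query = "delay to the programme"

# Keyword search: only documents containing the literal word "delay" match.
keyword_hits = [d for d in documents if "delay" in d.lower()]

# Semantic search: rank documents by cosine similarity of embeddings, which
# also surfaces the "extra day" email despite the missing keyword.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]

ranked = sorted(zip(documents, scores.tolist()), key=lambda x: x[1], reverse=True)

print("Keyword hits:", keyword_hits)
for doc, score in ranked:
    print(f"{score:.2f}  {doc}")
```

A human reviewer would still confirm relevance; the ranking simply changes which documents are looked at first.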

Like lawyers, expert witnesses will be equally susceptible to the pressure to innovate in order to maintain relevance and competitiveness. An expert's facility with AI may become a key consideration for law firms looking to appoint experts on disputes. Those who are willing to embrace AI in their analysis will naturally rise to the top.

Removing the human aspect of any first-level document review is not without its drawbacks. The first-level review in any e-discovery exercise–even one which has used a certain level of "machine learning" within a document-hosting and review engine–has been an area where junior lawyers within dispute resolution teams have "cut their teeth" on large cases. By learning how documents apply to the pleadings, witness statements and expert reports, junior lawyers gain the opportunity to understand how case theory develops and an insight into the commercial operations of their clients. Removing junior lawyers' opportunity to conduct first-level reviews will have consequences for their development and risks de-skilling them if this aspect of disclosure is not properly managed. Re-skilling lawyers so that they can analyse the results of AI-assisted e-discovery (i.e., getting humans to conduct a second-level review) will be important to ensure that junior lawyers continue to learn about case theory and how to sift for truly relevant information.

Research

AI has a particularly strong use case in legal research, subject to the risks discussed below. Clients with the budgets to access legally trained AI platforms may be able to dispense with outside counsel services for some research questions that they would ordinarily outsource. Being able to access complex legal analysis in a matter of moments, rather than paying for junior lawyer research time, may give clients enough of a steer to give them comfort in their decision-making. As such, research conducted by law firms will likely be confined to more nuanced, challenging questions of law which are not simple to define or answer. Given the ability of a large number of–at least institutional–clients to conduct their own initial research, firms will need to demonstrate clearly their "value-add" by their expertise in complex matters.

However, as the large–and growing–number of hallucinated case citations shows, there are substantial risks in clients relying on open-source LLMs as their case-law source. Many law firms are instead collaborating with well-respected industry publishers whose closed-source LLMs work alongside those publishers' case-law databases. With the right coding–and with specialist lawyers then interrogating the research–this type of hybrid, or limited, AI may prove the best of both worlds. Law firms may still handle legal research, but when doing so will harness the power of an LLM to materially reduce the time spent trawling through case headnotes based on merely an index or a Boolean search.

Summarisation and drafting

AI has proved itself to be very useful in ingesting large amounts of data, whether that be multiple documents or long documents, and presenting summaries of that data which allow for a quick understanding of its contents. This particular skill has a number of applications for international arbitration, including condensing long documents into a short summary that allows lawyers to assess their relevance, or taking a series of documents and then sorting and creating a précis of those documents in a chronology.
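
As a rough sketch of the chronology use case described above, the snippet below passes a handful of invented document extracts to a general-purpose LLM API and asks for dated events in order. The openai client, model name and prompt wording are assumptions for illustration only, not a description of any legal-specific platform, and the draft chronology would still need to be checked against the underlying documents.

```python
# Illustrative sketch only: using a general-purpose LLM API to build a
# first-draft chronology from a handful of documents. The model name and
# prompt are assumptions; the output must be checked against the sources.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = {
    "email_012.txt": "3 May 2023: Contractor notified the Engineer of differing site conditions...",
    "minutes_007.txt": "Progress meeting of 17 May 2023: parties agreed to re-sequence the piling works...",
    "letter_019.txt": "On 2 June 2023 the Employer rejected the Contractor's extension of time claim...",
}

prompt = (
    "From the documents below, extract every dated event as a line in the form "
    "'YYYY-MM-DD | source | one-sentence summary', then list the lines in date order.\n\n"
    + "\n\n".join(f"[{name}]\n{text}" for name, text in documents.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # draft chronology for human review
```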

The taking of evidence will also become an AI-assisted endeavour. Witness interviews have long been an exercise in notetaking and remembering nuance in order to craft appropriate and helpful proofs of evidence. Video conferencing platforms, now common, are almost all equipped with AI-assisted transcription capabilities. Subject to security concerns, an AI-generated transcript of a meeting can speed up the process of producing witness summaries and proofs of evidence while also ensuring that vital information is not missed.

Nevertheless, AI transcription remains a work in progress. It routinely misses or mishears text, and often contains errors, particularly when specific and unique information is being discussed. AI transcription is limited to certain languages (for instance, the authors have not seen a reliable AI transcriber that operates in Arabic), requires good internet connections and good microphones, and relies on slow and clear diction. But in a limited way and at a low cost, these tools provide a level of accuracy that allows the reviewer to revisit and get the gist of a conversation rather than having to create a note of the meeting from scratch.

When coupled with a summarisation tool, it is possible to create an AI-generated proof of evidence that can materially streamline the process of evidence-taking. Indeed, in light of recent criticisms from the English courts28 about witness statements failing to comply with English procedural law29 as to the requirement for a witness statement to refer to "matters of fact of which the witness has personal knowledge that are relevant to the case", there is a temptation to think that a witness statement drawn from a verbatim AI transcript might be the best way to ensure procedural compliance. However, as the English courts have also said, "the best approach for a judge to adopt in the trial of a commercial case is, in my view, to place little if any reliance at all on witnesses' recollections of what was said in meetings and conversations".30

In the authors' experience, memories are seldom sufficiently linear or reliable to allow for a verbatim transcript of a witness's recollections to be produced as a witness statement in a case. While an AI transcript will help the witness's voice to be communicated in an authentic way, achieving an accurate and useable witness statement will still require a detailed review of the documents. For the time being, useful and workable witness statements will still require the direction of a lawyer to help guide the witness to focus on relevant facts and documents in a chronological and thematic way, rather than simply relying on one person's ephemeral recollections.

Hearings

AI has the potential to revolutionise how hearings are conducted. Technology now pervades hearings: electronic bundles, live transcripts and hybrid video feeds are the norm in arbitration in a post-COVID world. The next stage is to use AI during a hearing both to reduce cost and to increase efficiency in what is an all-consuming stage. AI transcription, as discussed above, could become an incredibly powerful tool once its accuracy rate improves. AI's sweet spot in analysing and summarising data means it is in prime position to quickly review a hearing transcript to pick out key themes relevant for future preparation and also any inconsistencies in the testimony given by witnesses or experts that can be seized upon to a party's advantage. Being able to quickly analyse evidence given on the stand against written statements is a game changer that will allow teams to use technology to gain an advantage during trials. In cases where there are multiple witnesses and experts, client representatives can obtain regular updates on the progress of the hearing and may notice key points which may warrant further reflection. In that scenario, using AI to quickly turn around a summary analysis following receipt of a daily transcript may give technologically literate teams the edge.
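
As a hedged sketch of that workflow, the snippet below asks a general-purpose LLM to flag points where a day's oral testimony appears to diverge from the same witness's written statement. The client, model name and prompt are illustrative assumptions rather than a description of any hearing-support product; any flagged passage would be checked against the certified transcript before it is relied on.

```python
# Illustrative sketch only: flagging apparent divergences between a witness's
# written statement and the daily hearing transcript. Model name and prompt
# are assumptions; flagged passages must be verified against the record.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flag_inconsistencies(witness_statement: str, daily_transcript: str) -> str:
    prompt = (
        "Compare the witness statement and the hearing transcript below. "
        "List, with transcript references, any points where the oral evidence "
        "appears to differ from the written statement, quoting both passages.\n\n"
        f"WITNESS STATEMENT:\n{witness_statement}\n\n"
        f"TRANSCRIPT:\n{daily_transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Counsel would review each flagged passage against the certified transcript
# before deploying it in cross-examination or closing submissions.
```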

Risks

The highest profile risk when using AI, about which practitioners and clients may be preoccupied, is the problem of hallucinations–principally of hallucinated (i.e. fictional) case references. Stories from jurisdictions around the world have already shown how lawyers can get themselves into a lot of trouble when using LLM research tools without proper scrutiny. As at the date of this article, there are already 358 cases worldwide in which a hallucinated case reference has been created by AI.31 Here are some high-profile examples:
  • In England and Wales:
    • A junior barrister was handed a wasted costs order for relying on five authorities that did not exist. The barrister has been referred to the Bar Standards Board for disciplinary action, and the High Court considered whether their conduct amounted to contempt of court.32
    • 45 citations within a witness statement drafted by a solicitor were found to be false in some way, including 18 which did not exist at all. The solicitor was referred to the Solicitors Regulation Authority for disciplinary action.33
  • In the United States:
    • A law firm and an individual attorney received a joint sanction of US$5,500 and a mandatory requirement to attend a course on the dangers of AI after filing a brief containing fake quotations and nonexistent authority.34
    • Three attorneys received public reprimands from the court for making false statements following the submission of two motions which contained fabricated citations. They were removed from the case and reported to the Alabama State Bar.35
  • In Canada:
    • A lawyer with over 30 years of experience relied on fabricated cases in a memorandum submitted to the court. The court stated that "counsel who misrepresent the law, submit fake case precedents, or who utterly misrepresent the holdings of cases cited as precedents, violate their duties to the court".36

A seasoned practitioner will understand that the phrase "don't trust, always verify" means that even human-generated research should be properly vetted and stress-tested to ensure the accuracy not only of the answer but of the sources themselves. When it comes to using AI for research, tools that are specifically designed for legal practitioners are likely to yield more trustworthy results than open-source LLM platforms. This is because "guard rails" have been developed around the training data collated for legal industry AI tools. Nevertheless, this is not an automatic guarantee of accuracy; checking source materials and conducting independent searches in legal databases for cited materials is vital to avoid the embarrassment and potential sanctions that come from falling into the fake case-citation trap.

Moreover, not only do sources require verification, but the answer to a research question generated by AI should not automatically be trusted to be correct. It is a well-known problem with LLMs that they will prefer to answer in the affirmative–i.e. to give you the answer that you want and to avoid telling you no. This is why it is so important to stress test the reasoning that has been given to you by the AI to ascertain whether it is a sound and defensible response. Even AI tools which are legally trained will sometimes use the wrong source material, or material that does not provide sufficient support for a proposition, to give an affirmative answer so as to please the user rather than answering in the negative or avoiding giving an answer at all. For example, you should check whether an answer comes from a valid case citation or whether it has come from a precedent document or template that has no legal force. The latter might be included in a legal database as part of the training data of a platform–and it can therefore still become a hallucinated response.
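
The verification discipline described above can be partly systematised. The sketch below extracts citations from an AI-generated draft and checks each one against an authoritative source before the draft goes any further. `search_official_database` is a hypothetical placeholder for whatever subscription database or manual search the team actually uses, and the citation pattern is deliberately simplified for illustration.

```python
# Illustrative sketch: every citation in an AI-generated draft is checked
# against an authoritative source before the draft is relied upon.
# search_official_database() is a hypothetical placeholder for a manual or
# subscription-based search of a recognised law report series or database.
import re
from typing import Callable

# Very rough pattern for neutral citations such as [2025] EWHC 1040 (Admin);
# real citation formats vary widely and would need a fuller set of patterns.
CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+[A-Z]+\s+\d+(\s+\([A-Za-z]+\))?")

def verify_citations(draft_text: str,
                     search_official_database: Callable[[str], bool]) -> list[str]:
    """Return the citations that could NOT be found in the authoritative source."""
    unverified = []
    for match in CITATION_PATTERN.finditer(draft_text):
        citation = match.group(0)
        if not search_official_database(citation):
            unverified.append(citation)
    return unverified

# Any citation returned here is treated as unreliable until a human confirms
# that the case exists, says what the draft claims, and remains good law.
```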

Senior lawyers need to understand how LLM platforms work and the type of results that are likely to be generated, so that they can properly supervise the juniors working for them, especially the next generation of trainees, paralegals and junior lawyers. For them, using AI will be as normal as using email was to most senior lawyers at the start of their careers. It is likely that the next generation of lawyers will use AI far more readily than any other generation of practitioner. Being able to properly supervise these lawyers, and to train them to spot the pitfalls of relying on AI rather than conducting "manual" research, will be vital to ensure that there is no drop in the quality of their work product and, ultimately, in the service to clients.

Perhaps even more troubling than accidental reliance on fake citations is the potential deliberate use of falsified evidence in legal proceedings. The alarming quality of so-called deepfakes, which seek to use the image or voice of somebody to generate something that that person has not actually said or done, poses an extreme risk to dispute resolution since the veracity of evidence may become increasingly questionable.37

When faced with a document that one party alleges is fraudulent (or which appears questionable), how does an arbitral tribunal carry out the task of ascertaining the veracity of the evidence? While tools exist which claim to be able to spot deepfakes, testing has shown that these platforms are not yet reliable when it comes to spotting falsified evidence and are therefore of limited utility in these circumstances.38 39 40 If a tribunal cannot rely on technology to ascertain whether something has been created by AI, how can it equip itself to make that decision? Should it simply decide that the document in question holds no weight? Should it seek submissions from the parties on the issue? Should it commission forensic analysis? The answer will need to be one for each tribunal depending on the specific circumstances of each case.

Nevertheless, being alive to the evidential issues that AI can cause is already important, and will only become more so as the level of AI content within arbitration increases. Within that context, the Global Investigative Journalism Network has released a guide to detecting AI-generated content,41 in which it identifies seven categories of AI detection and advocates three levels of checking based on the time available for review: a 30-second red flag check, a five-minute technical verification, and a deep investigation.

Maintaining data security and privilege is another area where practitioners will need to be extremely careful in the adoption of AI. Lawyers will need to ensure that their LLM products do not ingest privileged material into the training data and then apply that data in a way that inadvertently waives privilege. The security arrangements even for internal LLM platforms will need to be thoroughly scrutinised in order to assess the risks attached to them. External platforms will require explicit client waivers as to confidentiality, GDPR and privilege before data can be uploaded to a public LLM, given the public and accretive nature by which LLMs gather, store and share data. More importantly, however, when it comes to AI e-discovery, commentators have suggested that the use of such technology needs to be tested before the courts, with guidance and principles laid down to ensure that it can be utilised effectively without the risk of disclosing privileged information such as review logs and prompts.

Given that the plain-language nature of prompting will bring review protocols closer in line with case strategies, there is a danger that using AI too liberally or without proper consideration of privilege may result in accidental over-disclosure of one's strategy to the other side. AI in e-discovery may have its place, for the time being, in an initial internal review stage where the initial universe of documents is analysed for relevance to the dispute and for key documents. However, when disclosure requests and Redfern schedules are in play, it is likely that the use of AI will be limited to avoid over-disclosure, and those who use it should proceed with caution to ensure that they do not give away more than they would when using search terms.

Regulations and procedures

As noted above, there is a growing call for arbitrators to use all the powers at their disposal to better control arbitration, while clients are calling for their lawyers to be innovative in their approach to dispute resolution. AI will prove itself to be a catalyst for this increased focus on the efficacy and efficiency of arbitration, if all parties involved look to use the tools at their disposal appropriately.

To that end, discussions regarding the use of AI in arbitration are likely to become part of the early conversations both with clients and, more importantly, with opposing counsel and the tribunal. Seeking to set the parameters for the use of AI is likely to become part of the negotiation of the terms of reference or first procedural order of an arbitration as parties seek to use the tools to their advantage whilst catering for ethical and legal obligations.

As arbitral institutions issue more guidance and rules on the use of AI, and national courts develop their approach to the use of AI in court proceedings, there will be a more detailed body of guiding principles to help arbitrators establish the boundaries for effective AI use. Certain arbitral institutions have already published guidelines setting out what they consider to be effective governance of AI in international arbitration. Two major developments are as follows:

  • The Silicon Valley Arbitration & Mediation Center42 was a first mover, consulting on its guidelines, which were published on 30 April 2024. Among other things, the guidelines contain provisions:
    • For parties and their representatives, requiring competence and diligence in the use of AI and respect for the integrity of the arbitration and the evidence used within it. This places a duty on practitioners to understand the tools they are using and to safeguard against their inappropriate use, whether by failing to interrogate the AI's output or by using AI in a way which harms the integrity of the arbitration, including by falsifying evidence.
    • For arbitrators, forbidding the delegation of the decision-making function of their mandate to AI, and requiring them to safeguard the integrity of the proceedings by avoiding the introduction of information from outside the record through AI and by verifying sources.
  • The Chartered Institute of Arbitrators (CIArb) has issued guidelines43 which set out the benefits and risks of AI in arbitration, make recommendations for its proper use, and address arbitrators' powers to give directions and rulings on the use of AI by parties in arbitration. The CIArb guidelines can be distinguished from the ICDR AI-arbitration product mentioned in the Introduction, above, since they prohibit decision-making being delegated to AI. Instead, the CIArb guidelines provide arbitrators with tools in the form of a template agreement on the use of AI in arbitration and a template procedural order on the use of AI in arbitration. These templates allow the parties to agree parameters establishing either (i) which tools counsel may use, or (ii) the functions and tasks for which AI may be used. They also set out a list of obligations on the parties to ensure that they understand the tools they are to use, their limitations, and the impact of their use, including ethical and bias concerns, confidentiality and data security, and a duty not to mislead. The template procedural order also provides governance for so-called "High Risk AI Use", described as a use case that risks a breach of privacy, confidentiality or data security obligations, the potential to undermine procedural integrity, or the potential to assert a nonhuman influence on the award.

Further publications have been issued by other global institutions, with many law firms and barristers' chambers also now providing their own guidance on the use of AI in arbitration. As governments and international bodies issue laws and regulations on the use of AI (including the EU's AI Act), and national courts issue not only judgments on the use of AI but also guidance and practice directions on how it should be used in court, practitioners and arbitrators will need to stay abreast of their legal and regulatory obligations. This will help them to ensure that their use of AI in arbitration complies not only with the relevant arbitral rules governing their dispute, but also with the law of the seat of the arbitration, the governing law of the arbitration, and indeed their own professional obligations.

Conclusion

If we are to meet the challenge that AI sets for us and also meet the expectations of our clients as they evolve alongside the development of these tools, it will not be enough to stick with "tried and tested", nor will it be sufficient to rely on specialists or younger team members who have more experience and facility in using AI technology. As the famous computer pioneer Admiral Grace Hopper observed, when commenting on the future of data processing as far back as 1976, "the most dangerous phrase a [data processing] manager can use is 'We've always done it this way.'".44 Arbitrators, experienced practitioners, experts and all levels of the legal profession–both in house and private practice–must make sure that they learn and appreciate the impact that AI is having and will have on how disputes are to be conducted. They need to learn this because AI is already here. Failing to understand it will not only mean being left behind, but may also run the risk of being caught out.

Footnotes

1. Ross, P. and Maynard, K., Towards A 4th Industrial Revolution, Intelligent Buildings International, Vol. 13, No. 3, Informa UK Limited, trading as Taylor & Francis Group (2021), pp 159-161.

2. Birol, F., Energy and AI, International Energy Agency, 4 April 2025.

3. MacDonald, A., AI-Powered Drone Swarms Have Now Entered the Battlefield, Wall Street Journal, 2 September 2025.

4. UK Courts & Tribunals, Judiciary: Artificial Intelligence (AI) - Guidance for Judicial Office Holders, 14 April 2025.

5. Brown, D., AI adoption soars across UK legal sector, Lexis Nexis, 25 September 2024.

6. Murray, S., AI's seismic effect changes client expectations of law firms, Financial Times, 25 June 2025.

7. Ross, A., GAR-LCIA roundtable: is it time for a reset?, Global Arbitration Review, 23 July 2024.

8. Sherman, J., Renehan, K., Lui, H. and Patel, A., The Guide to Evidence in International Arbitration - Third Edition: Using technology and e-disclosure, Global Arbitration Review, 9 September 2025.

9. Limond, K. and Calthrop, A., Artificial intelligence in arbitration: evidentiary issues and prospects, Global Arbitration Review, 9 September 2025.

10. Moody, S., ICDR to launch AI construction arbitrator, Global Arbitration Review, 18 September 2025.

11. Article 1450 of the French Procedural Code: "The duties of an arbitrator may only be carried out by a physical person enjoying the full possession of his or her capacity."

12. It is either an express or an implied understanding of sections 23A(1): "an individual who has been approached by a person in connection with the individual's possible appointment as an arbitrator"; and 24(1)(c): "that he is physically or mentally incapable of conducting the proceedings" of the UK Arbitration Act 1996 (as amended) that arbitrators must be people.

13. Article 11(1) of Law 2 of 2017, the Civil and Commercial Arbitration Law of Qatar: "The arbitrator shall be appointed from the arbitrators who are approved and registered in the registry of arbitrators at the Ministry. Furthermore, any other person may be appointed as an arbitrator if he meets the following conditions: (a) has full capacity..." - again, "any other person" who must have "full capacity" is either an express or an implied requirement that an arbitrator must be a person, unless the Qatari Ministry of Justice registers AI as an arbitrator.

14. Yovchev, I., Why Training Data is Important in Artificial Intelligence?, Digital Reality Lab, 21 June 2023.

15. Maheshwari, R., Advantages Of Artificial Intelligence (AI) In 2025, Forbes, 24 August 2023, updated 2025.

16. Petry, J. and Gassmann, T., Artificial intelligence in dispute resolution: developments, challenges and perspectives for legal practice, Reuters, 11 July 2025.

17. Shoukat, D., Using AI in International Arbitration: From Predictive Analytics to Automated Awards, Social Science Research Network, Elsevier Inc., 26 July 2025.

18. Puertas Álvarez, O., The Crystal Ball: AI Predictive Analytics in Arbitration: Navigating Promise, Pitfalls and Paradigm Shifts, Global Trade and Customs Journal, Vol. 19, Issue 11/12 pp 738-747.

19. Mitchell, T., Artificial Intelligence and the Future of Construction Dispute Resolution, Construction Management Association of America - Member Communication Experience, 31 March 2025.

20. Reynolds, A. and Melendez, P., AI arbitrator selection tools and diversity on arbitral panels, International Bar Association.

21. Johns, J. B., Artificial Intelligence in the Selection of Arbitrators: Whether to Trust the Machine, Institute for Transnational Arbitration - ITA in Review, Vol. 6, Issue 3, 2025.

22. Nick, L., AAA-ICDR® Launches New AAAi Panelist Search to Enhance Panelist Selection with AI Technology, American Arbitration Association, 10 October 2024.

23. Jus Mundi - AI-Powered Search for International Law and Arbitration.

24. Paisley, K. and Sussman, E., Artificial Intelligence Challenges and Opportunities for International Arbitration, New York State Bar Association - New York Dispute Resolution Lawyer, Vol. 11 No. 1, Spring 2018.

25. ibid.

26. Mehrabian, A., Silent messages: Implicit communication of emotions and attitudes, Wadsworth, 1981.

27. Farouk, M., An Artificial Intelligence Tool for the Selection of Delay Analysis Technique in Construction, American University in Cairo - AUC Knowledge Fountain, 31 January 2023.

28. (1) Fulstow and (2) Woods v. Francis [2024] EWHC 2122 (Ch), David Stone (sitting as Deputy High Court Judge) at paragraphs 26-36.

29. UK Ministry of Justice, Civil Rules and Practice Directions - Practice Direction 57AC: Trial Witness Statements in the Business and Property Courts.

30. Gestmin SGPS S.A. v. (1) Credit Suisse (UK) Limited and (2) Credit Suisse Securities (Europe) Limited [2013] EWHC 3560 (Comm), Leggatt J. at paragraph 22.

31. Charlotin, D., AI Hallucination Cases Database, data viewed on 12 September 2025.

32. R. (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1040 (Admin).

33. Joint hearing of R. (Ayinde) v. London Borough of Haringey, and Hamad Al Haroun v (1) Qatar National Bank QPSC & (2) QNB Capital LLC [2025] EWHC 1383 (Admin).

34. In Re Marla C. Martin (2025) Case No. 24 B 13368, United States Bankruptcy Court, Northern District of Illinois, Eastern Division (18 July 2025).

35. Johnson v Dunn (2025) Case No.: 2:21-cv-1701-AMM, United States District Court, Northern District of Alabama, Southern Division (23 July 2025).

36. Ko v Li [2025] ONSC 2766.

37. Swerling, G., Doctored audio evidence used to damn father in custody battle, The Telegraph (UK - online), 31 January 2020.

38. Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P. and Waddington, L., Testing of detection tools for AI-generated text, International Journal for Educational Integrity, Vol. 19, Article 26 (25 December 2023).

39. Runyon, N., Deepfakes on trial: How judges are navigating AI evidence authentication, Thomson Reuters, 8 May 2025.

40. The Problems with AI Detectors: False Positives and False Negatives - Generative AI Detection Tools, Guides at University of San Diego Legal Research Center.

41. van Ess, H., Reporter's Guide to Detecting AI-Generated Content, Global Investigative Journalism Network, 1 September 2025.

42. SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration, Silicon Valley Arbitration & Mediation Center, 30 April 2024.

43. Guideline on the Use of AI in Arbitration (2025), The Chartered Institute of Arbitrators (UK), 19 March 2025.

44. Surden, E. quoting Admiral Grace Hopper, Privacy Laws May Usher In 'Defensive DP': Hopper, Computerworld, p. 9, 26 January 1976.

