ARTICLE
24 September 2025

The Use Of AI By Arbitrators: Exploring The Opportunities And Managing The Risks

Addleshaw Goddard


The use of AI in the legal industry, including in international arbitration proceedings, is widespread and continues to evolve rapidly. But while the use of AI by arbitrators offers opportunities, it also gives rise to unique challenges that require careful consideration and management. This article examines how arbitrators can make use of AI and the difficulties that arise from their doing so. It considers how the use of AI by arbitrators can be managed and the challenges that may arise from misuse or a lack of transparency.

Introduction

The use of artificial intelligence (AI) in international arbitration proceedings is a topic that has received substantial attention in recent years as AI tools have become increasingly available and sophisticated. Key areas of debate have included the implications of AI for the authenticity of evidence, the challenges that the use of AI presents to an equality of arms between the parties, and the capability of AI to adjudicate disputes. The focus of this article is on one aspect of the discussion: the use of AI by arbitrators.

AI is already being used regularly in arbitral proceedings, including by arbitrators. The consensus of the arbitral community, as in many other areas of the legal industry, appears to be that it is necessary to embrace the use of AI. The use of AI by arbitrators looks set to continue: in the 2025 Queen Mary University International Arbitration Survey, 52% of respondents considered that arbitrators will increasingly rely on AI over the next five years. As parties look to AI-driven efficiencies and improvements in output from their counsel, it is unsurprising that arbitrators are also looking to AI to enhance their own 'service' offering.

Yet while AI offers opportunities, it also presents challenges that require management. Arbitrators, and those who appear before them, must be aware of the risks of the use of AI by arbitrators and how those risks are best managed. For while AI may be able to assist arbitrators in performing their functions, the use of AI must be approached with care to avoid undermining the foundational principles that underlie the arbitral process.

The use of AI by arbitrators

The availability and scope of AI tools have expanded exponentially in recent years. There is now widespread access to AI-powered tools of varying levels of sophistication, which are being used by parties and can be, and are being, used by arbitrators. The ability of AI programs to digest and summarise masses of data, and of generative AI applications to provide explanations and create content, makes AI tools potentially well-suited to the workload of a tribunal.

The use of AI by judicial or quasi-judicial decision-makers is not unique to the arbitration setting. English court judges have likewise been attracted to the use of AI tools to assist in the performance of their functions. Indeed, guidance to judges in England & Wales that was issued in 2023 and recently updated suggests that "there is no reason why generative AI could not be a potentially useful secondary tool". The guidance does, however, warn of the perils of the use of AI by the judiciary, and the same is true of the use of AI by arbitrators.

There are a multitude of tasks that an arbitrator is required to perform that might be well-suited to AI support. An AI tool could conceivably assist an arbitrator with, amongst other things: (i) performing administrative tasks, such as transcribing a meeting/hearing or tracking compliance with procedural orders; (ii) sorting, categorising or summarising large volumes of evidence and/or pleadings, such as preparing chronologies or identifying key issues; (iii) conducting legal research or analysing the parties' legal arguments; and (iv) assessing and evaluating the parties' cases, including drafting parts of an award. As AI technologies develop, so will the possible uses of AI by arbitrators.

The advantages are clear. Such tools may allow a busy arbitrator who is faced with large volumes of materials to offer a more efficient service that is focussed on evaluation and decision-making. The use of AI can reduce time spent on administrative and process-based tasks, effectively acting as a low-cost tribunal secretary. It could potentially go further, helping a tribunal digest or analyse the parties' factual or legal positions, which are often advanced over countless intricate pleadings and statements/reports. In turn, the use of AI may result in lower costs for the parties (if the tribunal is remunerated on an hourly basis) and quicker, higher-quality decision-making.

The challenges for the use of AI by arbitrators

The use of AI by arbitrators does, however, present a variety of challenges that require careful consideration and management. A tribunal is obliged to maintain the integrity of the proceedings, which can include the obligations not to delegate decision-making, to preserve confidentiality and to remain independent. The tribunal must also deliver an enforceable award. The use of AI can generate friction with, or potentially entirely undermine, these responsibilities.

Delegation of tribunal decision-making

A particular challenge arises in the use of AI in tribunal deliberations and decision-making. AI may be used to draft first or final versions of parts of an award. In doing so, it may offer the tribunal a view on an appropriate outcome or perform a step that is an integral part of cognitive decision-making (e.g. drafting). The result is that the process of tribunal decision-making is potentially delegated to a tool powered by AI.

Further, AI may be used to digest and summarise the parties' arguments and evidence. The result may be that arbitrators come to rely on AI-generated summaries of legal arguments and factual evidence rather than analysing the primary materials themselves.

The delegation of tribunal decision-making is not a new topic. In 2017, Mr Justice Popplewell (as he then was), addressing the analogous issue of the use of tribunal secretaries in international arbitration, referred to the "non-delegable and personal decision-making function" of a tribunal. He warned of the "real danger of inappropriate influence over the decision-making process by the tribunal, which affects the latter's ability to reach an entirely independent minded judgment" (P v Q (2017)).

Yet the availability and breadth of AI tools, and the speed and seeming reliability of their output, may present a more wide-ranging threat to tribunal decision-making than that posed by tribunal secretaries.

Integrity of the arbitral process

The use of AI may undermine the integrity and credibility of the arbitral process in various other ways.

The use of AI can impair confidentiality in the proceedings. The uploading of confidential data to an AI platform (such as a large language model like ChatGPT) might cause concerns over how that data will be used and stored and whether a third party may have access to it.

Over-reliance on AI tools can lead to errors in awards. Errors might arise where an AI tool undertakes an incomplete analysis of the papers or generates 'hallucinations' that are overlooked by the tribunal. An oft-cited example is the citation of fabricated case law.

The use of AI can produce output that cannot be readily understood or explained, because the algorithm used is not easily explainable and the data it has access to is not fully understood. When an arbitrator uses an AI tool that does not explain how it has reached a particular outcome (the 'black box' problem), the requirement for a 'reasoned' award can be undermined.

AI can interfere with the adversarial process, to the extent that a tribunal comes to rely on AI for summaries or explanations, rather than the parties themselves. A simple example is where a tribunal turns to an AI tool to explain a technical term in proceedings, rather than putting the question to the parties or their experts. This goes to the right of the parties to present their case to the tribunal and to the tribunal's obligation to consider and address the parties' arguments.

Finally, there is a concern that the use of AI may increase the risk of bias in arbitral proceedings if relied on without critical analysis. This arises from the inherent biases in the data sets that an AI tool may be trained on and that may not be readily apparent to the users of the tool.

Managing the use of AI by arbitrators

The challenges presented by the use of AI by arbitrators can and should be managed to uphold integrity in the arbitral process. There are various ways of doing so, which all serve the purpose of increasing transparency over tribunal conduct and ensuring that the tribunal's conduct is aligned with the parties' expectations.

There are four principal ways that the use of AI by arbitrators can be managed: (1) tribunal-imposed protocols or procedural orders; (2) the adoption of guidelines for specific proceedings; (3) the imposition of specific requirements by institutions; and (4) legislative intervention to regulate the use of AI by arbitrators.

(1) Tribunal-imposed protocols or procedural orders

At an early stage in the proceedings, a tribunal may wish to consider (or the parties might want to prompt consideration of) whether the tribunal's use of AI tools should be addressed openly with the parties. The same can be said of the parties' use of AI tools. This topic might most naturally fall within the discussion around, and the drafting of, the first procedural order.

Tribunals have broad powers to manage arbitration procedure. Such powers will generally be broad enough to include the making of directions on the use of technology in the proceedings. However, tribunals can be reluctant to discuss their use of technology with the parties, or may consider this a matter of tribunal discretion.

The key issues that may be addressed in a first procedural order include how the tribunal intends to use AI, the degree of disclosure to be given of the use of AI by the tribunal, and the safeguards that will be put in place to protect the parties' data and the process. It may be that the parties are also given an opportunity to object to the use of AI tools, or particular tools, by the tribunal.

(2) The adoption of guidelines

The tribunal, together with the parties, may consider adopting one of the small number of guidelines that now exist in relation to the use of AI by arbitrators (and in arbitration proceedings more generally). These guidelines are new and lacking in detail, which reflects the relatively nascent use of AI in arbitral proceedings. The number of guidelines is expected to grow in the coming years, as institutions grapple with the difficult issues presented by the use of AI.

Of particular note are the following guidelines, which have been introduced in the past 18 months, as institutions have rushed to address the use of AI in arbitration proceedings:

  • In April 2024, the Silicon Valley Arbitration and Mediation Center introduced Guidelines on the Use of Artificial Intelligence in International Arbitration (SVAMC Guidelines). The SVAMC Guidelines provide that arbitrators can use AI tools, but that they should not delegate their decision-making responsibilities to an AI tool. Arbitrators must review the output of AI tools and evaluate the reliability of AI-derived information independently and critically. The SVAMC Guidelines further provide that arbitrators should not rely on AI-generated information without making "appropriate disclosures" and allowing the parties an opportunity to comment.
  • The Stockholm Chamber of Commerce Arbitration Institute issued guidance in October 2024 (SCC Guidance). The SCC Guidance encourages tribunals to bear in mind (i) the consequences of the use of AI for confidentiality, (ii) the need to review and verify the output of AI tools to avoid biases and false information, (iii) the benefit of disclosure of the use of AI in upholding the integrity of the process, and (iv) that AI should not be used to delegate decision-making.
  • The Chartered Institute of Arbitrators Guideline on the Use of AI in Arbitration was introduced in March 2025 (CIArb Guideline). The CIArb Guideline encourages parties and arbitrators to discuss the use of AI by arbitrators, so that the parties have an opportunity to comment and object to its use. If AI is agreed to be used, it requires arbitrators to refrain from using AI in a way that could compromise the integrity of the proceedings or the enforceability of an award, such as by delegating tasks to AI tools that could influence decision-making. The CIArb Guideline requires that AI only be used as a "supportive tool" and that arbitrators verify AI-generated information and are responsible for all aspects of an award.
  • Also in March 2025, the International Centre for Dispute Resolution issued guidance to arbitrators on the use of AI tools (AAA-ICDR Guidance). The AAA-ICDR Guidance "encourages arbitrators to embrace this technology", which it suggests may lead to "greater precision and efficiency". It requires arbitrators to evaluate/verify the output of AI tools, maintain fairness/due process in the arbitral process, retain control over decision-making, disclose the use of "generative AI tools when such use materially impacts the arbitration process or the reasoning underlying their decisions", and safeguard confidential information. The AAA-ICDR has gone further than other arbitral institutions by offering certain AI tools to arbitrators for free.
  • The Vienna International Arbitral Centre published a note on the use of AI in April 2025 (VIAC Note). The VIAC Note emphasises the need for arbitrators to retain control over decision-making and for AI tools to facilitate, rather than replace, independent analysis by arbitrators. The VIAC Note also refers to the need for arbitrators to maintain confidentiality and to use AI tools responsibly. The VIAC Note suggests that arbitrators "may" disclose the use of AI tools.

(3) Greater regulation by institutions

While the above options are non-binding mechanisms for addressing the use of AI, it may ultimately be preferable to take regulation of this topic out of the hands of tribunals. This could occur through the adoption of specific requirements by arbitral institutions, which would apply to arbitrations being conducted within their ambit.

For example, institutions could set out minimum requirements for the use of AI by arbitrators or could require arbitrators to confirm, when they are appointed, that they will not use AI in the conduct of certain tasks (e.g. drafting). Further, institutions might prescribe that the use of AI by arbitrators is a topic that is required to be raised with the parties and addressed at the time of the discussion of procedural order no. 1.

(4) Legislative intervention

It is also possible that the rush to regulate AI through legislation and regulations will capture the work of arbitrators. By way of example, the EU AI Act (the AI Act), the main provisions of which come into effect in 2026, aims to provide a uniform legal framework in relation to the use of AI systems in the EU. The AI Act describes various "high-risk AI systems", which include AI systems used by a judicial authority or in alternative dispute resolution (such as in arbitration) to research and interpret facts and the law, or in applying the law to a concrete set of facts. If a tribunal were to make use of such high-risk AI systems during arbitration proceedings, it would have certain obligations under the AI Act, including a requirement to maintain a compliant quality management system and to ensure that the high-risk AI systems comply with the broader requirements of the AI Act.

Failure to manage the risks presented by the use of AI by arbitrators

Managing the risks of the use of AI by arbitrators is critical to ensuring transparency and to complying with the foundational principles of arbitration such as party consent, procedural fairness and arbitrator independence. It is expected that tribunals will increasingly address the use of AI (by themselves and the parties) at the outset of proceedings, to avoid difficulties down the line. However, with greater transparency comes an increased risk of legitimate – and illegitimate – due process challenges by parties.

The use of an AI tool by an arbitrator in a manner that is contrary to party expectations could conceivably lead a party to apply to remove the arbitrator, set aside an award, or challenge the enforcement of an award. Such challenges are particularly foreseeable when a tribunal has effectively delegated some important aspect of its decision-making to an AI tool, such as the evaluation of facts or law or the application of the law to the facts. The risk of satellite proceedings may be more likely where an arbitrator has not disclosed his or her use of an AI tool and where its use has materially affected the outcome of the case.

For arbitrations seated in London, the likely recourse of a disgruntled party will be to: (i) seek to remove the arbitrator for "failing properly to conduct proceedings" (section 24(1)(d)(i) of the Arbitration Act 1996 (the Act)); or (ii) challenge an award on the grounds of a serious irregularity for the failure of the tribunal to comply with its general duty to the parties (section 68(2)(a) of the Act). No such challenges have been made to date in this jurisdiction based on the use of AI. However, the validity of an award has been challenged in a recent petition before the US District Court for the Southern District of California based on the alleged use of AI to draft all or part of an award (LaPaglia v Valve Corporation).

Guidance can be drawn on this topic from the approach of the English courts to the use of tribunal secretaries, who are used by tribunals for administrative functions but whose role can stray into more substantive tasks. As noted above, the courts have emphasised the need for an arbitrator not to impair his or her decision-making function by delegating decision-making to a tribunal secretary. A similar line is expected to be taken in respect of the use of AI tools.

Conclusion

AI, and technology more generally, can be used by arbitrators as another tool in their arsenal to ensure that an award is delivered in an efficient and effective manner. It clearly has a role in the procedural and administrative functions of a tribunal. Yet, given the unique status of a tribunal, as the product of party consent and with duties to the parties, care must be taken to avoid undermining the integrity of the process. Arbitrators who come to rely on AI in place of party assistance and their own faculties are unlikely to meet users' expectations of a fair and just process.

As the use of AI becomes more widespread, the tools that are available to arbitrators will become more advanced and more widely available. In this context, there is an obligation on individual arbitrators to use AI responsibly. Parties must also hold their tribunal to account by being proactive and raising the topic of AI at an early stage. Arbitral institutions can and should also play a greater role in overseeing and regulating the position, to bring uniformity and best practice to the approach that is to be taken. Absent proactive engagement with this topic and increased guidance and regulation, it is expected that challenges to the arbitral process with an AI-nexus will only increase as the use of AI becomes ever more prevalent.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
