Artificial intelligence ("AI") has rapidly become a revolutionising force across many areas of life, making it crucial to analyse its impact on the development of international arbitration.

The Silicon Valley Arbitration and Mediation Center ("SVAMC"), a non-profit foundation focused on technology-related disputes, has recently published draft guidelines on the use of AI in arbitration proceedings worldwide (the "Draft Guidelines"). The SVAMC Committee has emphasised that the Draft Guidelines are open for public discussion and collaboration, demonstrating its acknowledgment of both technological advances and the need to balance them reasonably against the established principles of law practice. Guideline 1(1) addresses understanding the uses, limitations, and risks of AI applications; guideline 1(2) focuses on safeguarding confidentiality; and guideline 1(3) deals with disclosure and the protection of records.

Novelty

The Draft Guidelines propose two alternative disclosure rules that might apply when AI is used in arbitration proceedings. Option A stipulates that the use of AI tools may be appropriate in certain circumstances, depending on the function for which the tool is used and other relevant factors. Option B, by contrast, provides for disclosure where the AI tools could have a material impact on the proceedings and/or their outcome, such as in the preparation of submissions, expert opinions, and other documents that are materially relied upon.

This may be a direct consequence of an increasingly important discussion on the potentially beneficial (or detrimental) impact of each option: for instance, would it be lawful and ethically sound to apply AI tools in the preparation of pleadings? And if so, to what extent could the AI be relied upon? This could provide ground for a solid debate and reflection on which of the two options should be implemented, as well as on the appropriate scope of regulation of AI in arbitration proceedings.

The primacy of arbitrators

A crucial provision in the Draft Guidelines concerns the role of arbitrators in light of the rising importance of AI. It stipulates that "[a]n arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator's decision-making function." Further guidance is provided in respect of due process and confidentiality in arbitration.

List of examples

The Draft Guidelines provide a non-exhaustive list of examples of the risks and limitations of AI when used by parties and arbitrators. For instance, "using AI tools to summarise cases and copy-paste them into pleadings without verifying whether AI's output may contain any errors" is frowned upon.

More is yet to come

The Draft Guidelines are still being finalised, but the issues they raise already offer excellent points for discussion. All stakeholders in arbitration, including parties and arbitrators, may soon have to contend with the rapid influx of AI and, hence, will have to rediscover the essence of lawyers' skill and creativity.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.