ARTICLE
24 August 2023

Arbitration And AI: Outside Of The (Black) Box

IPG Lex&Tax
Offices based in: Turin, Milan, Rome, Verona and Naples.

Abstract: Artificial Intelligence, also known as AI, has become an increasingly popular instrument in the field of arbitration, taking the form of a growing number of tools that help ease the work of counsel and arbitrators alike. However, the ability of AI to perform complex calculations also raises concerns about the explainability of the results it produces in the context of arbitration, where fairness and transparency are paramount at every stage of the proceedings. This article analyses the so-called "black box" issue in international arbitration vis-à-vis some of the key sets of arbitration rules, the New York Convention and the European Union's current proposal for an AI Act.

Artificial Intelligence ("AI") has become an increasingly popular tool in the field of arbitration, most famously for the purpose of automating the decision-making process, but also - and, in practice, more often - for supporting arbitrators and counsel alike in specific phases of the proceedings, from the automated collection and analysis of relevant case law to evidence-processing systems.

However, even in these less controversial fields, the use of AI in arbitration raises concerns about the transparency and fairness of the proceedings.

One of the primary concerns is the so-called "black box" issue in the use of AI in arbitration.

The term "black box" refers to the inability of human beings to fully understand how a decision is elaborated by an AI system. Machine learning, the key technique for training AI systems, consists of feeding the system large datasets, so that the machine can "learn" from a vast number of examples and derive a pattern with which to solve a real-world problem1.

Therefore, the outcome provided by AI tools is often the result of calculations over quantities of data far beyond what a human brain can practically process.

One might argue that this is the very reason why AI exists, yet it is also the very reason why it is extremely difficult for humans to describe the exact logical path followed by an AI in delivering a certain outcome. In the context of arbitration, typically a human-driven activity, this lack of transparency can lead to a loss of confidence in the process and potentially to challenges against the validity of the arbitration award.
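
By way of illustration only, the minimal sketch below (entirely synthetic data and a generic off-the-shelf model, not any specific arbitration tool) shows how a trained system can return an outcome whose "reasoning" is spread over tens of thousands of learned parameters rather than an explicit chain of arguments:

```python
# A purely illustrative sketch of the "black box" point: the model produces an
# outcome, but its internal decision process is not readable as a set of reasons.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 40))      # 5,000 synthetic "cases", 40 features each
y = (X[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=5_000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200).fit(X, y)

new_case = rng.normal(size=(1, 40))
print("outcome:", model.predict(new_case)[0])

# The "explanation" is distributed over 200 trees and tens of thousands of split
# thresholds, none of which reads as a human argument:
print(sum(t.tree_.node_count for t in model.estimators_), "internal decision nodes")
```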

The "black box" in European Union Law: an issue of transparency

Before delving into how the "black box" issue may impact arbitration, it might be useful to explore how it is dealt with in the existing body of European Union law which, for the moment, appears to be the most advanced on this subject on the international plane.2

Notably, European law-makers have paid close attention to the concept of transparency since the earliest attempts to provide AI with a legal framework.

For example, both the White Paper on Artificial Intelligence and the Framework of Ethical Aspects of AI of the European Parliament, issued in 2020, called for specific regulation to overcome the "opaqueness" of AI and prevent discrimination and bias through "explainability" (also known as "interpretability"), yet they delegated the difficult task of turning such concepts into binding provisions to the European Commission, leaving it with few practical indications on how to do so.3

The European Commission has taken up this assignment quite remarkably, by including specific language on AI explainability in the current draft of the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the "AI Act"), first published in April 2021.

After being heavily revised, first by the Council and more recently by the Parliament, the AI Act is currently subject to the so-called "trilogue" negotiation, aimed at finding a common position among the representatives of the law-making bodies of the EU, with final adoption expected to take place by the end of 2023.4

The latest draft of the AI Act takes up the approach of the initial draft issued by the Commission and considerably develops the concept and requirements of AI explainability.

In this regard, it should be noted that art. 13 of the AI Act defines the standard of transparency to be employed in "high risk AI systems" (one of the three levels of AI risk identified in the AI Act, which encompasses the administration of justice, thus indirectly embracing arbitration) by stating that they "shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to interpret the system's output and use it appropriately".

Art. 14 spells out what the principle set out in art. 13 means in practice. In accordance with this provision, an AI system is transparent and explainable when it allows its human supervisor to:

  a) understand its capacities and limitations and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible. In this regard, the latest iteration of the AI Act draft clearly states that AI systems should contain an "emergency brake" to halt the machine in the event of anomalies;
  b) prevent over-reliance on the outcome produced by the machine (the so-called "automation bias"), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
  c) be able to correctly interpret the high-risk output of the AI system;
  d) be able to decide not to use the AI system or otherwise disregard, override or reverse its output.5

In particular, points b), c) and d) are critical for arbitration, a context where one or more human beings might be called upon to take decisions based on information processed by AI, meaning that they should also be able to explain how the related outcome was achieved or even reverse it, if necessary; a schematic sketch of such an oversight loop is set out below.
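
By way of illustration only, the following minimal sketch (all interfaces and thresholds are invented for the example, not taken from the AI Act or from any existing tool) shows the kind of oversight loop these points describe: the system only ever issues a recommendation, halts itself when it behaves unexpectedly, and can always be overridden by the human supervisor:

```python
# A minimal, hypothetical sketch of a human-oversight wrapper around an AI tool:
# recommendations only, an "emergency brake", and a human who can always override.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Recommendation:
    outcome: str
    confidence: float


class OverseenAISystem:
    def __init__(self, model: Callable[[dict], Recommendation]):
        self.model = model
        self.halted = False                     # the "emergency brake" state

    def recommend(self, case: dict) -> Optional[Recommendation]:
        if self.halted:
            return None                         # anomalies detected: tool no longer used
        rec = self.model(case)
        if rec.confidence < 0.6:                # arbitrary threshold flagging unexpected output
            self.halted = True
            return None
        return rec                              # a recommendation, never a binding decision


def arbitrator_decision(rec: Optional[Recommendation], own_view: str) -> str:
    # Point d): the supervising arbitrator may disregard, override or reverse the output.
    if rec is None or rec.outcome != own_view:
        return own_view
    return rec.outcome


# Usage: wrap a (hypothetical) model and keep the final decision with the human.
ai = OverseenAISystem(lambda case: Recommendation(outcome="admit_evidence", confidence=0.9))
print(arbitrator_decision(ai.recommend({"exhibit": "C-12"}), own_view="admit_evidence"))
```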

Can AI be an arbitrator?

One initial aspect that needs to be clarified in analysing the "black box" issue in arbitration is whether it can be eliminated at the source by substituting a human arbitrator with an AI, thereby removing the problem of explaining the related outcome altogether.

In this regard, it should be noted that, while the 1958 New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards does not seem to prescribe that an arbitrator be a natural person, the same Convention expressly provides under art. V(2)(b) that recognition and enforcement of an arbitral award may be refused when the award is contrary to the public policy of the country where enforcement is sought.6

In most jurisdictions, the law appears to prescribe - either directly or indirectly - that only humans can serve as arbitrators, thus leading to the conclusion that a motion to vacate an award issued by an AI-only tribunal would likely be upheld, although the exact reason behind such vacatur may vary depending on the country.7

For example, the Dutch Code of Civil Procedure, the French Code of Civil Procedure and the Portuguese Voluntary Arbitration Law require that an arbitrator be a natural person with full capacity.

Countries such as Vietnam, China, South Korea and Indonesia set out in their arbitration laws specific requirements for qualifying as an arbitrator, such as mandatory experience as a judge or lawyer for a certain number of years, or specialised knowledge in a particular field of law.

In a similar manner, the arbitration laws of Sweden, Finland, Iceland, Egypt and Italy establish that a person must have full capacity to act as an arbitrator: as AI bears no legal personality, in such countries it would not be lawful for an AI to serve as arbitrator.

Furthermore, the human community is reluctant to entrust "sentient" machines with activities crucial to the survival of mankind and to the regulation of life in common, even when such an assignment would lead to significant gains in time, cost and efficiency.

Think of how, for example, although the technological barrier has long since fallen, planes cannot be piloted by artificial intelligence alone, in the absence of a human commander.

From the above, it can be inferred that the dystopian scenario of robots fully taking over the role of decision-makers in arbitration proceedings is unlikely to materialise in the near future.

Yet, what about the case of human arbitrators relying upon the outcome of AI for the purposes of decision making in the various phases of the proceedings?

The "black box" in arbitration

As mentioned above, the issue of the "black box" is particularly relevant in the use of AI for the assessment of evidence and in the decision-making phase.

AI systems can analyse large amounts of data and identify patterns and correlations that may not be immediately apparent to human arbitrators, which is in itself a good thing for justice.

However, the use of AI in this way can also make it difficult for the parties to understand how a decision was actually reached, in particular when that decision is the outcome of the non-transparent processing of an AI application in the course of certain phases of the proceedings, such as:

  • the selection of arbitrators from a roster through a recommendation based on their areas of expertise, a field where discrimination is more likely, given the undeniable bias of the environment towards white male arbitrators;8
  • the evidentiary phase, where the computational abilities of AI might come in handy for functions such as recognising the authenticity of a signature on a document or spotting contradictions between a witness statement and other documents;
  • online hearings, where transcription, instant translation and voice-recognition services are increasingly delegated to AI tools, with the inherent risk of errors arising from wrongful interpretation or misattribution of statements;
  • the legal research phase on which the decision is built, where AI can famously operate in a predictive function by analysing a string of cases and outlining a pattern which might have gone unnoticed by the human eye.

A key aspect common to all these functions is the potential for bias, which is the main problem relating to the black box in the use of AI: AI systems can only make decisions based on the data they are trained on, which means that if the data is biased or vitiated in any way, the decision-making process will also be biased or vitiated.

This can lead to more or less obviously unfair or discriminatory outcomes, particularly in cases where the decision involves sensitive issues such as race, gender, religion or health: for example, would an AI really be able to tell the difference between a forged signature and one made by a person affected by illness?
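
The point can be made concrete with a deliberately tiny, entirely synthetic sketch (invented data and an off-the-shelf model, unrelated to any real arbitration tool): if the historical selections used for training were partly driven by group membership rather than merit, the trained model simply reproduces that pattern.

```python
# Biased data in, biased decision out: a toy illustration on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
group = rng.integers(0, 2, size=n)            # 0 / 1: a sensitive attribute
merit = rng.normal(size=n)                    # the legitimate criterion
# Historical "selections" partly driven by group membership, not only by merit:
selected = (merit + 1.5 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([merit, group]), selected)

same_merit = 0.0
p_group0 = model.predict_proba([[same_merit, 0]])[0, 1]
p_group1 = model.predict_proba([[same_merit, 1]])[0, 1]
print(f"identical merit - group 0: {p_group0:.2f}, group 1: {p_group1:.2f}")
# The gap between the two probabilities is the bias the model inherited from its data.
```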

Even when AI is involved, a judgement is still the sum of "what the judge had for breakfast", albeit a very large one, where detecting which ingredient was used in which recipe can be a challenge.

Think of a case where the decision recommended by the AI is based on thousands of discovery documents originating from different jurisdictions, or a case where a certain witness statement is given more weight than the others due to a pattern in the applicable case law that only the AI was able to identify, and which is therefore undetectable by the arbitrator.

The above scenario must be reconciled with the obligation of arbitral tribunals to provide reasoned awards, which is common to the vast majority of international arbitration rules and national arbitration laws.9

Absent this element, an award elaborated with the key contribution of an AI system could potentially be refused enforcement, or be subject to annulment where such a mechanism is provided by the relevant arbitration rules (for example, under the ICSID Arbitration Rules10).

In order to prevent such a scenario from materialising, it is for arbitrators and arbitral institutions to prioritise transparency in the development and use of AI systems in arbitration by making a reasoned selection of the tools to be employed in the course of the proceedings, no differently from what lawyers and other professionals are required to do when advising a client.

Far from being a generic recommendation, this concept is soon likely to translate into hard law obligations aimed at qualified users, who are addressees of specific obligations under the AI Act.

In this regard, the current version of the AI Act seems to identify at least three levels of obligations which might be applicable to the use of this technology by arbitrators:

  1. choosing the AI system to be implemented: pursuant to art. 29.1 and 29.2 of the AI Act, the user (called "deployer" in the Parliament's version of the draft) is required to implement the technical measures necessary to follow the instructions accompanying the AI system, including appointing a "competent" human for oversight. This, in a nutshell, means that the approach of arbitrators and arbitral institutions to AI cannot be that of a mere software licensee, for it requires real technical insight into the tool, in order to interpret the instructions, understand how they operate in practice and ensure that the system is used in a manner compliant with the law, starting with verification of the system's compliance by default. In addition, it is worth noting that if an arbitral institution acts as a supplier of proprietary AI, or modifies an AI system to which it has subscribed, it will also be subject to the numerous other obligations applying to AI producers;

  2. operating the selected AI: art. 29.3, 29.4 and 29.6 of the AI Act state that the user must ensure that (i) the data entered into the AI system is relevant for the purposes for which the system was adopted, carrying out a data protection impact assessment (DPIA) pursuant to art. 35 GDPR where necessary, and (ii) the operations of the AI system are always supervised on the basis of the instructions, with the distributor or supplier being promptly notified where, in particular, the user becomes aware of a risk to the health, safety or fundamental rights of the subjects involved.

    This, however, puts the arbitrator in the difficult position of having to make sure that the training data entered into the AI (i) have a lawful origin and are processed in accordance with the law, (ii) are qualitatively suitable for the function entrusted to the AI, and (iii) do not violate the rights of third parties and, in particular, do not contain inherent bias;

  3. implementing the output of AI in an arbitration: this stage pertains to the verification of possible incidents, including those generated by bias. In this regard, art. 29.5 of the AI Act requires the user to keep the logs (automatic event recordings) generated by the AI system for a duration compliant with applicable law and, in parallel, requires AI producers to equip their systems with automatic log-recording mechanisms by default. It will be very interesting to see how such an obligation and the possible use of the recorded data as evidence (for instance, in the reasoning of an award or in annulment proceedings) might apply in practice; a schematic sketch of such an audit log is set out below.
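
Purely by way of illustration, the following minimal sketch (the file name, event types and fields are invented for the example; the AI Act does not prescribe any particular format) shows the kind of automatic, timestamped event log that art. 29.5 presupposes, recording what went into the system, what came out and what the arbitrator ultimately did with it:

```python
# A hypothetical audit log for an AI tool used in proceedings: every input, output
# and human override is appended as a timestamped JSON line for later retention.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")          # hypothetical retention location


def log_event(event_type: str, payload: dict) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,                   # e.g. "input", "output", "human_override"
        "payload": payload,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


# Example: record what went in, what came out, and how the arbitrator dealt with it.
log_event("input", {"document_id": "EXH-042", "task": "signature_authenticity"})
log_event("output", {"recommendation": "authentic", "confidence": 0.87})
log_event("human_override", {"arbitrator": "chair", "final_finding": "inconclusive"})
```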

Independently of what the final text of the AI Act will be, the standard of diligence for the qualified user implied in its principles seems to go much further than that usually applied, and to give a deeper meaning to the obligation to provide understandable and reasoned awards, implying that an entirely new degree of diligence be demanded of arbitrators and arbitral institutions alike, at the risk of expanding the scope of the current grounds for denying enforcement of an award.

Conclusions

It is very hard to predict the exact scale of the impact that AI might produce in arbitration.

Yet, it is very apparent that its potential uses are many and that they could improve the way proceedings are currently conducted in several respects: cost reduction, expeditiousness and fairness could definitely be among the upsides of a correct use of AI in arbitration.

Indeed, one might argue that the role AI implicitly plays in this context is that of unearthing biases and irregular patterns that have long gone unnoticed, thus forcing practitioners to address them once and for all.

Yet, for this very reason, its implementation cannot be conducted lightly or without a proper set of technical skills.

This seems to be the overarching principle behind the choice that the current draft legislation on AI at the EU level is bound to make: separating the production stage of AI from the use stage, without relieving qualified users of a certain degree of responsibility for the way AI is implemented in high-risk contexts.

Indeed, just as a surgeon has to carefully choose and handle their tools while working on a human body, so must arbitrators and arbitral institutions when choosing to use a certain AI application in a critical phase of the proceedings.

In this regard, it is worth noticing how the new draft of the AI Act contains some interesting proposals which point in the direction of lightening the burden on users by putting the accent on compliance by default.

Some of these aspects will directly touch the field of arbitration and of the administration of justice, such as the obligation of deployers who intend to adopt a high-risk AI system to carry out a fundamental rights impact assessment11, or the obligation of producers to adopt specific schemes of compliance by certification, which will also be accompanied by the possibility of obtaining a preliminary decision by the competent supervisory authority on the riskiness of the AI system to be put on the market.12

These instruments might play a very valuable role when it comes to proving the ability of an AI system to deliver unbiased and unvitiated results, possibly being trained not only to produce a certain outcome, but also to provide a human-digestible explanation of how that result was obtained (for example, outlining exactly how a certain precedent was relevant to reaching a decision on a certain piece of evidence), thus leaving the "black box" issue, as far as possible, at the doors of the proceedings and mitigating the risk of an uncontrolled expansion of the grounds for vacating arbitral awards.
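
What such a human-digestible explanation could look like in practice can be sketched, very schematically, with a deliberately simple model whose per-factor contributions are listed alongside its output (the feature names and data below are invented for the example and do not correspond to any real tool):

```python
# A toy "explanation" attached to an output: each factor's contribution to the score.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["precedent_similarity", "witness_consistency", "signature_match"]

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                  # synthetic training "cases"
y = (X @ np.array([1.2, 0.8, 0.3]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

case = np.array([[0.9, -0.2, 0.4]])            # a new, invented case
contributions = model.coef_[0] * case[0]       # per-factor contribution to the score
print("recommended finding:", model.predict(case)[0])
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")               # which factor weighed how much, and which way
```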

Footnotes

1. C. Panigutti, R. Hamon, I. Hupont, D. Fernandez Llorca, D. Fano Yela, H. Junklewitz, S. Scalzo, G. Mazzini, I. Sanchez, J. Soler Garrido, E. Gomez, The role of explainable AI in the context of the AI Act, FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, June 2023, p. 2: "the term black-box AI models usually refers to models that are, from a technical point of view, too big or too complex to be human understandable or, more generally, to systems whose internal decision-making processes are opaque. Often the opacity of models is used as a technical term to describe the degree of their black-boxiness".

2. The Chinese State Council first established the "Next Generation Artificial Intelligence Development Plan" in 2017. In 2021, ethical guidelines for dealing with AI were published. Then, in January 2022, China published two laws relating to specific AI applications. While the provisions on the management of algorithmic recommendations of internet information services (Algorithm Provisions) have been in force since March 2023, the provisions on the management of deep synthesis of internet information services (Draft Deep Synthesis Provisions) are still at the draft stage. In October 2022, the US government published the Blueprint for an AI Bill of Rights, which should constitute a proposal of basic principles on which the negotiation on an actual hard-law instrument on the matter should take place: https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

3. EU Commission White Paper on Artificial Intelligence - A European approach to excellence and trust, pp. 11-12: https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf

4. The current version of the AI Act under discussion is constituted by the text as last amended by the European Parliament on 14 June 2023: Artificial Intelligence Act, Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf.

5. Ibid., art. 13-14.

6. Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention), United Nations, 1958, available at: https://uncitral.un.org/sites/uncitral.un.org/files/media-documents/uncitral/en/new-york-convention-e.pdf

7. G.H. Kasap, Can Artificial Intelligence ("AI") Replace Human Arbitrators? Technological Concerns and Legal Implications, in Journal of Dispute Resolution, Issue 2, Article 5, 2021.

8. A 2021 report of the American Association for Justice highlights how the arbitrators' population is vastly male (77%) and white (88%), thus leading to inevitable biases in the composition of panels: https://www.justice.org/resources/research/forced-arbitration-hurts-women-and-minorities

9. ICC Arbitration Rules, art. 32; LCIA Arbitration Rules, art. 26.8; UNCITRAL Arbitration Rules, art. 34.3; SIAC Rules, art. 5.2 e).

10. ICSID Arbitration Rules, art. 50 c), clearly mentions that an award can be annulled when it fails to state the reasons on which it is based.

11. This assessment should include at least the following elements: (1) a clear outline of the intended purpose for which the system will be used; (2) a clear outline of the intended geographic and temporal scope of the system's use; (3) categories of natural persons and groups likely to be affected by the use of the system; (4) verification that the use of the system is compliant with relevant Union and national laws on fundamental rights; (5) the reasonably foreseeable impact on fundamental rights of using the high-risk AI system; (6) specific risks of harm likely to impact marginalised persons or vulnerable groups; (7) the reasonably foreseeable adverse impact of the use of the system on the environment; (8) a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated; and (9) the governance system the deployer will put in place, including human oversight, complaint-handling and redress.

12. Supra 4, art. 32 (a).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
