ARTICLE
3 December 2024

The Speculations And Realities Of AI In Arbitration: A Fine Line Comment

Khaitan & Co LLP
  1. INTRODUCTION

At a recent conference of the Singapore International Arbitration Centre (SIAC) held in Mumbai, the Chief Justice of Singapore, Sundaresh Menon, underscored the potential of Artificial Intelligence ("AI") to resolve significant arbitration concerns, especially regarding cost and complexity.1 Justice Menon highlighted the role of generative AI as a facilitator of efficiency while lowering costs. He identified two major uses of AI – firstly, to improve traditional legal tasks such as research and drafting, and secondly, to assist arbitrators in managing increasing volumes and complexity of information.

There is no universally endorsed definition of AI, but many have endeavoured to define it. Notably, in a classic definition, John McCarthy2 referred to AI as "the science of making smart machines". Ryan Calo3, on the other hand, referred to AI as "a machine that mimics human and animal cognition."

There is no disputing that AI has affected daily life, particularly following the COVID-19 pandemic, which sent the world into a virtual lockdown. The pandemic acted as a catalyst, ushering business, legal proceedings, and education onto virtual platforms. This rapid uptake of technology has not been limited to daily life: seasoned scholars of arbitration, too, have experimented with AI as a component of arbitration. Some welcomed the inclusion of AI as an effective innovation, while others took more conservative attitudes. This paper explores the speculations and realities of integrating AI into arbitration.

  2. THE DATA CONUNDRUM: CONFIDENTIALITY OF ARBITRATION AND LIMITED RESOURCES

The effectiveness of an AI arbitrator will largely depend on the amount of data it processes. To generate reliable predictions, an AI arbitrator needs to analyse a substantial amount of data to establish generalisable rules for new scenarios.4 This is known as machine learning. However, obtaining such large data sets is problematic in the context of arbitration. Unlike other fields, arbitration, both domestic and international, often suffers from limited data availability. Most arbitral Awards are not published for reasons of confidentiality of the arbitral process, and those that are available tend to be heavily redacted. Even if arbitral Awards were more accessible, their volume is relatively low: leading arbitral institutions handle a limited number of cases annually, which suggests a proportionately small number of Awards issued.5 For AI to be effective, it typically requires far more extensive datasets to produce reliable results.
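
The data-volume point can be made concrete with a back-of-the-envelope sketch (the figures here are invented for illustration and are not drawn from the article's sources): the margin of error of any statistical estimate shrinks only with the square root of the sample size, so a corpus of a few dozen published Awards supports far weaker inferences than the datasets machine learning methods usually assume.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated from n observations."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: suppose 60% of sampled Awards favour the claimant.
few_awards = margin_of_error(0.6, 50)       # a small published-award corpus
large_corpus = margin_of_error(0.6, 50000)  # the scale ML methods usually assume

print(f"n=50:    +/- {few_awards:.1%}")     # roughly +/- 14 percentage points
print(f"n=50000: +/- {large_corpus:.1%}")   # well under 1 percentage point
```

On fifty Awards, the estimate is uncertain by more than thirteen percentage points either way; only at tens of thousands of observations does it tighten to the precision predictive systems typically rely on.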

  3. THE AI ARBITRATOR: THE LEGAL TENABILITY OF APPOINTING AI-ARBITRATORS

The question of whether AI can serve as an arbitrator under the Indian arbitration framework is a subject of rapidly growing debate. While there are no explicit provisions in Indian law that either permit or disqualify AI from acting as an arbitrator, a close reading of Section 11 of the Arbitration and Conciliation Act, 1996 (hereinafter referred to as the "Arbitration Act") leaves room for the possibility. Section 11(1) allows "a person of any nationality" to be an arbitrator unless the parties agree otherwise, while Section 11(2) grants parties the freedom to agree on a procedure for appointing arbitrators. Neither provision expressly limits arbitrators to natural persons, which leaves room for interpretation.

International arbitration frameworks have generally leaned toward natural persons as arbitrators. For instance, Article 1450 of the French Code of Civil Procedure explicitly defines an arbitrator as a natural person, and Article 3 of the Scottish Arbitration Rules follows suit. Similarly, the UNCITRAL Model Law and the ICC Rules impliedly favour natural persons, as nationality and legal capacity are key criteria for arbitrators, both of which AI, as of now, lacks.

Further, while traditional legal interpretations favour human arbitrators, there is no explicit prohibition against AI-based arbitrators in Indian law or international conventions like the New York Convention.

A key term here is "nationality", as it may extend beyond natural persons to include artificial entities. In State Trading Corporation v. Commercial Tax Officer6, the Supreme Court of India distinguished between "nationality" and "citizenship." The Supreme Court held that "nationality" refers to the legal relationship between a state and a person, whether natural or artificial, whereas "citizenship" applies only to natural persons. This broader interpretation suggests that "nationality" could encompass legal entities, including corporations, under international law. However, this still leaves open the question of whether AI, as a non-human entity, could be similarly recognised.

Section 3(42) of the General Clauses Act, 1897, defines a "person" in broad terms, including legal entities beyond natural persons. While this might appear to widen the scope of who can serve as an arbitrator, AI does not yet fit into the established categories of artificial persons, such as companies, that enjoy legal recognition. AI lacks the established legal standing required to possess rights or duties.

Section 11(2) nonetheless offers flexibility, as parties are free to establish the procedure for arbitrator appointments in their agreement. In this context, if the parties agree to appoint an AI arbitrator, it is arguable that such an appointment could be valid under the Indian framework. However, such validity does not follow automatically, as AI lacks legal personhood.

  4. SIGNATURE REQUIREMENT: LACK OF UNIFORMITY

AI's inability to provide a legally recognised attestation under the Arbitration Act raises critical concerns about its viability as an arbitrator. Under Section 31(1) of the Arbitration Act, arbitral Awards must be made in writing and signed by the members of the Arbitral Tribunal. This signature requirement ensures the authenticity and enforceability of the Award. Similarly, the New York Convention mandates that an arbitral Award must be duly authenticated or certified to be recognised and enforced by Courts, a provision mirrored in Article 31 of the UNCITRAL Model Law.

However, AI's lack of legal personhood complicates this requirement. AI, being a non-human entity, cannot physically sign or attest an arbitral Award. This creates a fundamental legal gap, as attestation is a crucial element for the validity and enforceability of an Award. Without a signature, an AI-rendered Award could be challenged on the grounds that it fails to meet the necessary formalities for recognition.7 We have previously argued that cryptographic signatures used in blockchain transactions cannot be equated to physical signatures as they do not fulfil the same purpose of rightful authentication that physical signatures provide.8
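
The distinction drawn above between cryptographic and physical signatures can be illustrated with a minimal sketch (the award text and amount are hypothetical): a hash digest proves that a document has not been altered, but by itself says nothing about who authored or approved it, which is the attestation function a signature is meant to serve.

```python
import hashlib

award_text = "The Tribunal awards the Claimant INR 10,00,000."  # hypothetical text

# A digest binds to the exact content: any alteration changes it completely.
digest = hashlib.sha256(award_text.encode()).hexdigest()
tampered = hashlib.sha256((award_text + " ").encode()).hexdigest()

print(digest != tampered)  # integrity is detectable...
# ...but nothing in the digest identifies the arbitrator who issued the Award.
```

Integrity and identity are therefore separate guarantees; the article's argument is that the former alone does not satisfy the attestation formality.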

Additionally, signature requirements in the context of e-contracts and smart contracts have been a subject of deliberation for some time. In the United States, for instance, the Uniform Electronic Transactions Act, 1999, imposed regulations on electronic contracts, records and signatures, specifying that electronic contracts would be valid and that the use of electronic signatures was a valid way of providing contractual consent. Interestingly, as we have argued earlier, smart contracts in the context of blockchain transactions are "formulated in a self-automated and coded format" wherein digital signatures are not mandated for the enforcement of such contracts.9 In that sense, the requirement for signatures becomes circumstantial: e-contracts, legislatively, require signatures, whereas online dispute resolution platforms such as Kleros function on self-automated smart contracts where digital signatures are dispensable, and this dispensability extends to the Awards rendered on them as well. In fact, because blockchain arbitration jurors cannot sign the award owing to their anonymity, the parties cannot be served signed copies of the awards.10

While some argue that pairing AI with human Arbitrators could address the attestation issue, this approach introduces further complications.11 If an arbitral panel consists of human and AI Arbitrators, questions arise regarding the resolution of conflicting opinions and how the AI's decision-making process is to be explained or authenticated. Further, the use of distinctive marks or stamps for AI-generated Awards might offer a solution but this remains a speculative fix rather than a legally recognised method.12

  5. PUBLIC POLICY REQUIREMENT

(i) Precedential developments and position of law

Sections 34 and 48 of the Arbitration Act address the processes for challenging arbitral Awards and enforcing foreign Awards, respectively. Both sections include public policy as one of the grounds for contesting either the validity or enforcement of an Award. In Renusagar Power Company Ltd. v. General Electric Company13, the Supreme Court held that the term "public policy" in Section 7(1)(b)(ii) of the Foreign Awards (Recognition and Enforcement) Act, 1961, should be understood in a narrow sense. The Supreme Court held that the enforcement of foreign Awards could be refused on public policy grounds only if they contravened (1) the fundamental policy of Indian law, (2) the interests of India, and (3) principles of justice or morality.

This narrow interpretation was later challenged in ONGC Ltd. v. SAW Pipes Ltd.14 ("ONGC"), where the Supreme Court expanded the meaning of "public policy" under Section 34 of the Arbitration Act. The Supreme Court held that public policy includes matters of public good and interest. Thus, an Award that blatantly violates statutory provisions could be deemed contrary to public interest and, hence, public policy.

The Supreme Court's approach shifted again in Phulchand Exports Ltd. v. OOO Patriot15, where it was held that the broader test established in ONGC should apply to foreign Awards.

However, this was subsequently reversed in Shri Lal Mahal Ltd. v. Progetto Grano Spa16, where the Court reinstated the more restrictive criteria from the Renusagar case. The Supreme Court held that the "public policy" ground should not be used to review the merits of the Award but should be confined to violations of fundamental principles.

The Arbitration and Conciliation (Amendment) Act, 2015 ("Amendment Act"), introduced changes to Section 34 of the Arbitration Act. The Amendment Act aimed to restrict judicial interference in arbitral Awards and clarified that a review of the merits of the dispute is not permissible under the guise of public policy. Explanation 2 added to Section 34(2) stated that the test for contravention of the fundamental policy of Indian law should not involve a review of the merits of the dispute.

In Associate Builders v. Delhi Development Authority17, the Supreme Court held that for an Award to be set aside on grounds of public policy, it must shock the conscience of the Court or contravene the mores of society to such an extent that it undermines fundamental justice and morality.

(ii) Accommodating the AI Arbitrator under the realm of public policy

AI's application in decision-making is still nascent, and its integration into arbitration presents challenges concerning public policy. One major issue is the absence of human qualities in AI, such as emotions and empathy, which play a role in legal decision-making. Further, the "black box" nature of AI, where its decision-making process is not fully transparent, could undermine the principles of due process.18 While the decision-making process of AI is opaque, Section 31(3) of the Arbitration Act mandates that the Arbitrator state in the arbitral Award the reasons upon which it is based.

There are several potential methods by which AI could 'reason': through arithmetic asset division, decision trees based on input rules, case-based reasoning similar to common law precedent, or extensive data mining. For AI to be effective in arbitration, programmers would need to select the most suitable reasoning method or a combination thereof. Parties involved in arbitration would likely want to know which reasoning method an AI will employ in their case. This selection process would probably involve surveys, studies, and recommendations from legal counsel.
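
One of the methods listed above, a decision tree built from input rules, can be sketched in a few lines (the rules and claim elements here are invented for illustration). Its appeal for the reasoned-award requirement is that every branch taken can be recorded as an explicit reason:

```python
def decide(contract_valid: bool, breach_proven: bool, loss_documented: bool):
    """Toy rule-based decision tree that records each branch taken as a stated reason."""
    reasons = []
    if not contract_valid:
        reasons.append("No valid contract was established; the claim fails at the threshold.")
        return "claim dismissed", reasons
    reasons.append("A valid contract was established.")
    if not breach_proven:
        reasons.append("Breach was not proven on the evidence.")
        return "claim dismissed", reasons
    reasons.append("Breach of contract was proven.")
    if loss_documented:
        reasons.append("Loss was documented; damages are awarded in full.")
        return "damages awarded", reasons
    reasons.append("Loss was not documented; only nominal damages are awarded.")
    return "nominal damages", reasons

outcome, trace = decide(True, True, False)
```

Unlike a trained statistical model, such a tree is fully transparent, but it can only encode rules its programmers anticipated in advance, which is precisely the trade-off the selection process described above would have to weigh.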

At present, the development stage of AI appears incompatible with the needs of arbitration due to its lack of transparency in reasoning. Even when parties agree to arbitration ex aequo et bono, unreasoned arbitral Awards are exceedingly rare. Parties could choose a simpler and cheaper method, like a coin flip, but they value the reasoning process as it imparts a sense of fairness.19 Therefore, this political aspect of choosing one dispute resolution mechanism over another endures over time.

  6. BIAS: A LEGAL AND PRACTICAL INSEPARABILITY

AI systems may suffer from data biases, as algorithms are trained on data that may inherently contain discriminatory elements related to race, gender, or other factors. AI has faced significant criticism for perpetuating and amplifying biases present in its training data. Instances such as Google's photo tagging issue20, biased predictive policing software21, and discriminatory job advertising on LinkedIn22 illustrate how AI systems can reflect and exacerbate societal biases. These problems arise from the fact that machine learning algorithms learn from historical data, which may contain inherent biases.

If the training data itself is biased, the AI algorithm will encode and perpetuate these biases. A notable example is Amazon's recruiting algorithm, which showed gender bias because it learned that male candidates were preferred based on the predominance of male resumes. This bias reflected the broader gender imbalance in the tech industry.
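
The mechanism described above can be reproduced with a toy dataset (the figures are invented and deliberately skewed): a naive model that scores candidates by historical hiring rates simply reproduces the imbalance in its training data.

```python
from collections import Counter

# Invented, deliberately imbalanced "historical" outcomes.
history = ([("male", "hired")] * 80 + [("male", "rejected")] * 60 +
           [("female", "hired")] * 10 + [("female", "rejected")] * 50)

def hire_rate(gender: str) -> float:
    """Fraction of historical candidates of this gender who were hired."""
    outcomes = [outcome for g, outcome in history if g == gender]
    return Counter(outcomes)["hired"] / len(outcomes)

# A scorer fitted to this history rates male candidates higher purely
# because the past did; nothing in the learning step corrects the skew.
print(hire_rate("male"), hire_rate("female"))
```

Any debiasing step has to be engineered in deliberately; the learning procedure itself treats the historical skew as signal, not error.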

AI arbitrators might inherit biases from past arbitration Awards. For instance, if historical Awards consistently favour companies over consumers, an AI system could replicate this bias, disadvantaging consumers.

Efforts to address AI bias include initiatives like Diversity.ai and Open.ai, which aim to incorporate diversity checks into AI systems.23 These programs seek to identify and correct biases by analysing large datasets and testing algorithms for fairness. However, these diversity-checking programs are still in their early stages, and their effectiveness in eliminating biases remains uncertain. It is, therefore, not surprising that even those developing AI struggle to explain its workings and the reasons behind certain outputs.24 We may need to accept that we cannot effectively control or manage something we do not fully understand.

Due Process

A related enquiry that emerges is: Can AI uphold the equal right to be heard and due process? Section 18 of the Arbitration Act mandates that the parties shall be treated with equality and each party shall be given a full opportunity to present its case. The AI Arbitrator could potentially be designed to allocate equal time to both parties during hearings, limit submissions to a pre-agreed number of pages, notify parties of their chance to respond to evidence and submissions, and monitor delays in the procedural schedule. However, can an AI effectively handle 'guerrilla' tactics that may arise?25 The adaptability that human Arbitrators bring to arbitration—a key reason parties opt for this method—might be diminished. If an AI Arbitrator lacks the flexibility to adjust procedures as needed, the anticipated cost savings from using an AI arbitrator might not materialise.

When faced with a novel dispute lacking meaningful precedent and limited evidence, the challenge for a Judge or Arbitrator is to reach a fair decision despite the constraints. In such scenarios, the limited data available can restrict the accuracy of AI's predictions and current AI systems struggle to perform the role of a Judge or Arbitrator effectively. From a realpolitik perspective, as Thrasymachus suggests in Plato's Republic, "justice is the advantage of the stronger." In such a framework, if parties use AI to assist in formulating their cases, the better-trained and more sophisticated AI may give an advantage to the wealthier party.26

  7. CONCLUSION

AI has evolved to the point where machines can analyse historical cases to predict dispute outcomes, and its use in this area is growing. Nevertheless, several challenges limit the adoption of AI in arbitration. These include the scarcity of available arbitral data, the technical constraints of AI, and its lack of emotional intelligence. Arbitration deals with the nuances of parties' motives, struggles, and expectations, elements that require a high level of emotional insight that AI currently lacks.

The newly introduced Silicon Valley Arbitration and Mediation Centre's (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration include a directive in Guideline 6:

"An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator's decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence."

Therefore, the guideline itself makes clear that while arbitrators may consult AI tools, they must not rely on AI as a substitute for their personal judgment, discretion, responsibility, and accountability.

In the Southeast Asian context, the use of AI in law enforcement is not unheard of. For instance, the Road Transport Department of Malaysia has deployed an AI violation-detection camera system, installed at selected traffic lights as a pilot project to detect motorists' behaviour.27 While this system is currently used for policy-making purposes, it is expected to develop and expand into the realm of imposing traffic fines and other consequences for violations. In effect, the regional vision for AI-based systems has even envisaged penal consequences.

The current arbitration framework in India is designed with human decision-makers in mind. Implementing AI arbitrators without a robust legal framework for their development, design, and application could damage arbitration's reputation and undermine its effectiveness as a dispute resolution method. Therefore, we propose that India introduce regulations mandating that AI systems be tested for biases to ensure fairness and impartiality in arbitration outcomes. This approach should incorporate provisions similar to the AAA Guidelines for the Use of Artificial Intelligence in Arbitration (2024) and the SIAC AI Guidelines, both of which emphasise the need to identify and mitigate algorithmic biases to uphold justice and equality in legal proceedings.

To ensure transparency and explainability, India should adopt regulations mirroring the EU AI Act and AAA Guidelines. These regulations would require clear disclosure of AI's role in arbitration and mandate that the AI's decision-making process be understandable to all parties involved.

India should also stress the importance of human oversight in AI-assisted arbitration by adopting regulations similar to those found in the AAA Guidelines and the SIAC AI Guidelines. This would ensure that human arbitrators retain ultimate authority over AI-generated decisions.

Footnotes

1. Sahyaja MS, 'AI Can Make Arbitration Efficient and Cost-Effective: Singapore Chief Justice Sundaresh Menon' (Bar & Bench) https://www.barandbench.com/news/ai-can-make-arbitration-efficient-and-cost-effective-singapore-chief-justice-sundaresh-menon accessed 17 September 2024

2. John McCarthy, What Is Artificial Intelligence?, 2007

3. Calo, Ryan, Artificial Intelligence Policy: A Primer and Roadmap (August 8, 2017). Available at SSRN: https://ssrn.com/abstract=3015350 or http://dx.doi.org/10.2139/ssrn.3015350

4. Karl Manheim & Lyric Kaplan, Artificial Intelligence: Risks to Privacy and Democracy, 21 YALE J. L. & TECH. 106, 122 (2019)

5. Gizem Halis Kasap, Can Artificial Intelligence ("AI") Replace Human Arbitrators? Technological Concerns and Legal Implications, 2021 J. Disp. Resol. (2021)

6. 1963 AIR 1811

7. Onyefulu, Annabelle O., 'Artificial Intelligence in International Arbitration: A Step Too Far?', Arbitration: The Int'l J. of Arb., Med. & Dispute Mgmt 89, no. 1 (2023): 56-77.

8. Aseem Chaturvedi & Arpit Kumar Singh, Advent of Blockchain Arbitration in the Current Arbitration Ecosystem, Indian Business Law Review Volume 2 Issue 1

9. Aseem Chaturvedi and Trishala Trivedi, 'The Blockchain Arbitral Order: An Indian Perspective', Mondaq, last accessed on 25 September 2024.

10. Raghav Saha and Harshit Upadhyay, 'Blockchain Arbitration in India: Adopting the Hybrid Model Envisaged by Mexican 'Kleros' Case', IndiaCorpLaw, last accessed on 25 September 2024.

11. Cristina Ioana Florescu, 'The Interaction Between AI (Artificial Intelligence) and IA (International Arbitration): Technology as the New Partner of Arbitration'.

12. Ng (Huang Ying) & Benedetti del Rio, When the Tribunal Is an Algorithm: Complexities of Enforcing Orders Determined by a Software Under the New York Convention, 121 Kluwer 34 (2019).

13. 1994 AIR 860

14. (2003) 5 SCC 705

15. (2011) 10 SCC 300

16. (2014) 2 SCC 433

17. (2015) 3 SCC 49

18. 'Arbitration Tech Toolbox: Looking beyond the Black Box of AI in Disputes over AI's Use - Kluwer Arbitration Blog' (Kluwer Arbitration Blog) https://arbitrationblog.kluwerarbitration.com/2023/05/25/arbitration-tech-toolbox-looking-beyond-the-black-box-of-ai-in-disputes-over-ais-use/ accessed 17 September 2024

19. Supra 7.

20. 'Google Apologises for Photos App's Racist Blunder' (BBC News) https://www.bbc.com/news/technology-33347866 accessed 17 September 2024

21. Will Douglas Heaven, 'Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.' (MIT Technology Review) https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ accessed 17 September 2024

22. IANS, 'LinkedIn Forced to Change Global Policy over Job Posting Removal' (30 March 2022) https://www.business-standard.com/article/companies/linkedin-forced-to-change-global-policy-over-job-posting-removal-122033000518_1.html accessed 17 September 2024

23. Zowghi, D. and Bano, M., 'AI for All: Diversity and Inclusion in AI', AI and Ethics (2024)

24. 'How Does AI Make Decisions We Don't Understand? Why Is It a Problem? | Built In' https://builtin.com/artificial-intelligence/ai-right-explanation accessed 17 September 2024

25. Supra 5

26. 'Arbitration Tech Toolbox: Let's Chat Some More about ChatGPT and Dispute Resolution - Kluwer Arbitration Blog' (Kluwer Arbitration Blog) https://arbitrationblog.kluwerarbitration.com/2023/04/08/arbitration-tech-toolbox-lets-chat-some-more-about-chatgpt-and-dispute-resolution/ accessed 17 September 2024

27. 'AI Eye on Errant Drivers at Traffic Lights', The Star.

The content of this document does not necessarily reflect the views / position of Khaitan & Co but remain solely those of the author(s). For any further queries or follow up, please contact Khaitan & Co at editors@khaitanco.com.
