The AAA-ICDR's recent launch of its 'AI-native arbitrator' for documents-only construction disputes illustrates how artificial intelligence (AI) is reshaping the arbitration landscape.
What is the tool?
The 'AI Arbitrator' – as the AAA-ICDR refers to it – aims to help a human arbitrator deliver faster, more cost-effective and transparent resolution of disputes. Initially available for two-party, documents-only construction cases, the tool seeks to enable arbitrators to use AI to accelerate the preparation of an award. The AAA-ICDR says it has trained a model on over 1,500 real construction arbitration awards and refined its outputs further using expert-labelled examples. The tool is advertised as being able to assess claims, generate recommendations and produce first drafts of awards.
Importantly, the AAA-ICDR highlights that this is not a service intended to fully automate preparation of awards. The tool is said to operate within a "human-in-the-loop" framework which involves the following steps:
- Parties upload case materials and validate the AI's understanding of their submissions.
- The AI then applies "legal reasoning" to draft a proposed award.
- A trained human arbitrator reviews, refines and issues the final decision.
Risks and ramifications: an exciting innovation to be treated with caution
While the AAA-ICDR's desire to innovate is laudable, this new offering should be considered with caution. The tool's current scope is deliberately narrow, as it is not designed for complex, multi-party disputes or cases involving witness testimony. In a construction context, the AI Arbitrator's narrow remit positions the tool more as a potential competitor to interim forms of dispute resolution, such as adjudication or Dispute Adjudication Boards (DABs), where speed and cost-efficiency are paramount.
While the introduction of accelerated, albeit human-backed, decision-making is an exciting development, users need to remain conscious of the current capabilities and limitations of generative AI. Generative AI tools, even best-in-class ones, are not yet capable of autonomously performing nuanced legal analysis. The phenomenon of "hallucinations", outputs that are factually incorrect or fabricated, is not a malfunction but an inherent attribute of current AI models. This underscores the importance of preserving the integrity of the decision-making process, with arbitrators needing to apply independent legal reasoning to every case.
Regulatory and ethical guidance: guardrails for responsible use
Recent guidance from US courts and arbitral institutions reflects a growing acceptance of AI in legal proceedings, but with clear boundaries. The AAA-ICDR's own March 2025 guidance cautions against over-reliance on AI and emphasises that arbitrators must:
- Critically evaluate AI outputs for accuracy and reliability.
- Maintain fairness and due process, ensuring AI enhances rather than compromises the arbitration process.
- Retain independent decision-making, using AI as a support tool, not as a substitute.
- Disclose the use of AI when it materially impacts the arbitration process or reasoning.
In July 2025, the AI Task Force of the Administrative Office of the United States Courts issued interim guidance to the US federal judiciary, which cautioned judges and their staff against "delegating core judicial functions to AI, including decision-making or case adjudication", and warned judges that they are "accountable for all work performed with the assistance of AI".
These admonitions apply with equal force to arbitrators, as is clear from the guidance now emanating from arbitral institutions, including the AAA-ICDR and the Silicon Valley Arbitration and Mediation Center. In parallel, the European Union's Artificial Intelligence Act classifies AI systems used in the administration of justice as "high-risk", subjecting them to stringent compliance requirements. These provisions would extend to automated decision-making in arbitration.
Comment
The AAA-ICDR's AI Arbitrator is an interesting innovation, tailored for disputes that are inherently document-heavy, particularly in construction, where timely resolution of disputes is essential. Its "human-in-the-loop" design is intended as a key safeguard, but its effectiveness depends on how actively arbitrators engage with and scrutinise the underlying case materials, rather than simply being led by AI-generated outputs. Questions also remain as to how arbitrators can effectively review or police the output of a generative AI tool while retaining the time-saving advantages the tool is intended to deliver.
Generative AI still lacks legal reasoning as we know it and the contextual sensitivity required for complex disputes. Over-reliance could compromise procedural fairness and, in a worst-case scenario, potentially fall short of enforcement standards under instruments like the New York Convention.
Recent guidance from arbitral institutions, the US judiciary and the EU's AI Act reinforces the need for transparency, accountability, and human oversight. These frameworks make clear that while AI can enhance dispute resolution, it must operate within strict ethical and regulatory boundaries. The aim is not to replace arbitrators, but to responsibly augment their capabilities.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.