Without question, artificial intelligence ("AI") will revolutionize the practice of civil litigation and administrative law in the next decade.

AI has the potential to promote access to justice, as well as efficiencies in the way disputes are resolved. That being said, without proper oversight, the use of AI in litigation is fraught with risk.

Litigators relying on AI, particularly for the more challenging tasks of written advocacy, could inadvertently mislead the Court by misrepresenting the state of the law or evidence in a case. The capacity for AI to experience "hallucinations" during the preparation of a case is real and the effects on the administration of justice could be catastrophic.

In this context, the need for guidelines in Ontario on the use of AI by lawyers, administrative decision-makers, and judges has never been more pressing.

A. The Delegation of Administrative Decision-Making

One of the central tenets of administrative law is that access to justice is improved through the State's delegation of decision-making authority to tribunals and others.

In practice, however, there is an incongruity between aspiration and achievement. Canadian tribunals and administrative decision-makers, like Courts, face heavy workloads and backlogs.

AI has the potential to alleviate the burden on the administrative state, by, among other things, assisting in the making of tribunal decisions.

But how far can a tribunal go in relying on AI to assume its adjudicative function?

A recent decision of the Federal Court, Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464, sheds light on how Canadian Courts may approach issues of fairness relating to the use of AI in administrative adjudication.

In Haghshenas, the Federal Court upheld a decision by a Canadian immigration officer denying the applicant a work permit. The decision, it appears, was made in part with the assistance of AI software, known as "Chinook", employed by the federal government. In assessing both the reasonableness and fairness of the officer's decision, the Court upheld the use of Chinook in this context:

... the Applicant submits that the Decision is based on artificial intelligence generated by Microsoft in the form of "Chinook" software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree that the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness ...

The Court further rejected the argument that the Decision was unfair because it is unknown how machine learning "has replaced human input and how it affects [immigration] application outcomes".

To date, the message appears to be that Courts will tolerate AI as an aid to administrative decision-making, so long as the ultimate decision is made by a human.

One can envision new circumstances, however, where the problematic aspects of machine learning, including biases and misinformation, could taint a tribunal's reasoning. The issues raised by Haghshenas are ripe for further judicial consideration.

B. The Potential to Impair the Proper Administration of Justice

Perhaps the most sensational AI mishap to date is a litigator's reliance on ChatGPT addressed in the decision of the United States District Court for the Southern District of New York in Mata v. Avianca.

In Mata, the Court imposed a joint penalty of US$5,000 on the lawyers and their firm for making written submissions authored by ChatGPT, which included "non-existent judicial opinions with fake quotes and citations created by artificial intelligence". The lawyers exacerbated the situation by standing "by the fake opinions after judicial orders called their existence into question".

Most notably, the Court identified a number of harms to the administration of justice arising from the submission of, and reliance on, what amounted to fake legal decisions:

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court's time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

While the Court was careful in Mata to address the lawyers' primary gatekeeping function when relying on machine learning for assistance, the perils of a lawyer's dependence on a language-processing tool like ChatGPT are so numerous that they may very well exceed a litigator's ability to control them.

Busy litigators who rely on machine learning as an efficient "shortcut" do so at the risk of undermining their professional reputations and the proper administration of justice. The mere existence of artificially produced "imposter" decisions threatens the integrity of judicial reasoning itself.

C. Manitoba and Yukon Courts' Position on AI in Litigation

Apart from expressing general concern about AI in their jurisprudence, Canadian Courts are only beginning to establish guidelines and parameters for the use of machine learning in civil proceedings.

In Manitoba, the Court of King's Bench has issued a Practice Direction, dated June 23, 2023, which addresses the questionable accuracy and reliability of artificial intelligence. The Direction mandates that where AI has been employed "in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used".

Three days later, on June 26, 2023, the Supreme Court of Yukon echoed similar sentiments in its own AI Practice Direction. The Court requires that any counsel or party that relies on AI, including ChatGPT or any other AI platform, for legal research or submissions "in any matter and in any form before the Court" must advise the Court "of the tool used and for what purpose".

While Practice Directions are by no means a comprehensive answer to the problems AI may cause, they are very much a start.

Requiring counsel to disclose that machine learning was used in the development of arguments, or perhaps even the construction of evidence, at the very least flags the inherent inaccuracies and biases that could bear on machine-based advocacy.

The Way Forward

While much of the ink spilled to date on AI in civil proceedings has been devoted to whether platforms such as ChatGPT will replace litigators, that concern seems myopic.

As machine learning becomes commonplace, its effect on the administration of justice presents a serious challenge to Courts and lawyers alike. As set out above, the use of AI in administrative and civil proceedings raises a number of concerns, including:

  • The accuracy and reliability of evidence, materials and submissions drafted, in part, with the assistance of a machine – one that relies heavily on the Internet as the source of its knowledge and has the capacity to experience "hallucinations" in its reasoning;
  • The potential for a lawyer or party to inadvertently mislead the Court or tribunal;
  • The biases inherent in AI and the effect this could have on decision-making and outcomes in the dispute resolution process; and
  • The cost that reliance on AI may impose on the training and development of a future generation of litigators.

None of these issues has a simple answer, and there is no panacea that addresses them all.

The need for guidance and instruction by the Courts, with the potential of regulating the use of AI through the implementation of rules and directions, has never been more pressing.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.