Over the next decade, tribunal decision-makers will lean on generative A.I. and large language models ("LLMs") to ease their administrative burdens.
LLM-based tools such as ChatGPT have the potential to alleviate overwhelming caseloads and to focus adjudicators on the substantive merits of the decisions before them.
But reliance on A.I. poses significant challenges – just who is making the decision when LLMs or A.I. are involved? Do persons affected by decisions made by A.I. have an expectation that there is a "human in the loop"? And how do adjudicators and those affected by their decisions deal with the inherent risks of LLMs, including so-called "hallucinations"1 and bias?
The current model of administrative law is ill-equipped to deal with these concerns.2
Based on a 20th-century understanding of how decision-makers arrive at their conclusions, the substantive and procedural scrutiny undertaken by Canadian judges in the judicial review process makes a basic, but now flawed assumption: that only human beings are involved in the adjudicative process.
As lawyers adapt to the increasing use of LLMs or A.I. by adjudicators, they are (or will be) developing new ways to challenge both the substantive merits and procedural fairness aspects of tribunal decisions.
Below is a list of potential ways parties can challenge decisions made with the assistance of, or entirely by, generative A.I.
1. Demands for Production
LLMs and A.I. depend largely on data sets scraped or culled from the Internet. Thus, it is not always apparent what underlying material A.I. relied on to reach a particular conclusion.
A tribunal or decision-maker usually has a duty to produce the underlying record that led to the decision in question.3
That obligation may now entail producing the information and inputs provided to the A.I. software that assisted with the decision being challenged.
So far, this argument has been unsuccessful in the courts, but it is only a matter of time before an enterprising lawyer obtains production of the data analyzed by an A.I. algorithm.
The case of Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464, illustrates the point.
Haghshenas involved the judicial review of a decision by an immigration officer at the Canadian Embassy in Turkey, who refused the applicant's application for a work permit on the basis that he was not satisfied the applicant would leave Canada at the end of his stay.
The immigration officer's decision was made in part using the federal government's "Chinook" software, which the applicant argued was a form of A.I. developed by Microsoft.
The applicant asked the Court, in broad terms, to find that, as a matter of procedural fairness, he was entitled to understand how Chinook was employed in his decision.
The Court rejected this broad request for production:
Regarding the use of the "Chinook" software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of [A.I.] is irrelevant given that (a) an [immigration] Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision...[emphasis added]
2. Lack of Transparency and Bias
In its leading 1999 decision, Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 S.C.R. 817, the Supreme Court of Canada established that administrative decisions must be free from a reasonable apprehension of bias.
In Baker, an immigration officer's notes led the Court to conclude that his decision to order the applicant deported from Canada for overstaying a visitor's visa gave rise to a reasonable apprehension of bias. The officer's notes stated, among other things, that the case was a "catastrophe" and that Canada could "no longer afford this type of generosity".
The officer's bias in Baker was apparent on its face – the notes disclosed his line of reasoning, which was fraught with racial and mental-health animus.
By contrast, algorithmic reasoning, as many A.I. scholars have noted,4 can be opaque.
More disturbingly, A.I. relies largely on data culled from the Internet, which has been shown to reflect and amplify bias against racial and other historically disadvantaged groups.
Decisions made with the assistance of A.I. or LLMs could very well be compromised by bias – the challenge for Canadian administrative law now is how to detect a reasonable apprehension of bias on judicial review.
3. A Legitimate Expectation in Human Decision-Making?
The doctrine of legitimate expectations has received short shrift in Canadian administrative law.
As a matter of procedural fairness, an applicant may have a legitimate expectation that a certain procedure will be followed by a decision-maker.
Moreover, where a claimant has a legitimate expectation that a certain result will be reached, fairness may require that they be afforded a higher degree of procedural protection.
Overall, the doctrine protects procedural, not substantive rights.5
Where a decision-maker employs A.I., the affected party may very well have a "legitimate expectation" that the decision has been subject to human oversight and analysis. In other words, "a human in the loop" could now constitute an aspect of a fair hearing.
This argument has yet to be tested in the case law, but legitimate expectations could evolve as a more important principle where judges scrutinize administrative decisions made with the help of A.I.
4. The Right to Be Heard
Every party affected by a decision should be given a "fair opportunity" to answer the allegations made against them.6
Known as the audi alteram partem rule, the principle protects a claimant's right to make submissions so that State decisions are not made in a vacuum.
A.I. poses a potential threat to the right to be heard – does the right to be heard encompass the right to a hearing before human peers? To what extent can machine learning inform an adjudicator's hearing of a case? When does the use of A.I. by a decision-maker become unfair because it inhibits human adjudication? Is it unfair, for example, for applicants to make their case to an A.I. bot, at least at first instance?
The Way Forward
There is no question that machine learning will transform our 20th-century model of administrative law.
Of necessity, judicial scrutiny of tribunal and other statutorily prescribed decisions will evolve.
Judges and lawyers will have to become familiar with how machine learning influences outputs and the extent to which A.I. has played a role in any particular decision.
The doctrines of administrative law offer the toolkit to analyze whether a State decision is substantively reasonable and procedurally fair: A.I. will transform how those tools are employed.
Footnotes
1 See Ko v. Li, 2025 ONSC 2766 (Sup. Ct.); 2025 ONSC 2965.
2 For a further exploration of this thesis, see generally Marco P. Falco, "How A.I. Will Revolutionize Our 20th Century Understanding of Administrative Law" (2025) 55 Adv. Q. 318.
3 See, for example, Judicial Review Procedure Act, R.S.O. 1990, c. J.1, s. 10.
4 See Lorne Sossin, Robert W. Macaulay & James Sprague, "A.I. and the Duty of Fairness", Practice and Procedure Before Administrative Tribunals (Toronto: Thomson Reuters Canada, 2024) at 27.10.
5 Baker, supra at para. 26.
6 See United Food and Commercial Workers, Local 175 v. La Rocca Creative Cakes, 2024 ONSC 2243 at para. 64.