As administrative burdens swell, the temptation for decision-makers such as judges and tribunals to use Artificial Intelligence ("AI") and large language models ("LLMs") to reach a decision will increase significantly.
Applicants who seek to challenge decisions made with non-human input, by way of an appeal or application for judicial review, face new hurdles in persuading a Court that a decision was procedurally unfair or substantively wrong because the adjudicator relied on AI.
Recent decisions of Canadian Courts illustrate how difficult these burdens may be for applicants to discharge.
Proving that a judge or tribunal improperly delegated their authority to a machine will involve a complex analysis of how the AI or LLM was used, what data inputs educated the AI tool in question, and how influential AI actually was to the ultimate finding.
While the use of AI is still in its infancy, Courts are already suggesting that the evidentiary onus on applicants will be high.
Two recent decisions of the Federal Court illustrate the point.
1. Speculative Concerns About the Use of AI Not Enough
In Espinosa Cotacachi v. Canada (Minister of Citizenship and Immigration), 2024 FC 2081, the applicant brought an application for judicial review of a decision of a visa officer (the "Officer") refusing her a Canadian open work permit.
The Officer held that the applicant would likely not leave Canada at the end of her stay, as required.
In reaching his decision, the Officer relied on the Government of Canada's Chinook software ("Chinook"), a Microsoft Excel-based tool used to assist officers processing immigration applications abroad. The Government of Canada's website states that Chinook does not use or constitute a form of AI.
The applicant challenged the Officer's decision on the basis that it violated procedural fairness.
She argued that the similarity between the Officer's reasons for refusing her application and those of her family members suggested that the Officer did not consider the applications individually and simply relied on Chinook and AI to arrive at a conclusion.
In the applicant's view, the decision was also made too quickly, suggesting that the Officer improperly delegated his decision-making authority to AI.
The applicant went one step further, also arguing that the Officer's use of Chinook was entirely opaque, and therefore inherently unfair.
The Court rejected the argument.
The use of Chinook, in and of itself, did not suggest "any breach of procedural fairness".
First, the decisions involving the applicant and her family were largely the same because they were based on the same supporting documents. In fact, "it would be more problematic if the reasons [for each family member] differed".
Second, the applicant's concerns about the use of Chinook were based entirely on speculation and not "clear evidence".
The applicant's conjecture about how Chinook operates was unsupported by any evidentiary record showing that Chinook replaced the Officer's role or caused the Officer to reach an unfair decision.
The Court relied on a number of previous decisions of the Federal Court holding that the use of Chinook, without more, is not presumptively problematic: see Jamali v. Canada, 2023 FC 1328; Ardestani v. Canada, 2023 FC 874; Haghshenas v. Canada, 2023 FC 464; Raja v. Canada, 2023 FC 719.
Accordingly, even if the Officer employed Chinook, the Officer's reasons for decision showed his clear application of the law to the facts of the case.
Espinosa Cotacachi illustrates that speculation and conjecture about how AI is employed in the decision-making process will not be enough to render a decision procedurally unfair or substantively unreasonable.
2. Is a New "Clear Evidence" Standard of Proof About the Use of AI Developing?
Another decision of the Federal Court involving Chinook, Pjetracaj v. Canada (Minister of Citizenship and Immigration), 2025 FC 103, suggests that the case law may be developing a new evidentiary standard for when a decision that relies on AI will be considered unfair.
Pjetracaj involved an application for judicial review of a decision of an immigration officer (the "Officer") who refused the applicant's temporary resident visa on the basis that the Officer was not satisfied that the applicant would depart Canada at the end of his authorized stay.
As in Espinosa Cotacachi, the applicant in Pjetracaj challenged the decision in part on the basis of the Officer's reliance on the Chinook tool.
He argued that the Officer inappropriately delegated his decision-making power to Chinook and that the mere use of Chinook displaced the presumption that the Officer considered the evidence in the applicant's case.
Once again, the Court rejected the applicant's position.
There was no evidence that the Officer used Chinook in this case.
In any event, the Court held that "clear evidence" was required to show that the use of Chinook gives rise to procedural unfairness or an unreasonable ruling:
... this Court has repeatedly determined that an officer's use of Chinook to process an application does not, in itself and without clear evidence, raise an issue of reasonableness or procedural fairness ...[emphasis added]
The "Clear Evidence" Onus
As Canadian case law regarding AI evolves, a few principles are becoming apparent:
- An adjudicator's use of AI or an LLM will not, in and of itself, give rise to procedural unfairness or render a decision erroneous. This view may become more prevalent as judges and administrative triers-of-fact seek to maximize scarce adjudicative resources by relying on AI; and
- In order for the use of AI to be called into question, "clear evidence" is required. This imposes a significant evidentiary burden on the applicant challenging the decision. One can easily imagine a scenario where an applicant would have to retain an IT expert to testify as to the data sets and underlying information used to develop an AI tool.
There is no question that, in the years to come, challenges to decisions made with the use of AI will become commonplace.
As the evidentiary burden on applicants comes into focus, the scope of appeals and applications for judicial review may very well become more complex.
Applicants may want to call AI experts to testify and conduct examinations of the decision-maker, to determine how they arrived at their findings – all with a view to ensuring that a human decision-maker actually made the ruling in question.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.