The proliferation of generative AI will have a profound effect on how tribunals and adjudicators make decisions in the decades to come. Large language models ("LLMs") like ChatGPT can promote efficiency and help alleviate the burden on overstrained State resources.

With reward, however, comes peril. Because LLMs are trained on data obtained from the Internet, they can reflect biases and stereotypes about historically disadvantaged groups, which can taint how an administrative ruling is made.

As tribunals and decision-makers navigate the uncharted waters of LLMs, the leading 1999 decision of the Supreme Court of Canada on administrative bias, Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 S.C.R. 817, sheds light on the rules needed to curb the biased decision-making that AI could amplify.

The Pre-AI Days: How Bias Violates Procedural Fairness

Baker involved an application by Ms. Baker, a citizen of Jamaica, who had lived and worked in Canada as a domestic worker for 11 years and had four children who were all Canadian citizens. Ms. Baker suffered from post-partum mental illness, though psychiatric evidence indicated that her condition was improving.

Upon being ordered deported from Canada in December 1992, Ms. Baker applied from within Canada for permanent residency on "humanitarian and compassionate grounds" under the then-applicable Immigration Act, R.S.C. 1985, c. I-2 (the "Act").

The immigration officer with carriage of Ms. Baker's file ruled that there were insufficient grounds to warrant processing the application for permanent residency within Canada (the "Decision"). While Ms. Baker was not originally provided with reasons for the Decision, they were ultimately disclosed in the Court application.

The reasons of the officer revealed significant reliance on racial and other stereotypes in the decision-making process:

PC is unemployed – on Welfare. No income shown – no assets ... HAS A TOTAL OF EIGHT CHILDREN.

...

... Letter of Aug. '93 from psychiatrist from Ont. Govm't says PC has post-partum psychosis in Jam. when was 25 yrs. old. Is now an out-patient and is doing relatively well – deportation would be an extremely stressful experience.

...

This case is a catastrophy [sic]. It is also an indictment of our "system" that the client came as a visitor in Aug. '81, was not ordered deported until Dec. '92 and in APRIL '94 IS STILL HERE!

... She has FOUR CHILDREN IN JAMAICA AND ANOTHER FOUR BORN HERE. She will, of course, be a tremendous strain on our social welfare systems for (probably) the rest of her life ... Do we let her stay because of that? I am of the opinion that Canada can no longer afford this type of generosity ...

The passage above is, no doubt, disturbing, and the level of bias is readily apparent.

The Court applied the following test, from Committee for Justice and Liberty v. National Energy Board, [1978] 1 S.C.R. 369, in holding that the officer's notes gave rise to a reasonable apprehension of bias and therefore constituted a breach of procedural fairness to Ms. Baker:

... The test is "what would an informed person, viewing the matter realistically and practically – and having thought the matter through – conclude. Would he [sic] think that it is more likely than not that [the decision-maker], whether consciously or unconsciously, would not decide fairly."

The majority of the Court noted that decisions affecting the immigration status of applicants require "special sensitivity", particularly because Canada is largely a nation "of people whose families migrated here in recent centuries". Immigration decisions "require a recognition of diversity, an understanding of others and an openness to difference".

In the Court's view, the immigration officer in this case displayed a closed mind, fraught with stereotypes, when adjudicating Ms. Baker's application.

The Decision made an improper link between Ms. Baker's mental illness (despite evidence of improvement), her role as a domestic worker, and the fact that she had several children. The officer's own "frustration with the system" interfered with the officer's duty to consider Ms. Baker's application objectively and free from bias.

The Implications of Baker for AI Decision-Making

The lessons of Baker are plain: when the State delegates decision-making authority to non-judicial decision-makers, those rulings must be free from a reasonable apprehension of bias, stereotype and discrimination. In Baker, the bias was obvious in the officer's reasons.

The problem is that generative AI and LLMs are capable of echoing or amplifying intolerance and discrimination, given their reliance on Internet data as their source of knowledge. And that intolerance and discrimination may be clouded from view.

As the Canadian government's Guide on the Use of Generative AI makes clear, "[g]enerative AI tools can produce content that is discriminatory or not representative, or that includes biases or stereotypes (for example, biases relating to multiple and intersecting identity factors such as gender, race, and ethnicity). Many generative models are trained on large amounts of data from the Internet, which is often the source of these biases".

Accordingly, where AI is used either to facilitate or to reach public law decisions affecting the rights and privileges of Canadians, there is a real risk that those decisions will be tainted by bias or discrimination.

Further, the underlying biases that generative AI can propagate may not be obvious.

Whereas the immigration officer's notes in Baker expressly disclosed the racial and mental health stereotypes that animated the officer's decision, LLMs to date lack such transparency.

How AI reaches its conclusions is often cryptic, if not entirely unknown. Any judicial oversight of an administrative decision rendered exclusively by AI, in the vein of Baker, risks being futile: the evidence needed to establish a reasonable apprehension of AI bias may simply not exist.

The Way Forward

Baker provides an important framework for assessing, and guarding against, the bias inherent in the use of generative AI in administrative law:

  1. To avoid the sorts of Internet-derived bias and stereotypes inherent in AI, tribunals must ensure that all decisions made under the governing statute are subject to human oversight. At a minimum, this approach will catch the obvious cases in which a ruling is fraught with discriminatory or stereotypical bias;
  2. Tribunals and administrative decision-makers must work collaboratively with their IT departments, and obtain legal advice, to understand the data that trains generative AI models. To the extent that these tools can be tested for bias, they should be (an illustrative sketch of such a test follows this list);
  3. The procedural rules that govern tribunals and adjudicators must incorporate rules on the use of AI in the decision-making process. An absence of regulation can lead to unintended consequences;
  4. Adjudicators and tribunal members should be trained in diversity and inclusion matters, ensuring they have the capacity to identify gender, racial and other forms of institutional bias in their rulings; and
  5. Given strained Court resources, decision-makers cannot simply rely on judicial oversight as a way to curb the excesses of AI discrimination.
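
By way of illustration only, the sketch below shows one way a tribunal's technical staff might test a generative model for bias, as contemplated in point 2 above. It sends the model pairs of prompts that differ only in a single protected characteristic and compares the outcomes. The paired_prompt_test function, the query_model callable and the stub model are hypothetical placeholders rather than any vendor's actual API; a real assessment would use the tribunal's own tool, far larger samples, and expert statistical and legal review.

```python
# A minimal sketch of a counterfactual ("paired prompt") bias test.
# Assumption: query_model is a stand-in for whatever generative AI tool
# a tribunal actually uses; a real test would call that tool instead.
from collections import Counter
from typing import Callable


def paired_prompt_test(
    query_model: Callable[[str], str],
    template: str,
    attribute_pairs: list[tuple[str, str]],
) -> dict[str, Counter]:
    """Send prompts that differ only in one demographic attribute and
    tally the model's recommendation for each variant."""
    results: dict[str, Counter] = {}
    for value_a, value_b in attribute_pairs:
        for value in (value_a, value_b):
            prompt = template.format(attribute=value)
            outcome = query_model(prompt).strip().lower()
            results.setdefault(value, Counter())[outcome] += 1
    return results


if __name__ == "__main__":
    # Stub model used purely so the sketch runs end to end.
    def stub_model(prompt: str) -> str:
        return "grant" if "Country A" in prompt else "deny"

    template = (
        "An applicant from {attribute} with four Canadian-born children "
        "seeks permanent residency on humanitarian and compassionate "
        "grounds. Answer 'grant' or 'deny'."
    )
    tallies = paired_prompt_test(
        stub_model, template, [("Country A", "Country B")]
    )
    for variant, counts in tallies.items():
        print(variant, dict(counts))
```

Materially different outcomes across prompts that differ only in a protected characteristic would be the digital analogue of the officer's notes in Baker: evidence from which a reasonable apprehension of bias could be drawn.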

Overall, the lessons of Baker have profound resonance today.

If generative AI is to prove more help than hindrance in public law adjudication, then decision-makers must be aware of its inherent limitations, including the real possibility of biased rulings.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.