
Friday, 22 November 2024 08:50

Crafting a More Nuanced Right to an Explanation in Administrative Law: A comment on Ali v. Minister of Public Safety and Emergency Preparedness

Written by Teresa Scassa

A recent decision of the Federal Court of Canada (Ali v. Minister of Public Safety and Emergency Preparedness) highlights the role of judicial review in addressing automated decision-making. It also prompts reflection on the limits of emerging codified rights to an explanation.

In July 2024, Justice Battista overturned a decision of the Refugee Protection Division (RPD) which had vacated the refugee status of the applicant, Mr. Ali. The decision of the RPD was based largely on a photo comparison that led it to conclude that Mr. Ali was not a Somali refugee as he had claimed. Rather, it concluded that he was a Kenyan student who had entered Canada on a student visa in 2016, a few months prior to Mr. Ali’s refugee protection claim.

Throughout the proceedings the applicant had sought information about how photos of the Kenyan student had been found and matched with his own. He was concerned that facial recognition technology (FRT) – which has had notorious deficiencies when used to identify persons of colour – had been used. In response, the Minister denied the use of FRT, maintaining instead that the photographs had been found and analyzed through a ‘manual process’. A Canada Border Services Agency officer subsequently provided an affidavit to the effect that “a confidential manual investigative technique was used” (at para 15). The RPD was satisfied with this assurance. It considered that how the photographs had been gathered was irrelevant to its own capacity as a tribunal to decide based on the photographs before it. It concluded that Mr. Ali had misrepresented his identity.

On judicial review, Justice Battista found that the importance of the decision to Mr. Ali and the quasi-judicial nature of the proceedings meant that he was owed a high level of procedural fairness. Because a decision of the RPD cannot be appealed, and because the consequences of revocation of refugee status are very serious (including loss of permanent resident status and possible removal from the country), Justice Battista found that “it is difficult to find a process under [the Immigration and Refugee Protection Act] with a greater imbalance between severe consequences and limited recourse” (at para 23). He found that the RPD had breached Mr. Ali’s right to procedural fairness “when it denied his request for further information about the source and methodology used by the Minister in obtaining and comparing the photographs” (at para 28).

Justice Battista ruled that, given the potential consequences for the applicant, disclosure of the methods used to gather the evidence against him “had to be meaningful” (at para 33). He concluded that it was unfair for the RPD “to consider the photographic evidence probative enough for revoking the Applicant’s statuses and at the same time allow that evidence to be shielded from examination for reliability” (at para 37).

In addition to finding a breach of procedural fairness, Justice Battista also found that the RPD’s decision was unreasonable. He noted that there had been sufficiently credible evidence before the original RPD refugee determination panel to find that Mr. Ali was a Somali national entitled to refugee protection. None of this evidence had been assessed in the decision of the panel that vacated Mr. Ali’s refugee status. Justice Battista noted that “[t]he credibility of this evidence cannot co-exist with the validity of the RPD vacation panel’s decision” (at para 40). He also noted that the applicant had provided an affidavit describing differences between his photo and that of the Kenyan student; this evidence had not been considered in the RPD’s decision, contributing to its unreasonableness. The RPD also dismissed evidence from a Kenyan official that, based on biometric records analysis, there was no evidence that Mr. Ali was Kenyan. Justice Battista noted that this dismissal of the applicant’s evidence was in “stark contrast to its treatment of the Minister’s photographic evidence” (at para 44).

The Ali decision and the right to an explanation

Ali is interesting to consider in the context of the emerging right to an explanation of automated decision-making. Such a right is codified for the private sector context in the moribund Bill C-27, and Quebec has enacted a right to an explanation for both public and private sector contexts. Such rights would apply in cases where an automated decision system (ADS) has been used (and, in the case of Quebec, the decision must be based “exclusively on an automated processing” of personal information). Yet in Ali there is no proof that the decision was made or assisted by an AI technology – in part because the Minister refused to explain their ‘confidential’ process. Further, the ultimate decision was made by humans. It is unclear how a codified right to an explanation would apply if the threshold for the exercise of the right is based on the obvious and/or exclusive use of an ADS.

It is also interesting to consider the outcome here in light of the federal Directive on Automated Decision-Making (DADM). The DADM, which largely addresses the requirements for design and development of ADS in the federal public sector, incorporates principles of fairness. It applies to “any system, tool, or statistical model used to make an administrative decision or a related assessment about a client”. It defines an “automated decision system” as “[a]ny technology that either assists or replaces the judgment of human decision-makers […].” In theory, this would include the use of automated systems such as FRT that assist in human decision-making. Where an ADS is developed and used, the DADM imposes transparency obligations, which include an explanation in plain language of:

  • the role of the system in the decision-making process;
  • the training and client data, their source, and method of collection, as applicable;
  • the criteria used to evaluate client data and the operations applied to process it;
  • the output produced by the system and any relevant information needed to interpret it in the context of the administrative decision; and
  • a justification of the administrative decision, including the principal factors that led to it. (Appendix C)

The catch, of course, is that it might be impossible for an affected person to know whether a decision has been made with the assistance of an AI technology, as was the case here. Further, the DADM is not effective at capturing informal or ‘off-the-books’ uses of AI tools. The decision in Ali therefore does two important things in the administrative law context. First, it confirms that – in the case of a high-impact decision – the individual has a right, as a matter of procedural fairness, to an explanation of how the decision was reached. Judicial review thus provides recourse for affected individuals – something that the more prophylactic DADM does not. Second, this right includes an obligation to provide details that could either explain or rule out the use of an automated system in the decisional process. In other words, procedural fairness includes a right to know whether and how AI technologies were used in reaching the contested decision. Mere assertions that no algorithms were used in gathering evidence or in making the decision are insufficient – if an automated system might have played a role, the affected individual is entitled to know the details of the process by which the evidence was gathered and the decision reached.

Ultimately, what Justice Battista crafts in Ali is not simply a right to an explanation of automated decision-making; rather, it is a right to an explanation of administrative decision-making processes that accounts for an AI era. In a context in which powerful computing tools are available for both general and personal use, and are not limited to purpose-specific, carefully governed and auditable in-house systems, the ability to demand an explanation of the decisional process in order to rule out the non-transparent use of AI systems seems increasingly important.

Note: The Directive on Automated Decision-Making is currently undergoing its fourth review. You may participate in consultations here.

