Displaying items by tag: procedural fairness
Friday, 22 November 2024 08:50
Crafting a More Nuanced Right to an Explanation in Administrative Law: A comment on Ali v. Minister of Public Safety and Emergency Preparedness

A recent decision of the Federal Court of Canada (Ali v. Minister of Public Safety and Emergency Preparedness) highlights the role of judicial review in addressing automated decision-making. It also prompts reflection on the limits of emerging codified rights to an explanation.

In July 2024, Justice Battista overturned a decision of the Refugee Protection Division (RPD) which had vacated the refugee status of the applicant, Mr. Ali. The RPD's decision was based largely on a photo comparison that led the RPD to conclude that Mr. Ali was not a Somali refugee as he had claimed. Rather, it concluded that he was a Kenyan student who had entered Canada on a student visa in 2016, a few months prior to Mr. Ali's refugee protection claim.

Throughout the proceedings the applicant had sought information about how photos of the Kenyan student had been found and matched with his own. He was concerned that facial recognition technology (FRT) – which has notorious deficiencies when used to identify persons of colour – had been used. In response, the Minister denied the use of FRT, maintaining instead that the photographs had been found and analyzed through a 'manual process'. A Canada Border Services Agency agent subsequently provided an affidavit to the effect that "a confidential manual investigative technique was used" (at para 15). The RPD was satisfied with this assurance. It considered that how the photographs had been gathered was irrelevant to its own capacity as a tribunal to decide based on the photographs before it. It concluded that Mr. Ali had misrepresented his identity.

On judicial review, Justice Battista found that the importance of the decision to Mr. Ali and the quasi-judicial nature of the proceedings meant that he was owed a high level of procedural fairness.
Because a decision of the RPD cannot be appealed, and because the consequences of revocation of refugee status are very serious (including loss of permanent resident status and possible removal from the country), Justice Battista found that "it is difficult to find a process under [the Immigration and Refugee Protection Act] with a greater imbalance between severe consequences and limited recourse" (at para 23). He found that the RPD had breached Mr. Ali's right to procedural fairness "when it denied his request for further information about the source and methodology used by the Minister in obtaining and comparing the photographs" (at para 28). Justice Battista ruled that, given the potential consequences for the applicant, disclosure of the methods used to gather the evidence against him "had to be meaningful" (at para 33). He concluded that it was unfair for the RPD "to consider the photographic evidence probative enough for revoking the Applicant's statuses and at the same time allow that evidence to be shielded from examination for reliability" (at para 37).

In addition to finding a breach of procedural fairness, Justice Battista found that the RPD's decision was unreasonable. He noted that there had been sufficiently credible evidence before the original RPD refugee determination panel to find that Mr. Ali was a Somali national entitled to refugee protection. None of this evidence had been assessed in the decision of the panel that vacated Mr. Ali's refugee status. Justice Battista noted that "[t]he credibility of this evidence cannot co-exist with the validity of the RPD vacation panel's decision" (at para 40). He also noted that the applicant had provided an affidavit describing differences between his photo and that of the Kenyan student; this evidence had not been considered in the RPD's decision, contributing to its unreasonableness.
The RPD also dismissed evidence from a Kenyan official that, based on biometric records analysis, there was no evidence that Mr. Ali was Kenyan. Justice Battista noted that this dismissal of the applicant's evidence was in "stark contrast to its treatment of the Minister's photographic evidence" (at para 44).

The Ali decision and the right to an explanation

Ali is interesting to consider in the context of the emerging right to an explanation of automated decision-making. Such a right is codified for the private sector context in the moribund Bill C-27, and Quebec has enacted a right to an explanation for both public and private sector contexts. Such rights would apply in cases where an automated decision system (ADS) has been used (and in the case of Quebec, the decision must be based "exclusively on an automated processing" of personal information). Yet in Ali there is no proof that the decision was made or assisted by an AI technology – in part because the Minister refused to explain their 'confidential' process. Further, the ultimate decision was made by humans. It is unclear how a codified right to an explanation would apply if the threshold for the exercise of the right is based on the obvious and/or exclusive use of an ADS.

It is also interesting to consider the outcome here in light of the federal Directive on Automated Decision-Making (DADM). The DADM, which largely addresses the requirements for design and development of ADS in the federal public sector, incorporates principles of fairness. It applies to "any system, tool, or statistical model used to make an administrative decision or a related assessment about a client". It defines an "automated decision system" as "[a]ny technology that either assists or replaces the judgment of human decision-makers […]." In theory, this would include the use of automated systems such as FRT that assist in human decision-making.
Where an ADS is developed and used, the DADM imposes transparency obligations, which include an explanation in plain language of:
The catch, of course, is that it might be impossible for an affected person to know whether a decision has been made with the assistance of an AI technology, as was the case here. Further, the DADM is not effective at capturing informal or 'off-the-books' uses of AI tools.

The decision in Ali therefore does two important things in the administrative law context. First, it confirms that – in the case of a high-impact decision – the individual has a right, as a matter of procedural fairness, to an explanation of how the decision was reached. Judicial review thus provides recourse for affected individuals – something that the more prophylactic DADM does not. Second, this right includes an obligation to provide details that could either explain or rule out the use of an automated system in the decisional process. In other words, procedural fairness includes a right to know whether and how AI technologies were used in reaching the contested decision. Mere assertions that no algorithms were used in gathering evidence or in making the decision are insufficient – if an automated system might have played a role, the affected individual is entitled to know the details of the process by which the evidence was gathered and the decision reached.

Ultimately, what Justice Battista crafts in Ali is not simply a right to an explanation of automated decision-making; rather, it is a right to an explanation of administrative decision-making processes that accounts for an AI era. In a context in which powerful computing tools are available for both general and personal use, and are not limited to purpose-specific, carefully governed and auditable in-house systems, the ability to demand an explanation of the decisional process in order to rule out the non-transparent use of AI systems seems increasingly important.

Note: The Directive on Automated Decision-Making is currently undergoing its fourth review. You may participate in the consultations here.
Published in
Privacy
Monday, 17 October 2022 04:11
Procedural fairness and AI - Some lessons from a recent decision
Artificial intelligence (AI) is already being used to assist government decision-making, although we have little case law that explores issues of procedural fairness when it comes to automated decision systems. This is why a recent decision of the Federal Court is interesting. In Barre v. Canada (Citizenship and Immigration) two women sought judicial review of a decision of the Refugee Protection Division (RPD) which had stripped them of their refugee status. They raised procedural fairness issues regarding the possible reliance upon an AI tool – in this case facial recognition technology (FRT). The case allows us to consider some procedural fairness guideposts that may be useful where evidence derived from AI-enabled tools is advanced.

The Decision of the Refugee Protection Division

The applicants, Ms Barre and Ms Hosh, had been granted refugee status after advancing claims related to their fear of sectarian and gender-based violence in their native Somalia. The Minister of Public Safety and Emergency Preparedness (the Minister) later applied under s. 109 of the Immigration and Refugee Protection Act to have that decision vacated on the basis that it was "obtained as a result of directly or indirectly misrepresenting or withholding material facts relating to a relevant matter". The Minister had provided the RPD with photos that compared Ms Barre and Ms Hosh (the applicants) with two Kenyan women who had been admitted to Canada on student visas shortly before Ms Barre and Ms Hosh filed their refugee claims (the claims were accepted in 2017). The applicants argued that the photo comparisons relied upon by the Minister had been made using Clearview AI's facial recognition service, which is built upon scraped images from social media and other public websites. The Minister objected to arguments and evidence about Clearview AI, maintaining that there was no proof that this service had been used.
Clearview AI had ceased providing services in Canada on 6 July 2020, and the RPD accepted the Minister's argument that it had not been used, finding that "[a]n App that is banned to operate in Canada would certainly not be used by a law enforcement agency such as the CBSA" (at para 7). The Minister had also argued that it did not have to disclose how it arrived at the photo comparisons because of s. 22 of the Privacy Act, and the RPD accepted this assertion.

The photo comparisons were given significant weight in the RPD's decision to overturn the applicants' refugee status. The RPD found that there were "great similarities" between the photos of the Kenyan students and the applicants, and concluded that they were the same persons. The RPD also considered notes in the Global Case Management System to the effect that the Kenyan students did not attend classes at the school where they were enrolled. In addition, the CBSA submitted affidavits indicating that there was no evidence that the applicants had entered Canada under their own names. The RPD concluded that the applicants were Kenyan citizens who had misrepresented their identity in the refugee proceedings. It found that these factual misrepresentations called into question the credibility of their allegations of persecution. It also found that, since they were Kenyan, they had not advanced claims against their country of nationality in the refugee proceedings, as required by law. The applicants sought judicial review of the decision to revoke their refugee status, arguing that it was unreasonable and breached their rights to procedural fairness.

Judicial Review

Justice Go of the Federal Court ruled that the decision was unreasonable for a number of reasons. A first error was allowing the introduction of the photo comparisons into evidence "without requiring the Minister to disclose the methodology used in procuring the evidence" (at para 31).

The Minister had invoked s. 22 of the Privacy Act, but Justice Go noted that there were many flaws with the Minister's reliance on that provision. Section 22 is an exception to an individual's right of access to their personal information. Justice Go noted that the applicants were not seeking access to their personal information; rather, they were making a procedural fairness argument about the photo comparisons relied upon by the Minister and sought information about how the comparisons had been made. Section 22(2), which was specifically relied upon by the Minister, allows a request for disclosure of personal information to be refused on the basis that it was "obtained or prepared by the Royal Canadian Mounted Police while performing policing services for a province or municipality…", a circumstance that simply was not relevant. Section 22(1)(b), which was not specifically argued by the Minister, allows for a refusal to disclose personal information where to do so "could reasonably be expected to be injurious to the enforcement of any law of Canada or a province or the conduct of lawful investigations…" Justice Go noted that the case law establishes that a court will not support such a refusal on the basis that, because there is an investigation, harm from disclosure can be presumed. Instead, the head of an institution must demonstrate a "nexus between the requested disclosure and a reasonable expectation of probable harm" (at para 35, citing Canadian Association of Elizabeth Fry Societies v. Canada). Exceptions to access rights must be given a narrow interpretation, and the burden of demonstrating that a refusal to disclose is justifiable lies with the head of the government institution. Justice Go also noted that the Privacy Act does not operate "so as to limit access to information to which an individual might be entitled as a result of other legal rules or principles" (at para 42) such as, in this case, the principles of procedural fairness.
Justice Go found that the RPD erred by not clarifying what 'personal information' the Minister sought to protect, and by not assessing the basis for the Minister's s. 22 arguments. She also noted that the RPD had accepted the Minister's bald assertions that the CBSA did not rely on Clearview AI. Even if the company had ceased offering its services in Canada by July 6, 2020, there was no evidence regarding the date on which the photo comparisons had been made. Justice Go noted that the RPD failed to consider submissions by the applicants regarding findings by the privacy commissioners of Canada, BC, Alberta and Quebec regarding Clearview AI and its activities, as well as on the "danger of relying on facial recognition software" (at para 46).

The Minister argued that even if its s. 22 arguments were misguided, it could still rely upon evidentiary privileges to protect the details of its investigation. Justice Go noted that this was irrelevant in assessing the reasonableness of the RPD's decision, since such arguments had not been made before or considered by the RPD. She also observed that when parties seek to exempt information from disclosure in a hearing, they are often required at least to provide it to the decision-maker to assess. In this case the RPD did not ask for or assess information on how the investigation had been conducted before deciding that information about it should not be disclosed. She noted that: "The RPD's swift acceptance of the Minister's exemption request, in the absence of a cogent explanation for why the information is protected from disclosure, appears to be a departure from its general practice" (at para 55). Justice Go also observed that information about how the photo comparisons were made could well have been relevant to the issues to be determined by the RPD.
If the comparisons were generated through the use of FRT – whether Clearview AI or the services of another company – "it may call into question the reliability of the Kenyan students' photos as representing the Applicants, two women of colour who are more likely to be misidentified by facial recognition software than their white cohorts as noted by the studies submitted by the Applicants" (at para 56). No matter how the comparisons were made – whether by a person or by FRT – some evidence should have been provided to explain the technique. Justice Go found it unreasonable for the RPD to conclude that the evidence was reliable simply on the basis of the Minister's assertions.

Justice Go also found that the RPD's conclusion that the applicants were, in fact, the two Kenyan women was unreasonable. Among other things, she found that the decision "failed to provide adequate reasons for the RPD's conclusion that the two Applicants and the two Kenyan students were the same persons based on the photo comparisons" (at para 69). She noted that although the RPD referenced 'great similarities' between the women in the two sets of photographs, there were also some marked dissimilarities which were not addressed. There simply was no adequate explanation as to how the conclusion was reached that the applicants were the Kenyan students. The decision of the RPD was quashed and remitted to be reconsidered by a differently constituted panel of the RPD.

Ultimately, Justice Go sends a clear message that the Minister cannot simply advance photo comparison evidence without providing an explanation for how that evidence was derived. At the very least, then, there is an obligation to indicate whether an AI technology was used in the decision-making process. Even if there is some legal basis for shielding the details of the Minister's methods of investigation, there may still need to be some disclosure to the decision-maker regarding the methods used.
Justice Go's decision is also a rebuke of the RPD, which accepted the Minister's evidence on faith and asked no questions about its methodology or probative value. In her decision, Justice Go takes serious note of concerns about accuracy and bias in the use of FRT, particularly with racialized individuals, and it is clear that these concerns heighten the need for transparency. The decision is important for setting some basic standards to meet when it comes to reviewing evidence that may have been derived using AI. It is also a sobering reminder that those checks and balances failed at first instance – and in a high-stakes context.
Published in
Privacy