Teresa Scassa - Blog


Artificial intelligence (AI) is already being used to assist government decision-making, although we have little case law that explores issues of procedural fairness when it comes to automated decision systems. This is why a recent decision of the Federal Court is interesting. In Barre v. Canada (Citizenship and Immigration) two women sought judicial review of a decision of the Refugee Protection Division (RPD) which had stripped them of their refugee status. They raised procedural fairness issues regarding the possible reliance upon an AI tool – in this case facial recognition technology (FRT). The case allows us to consider some procedural fairness guideposts that may be useful where evidence derived from AI-enabled tools is advanced.

The Decision of the Refugee Protection Division

The applicants, Ms Barre and Ms Hosh, had been granted refugee status after advancing claims related to their fear of sectarian and gender-based violence in their native Somalia. The Minister of Public Safety and Emergency Preparedness (the Minister) later applied under s. 109 of the Immigration and Refugee Protection Act to have that decision vacated on the basis that it was “obtained as a result of directly or indirectly misrepresenting or withholding material facts relating to a relevant matter”.

The Minister had provided the RPD with photos that compared Ms Barre and Ms Hosh (the applicants) with two Kenyan women who had been admitted to Canada on student visas shortly before Ms Barre and Ms Hosh filed their refugee claims (the claims were accepted in 2017). The applicants argued that the photo comparisons relied upon by the Minister had been made using Clearview AI’s facial recognition service, which is built upon images scraped from social media and other public websites. The Minister objected to arguments and evidence about Clearview AI, maintaining that there was no proof that this service had been used. Clearview AI had ceased providing services in Canada on 6 July 2020, and the RPD accepted the Minister’s argument that it had not been used, finding that “[a]n App that is banned to operate in Canada would certainly not be used by a law enforcement agency such as the CBSA” (at para 7). The Minister had also argued that it did not have to disclose how it arrived at the photo comparisons because of s. 22 of the Privacy Act, and the RPD accepted this assertion.

The photo comparisons were given significant weight in the RPD’s decision to overturn the applicants’ refugee status. The RPD found that there were “great similarities” between the photos of the Kenyan students and the applicants, and concluded that they were the same persons. The RPD also considered notes in the Global Case Management System to the effect that the Kenyan students did not attend classes at the school where they were enrolled. In addition, the CBSA submitted affidavits indicating that there was no evidence that the applicants had entered Canada under their own names. The RPD concluded that the applicants were Kenyan citizens who had misrepresented their identity in the refugee proceedings. It found that these factual misrepresentations called into question the credibility of their allegations of persecution. It also found that, since they were Kenyan, they had not advanced claims against their country of nationality in the refugee proceedings, as required by law. The applicants sought judicial review of the decision to revoke their refugee status, arguing that it was unreasonable and breached their rights to procedural fairness.

Judicial Review

Justice Go of the Federal Court ruled that the decision was unreasonable for a number of reasons. A first error was allowing the introduction of the photo comparisons into evidence “without requiring the Minister to disclose the methodology used in procuring the evidence” (at para 31). The Minister had invoked s. 22 of the Privacy Act, but Justice Go noted that there were many flaws with the Minister’s reliance on s. 22. Section 22 is an exception to an individual’s right of access to their personal information. Justice Go noted that the applicants were not seeking access to their personal information; rather, they were making a procedural fairness argument about the photo comparisons relied upon by the Minister and sought information about how the comparisons had been made. Section 22(2), which was specifically relied upon by the Minister, allows a request for disclosure of personal information to be refused on the basis that it was “obtained or prepared by the Royal Canadian Mounted Police while performing policing services for a province or municipality…”, and this circumstance simply was not relevant.

Section 22(1)(b), which was not specifically argued by the Minister, allows for a refusal to disclose personal information where to do so “could reasonably be expected to be injurious to the enforcement of any law of Canada or a province or the conduct of lawful investigations…” Justice Go noted that case law establishes that a court will not support such a refusal on the basis that because there is an investigation, harm from disclosure can be presumed. Instead, the head of an institution must demonstrate a “nexus between the requested disclosure and a reasonable expectation of probable harm” (at para 35, citing Canadian Association of Elizabeth Fry Societies v. Canada). Exceptions to access rights must be given a narrow interpretation, and the burden of demonstrating that a refusal to disclose is justifiable lies with the head of the government institution. Justice Go also noted that “the Privacy Act does not operate ‘so as to limit access to information to which an individual might be entitled as a result of other legal rules or principles’” (at para 42), such as, in this case, the principles of procedural fairness.

Justice Go found that the RPD erred by not clarifying what ‘personal information’ the Minister sought to protect, and by not assessing the basis for the Minister’s s. 22 arguments. She also noted that the RPD had accepted the Minister’s bald assertions that the CBSA did not rely on Clearview AI. Even if the company had ceased offering its services in Canada by July 6, 2020, there was no evidence regarding the date on which the photo comparisons had been made. Justice Go noted that the RPD failed to consider submissions by the applicants regarding findings by the privacy commissioners of Canada, BC, Alberta and Quebec regarding Clearview AI and its activities, as well as on the “danger of relying on facial recognition software” (at para 46).

The Minister argued that even if its s. 22 arguments were misguided, it could still rely upon evidentiary privileges to protect the details of its investigation. Justice Go noted that this was irrelevant in assessing the reasonableness of the RPD’s decision, since such arguments had not been made before or considered by the RPD. She also observed that when parties seek to exempt information from disclosure in a hearing, they are often required at least to provide it to the decision-maker to assess. In this case the RPD did not ask for or assess information on how the investigation had been conducted before deciding that information about it should not be disclosed. She noted that: “The RPD’s swift acceptance of the Minister’s exemption request, in the absence of a cogent explanation for why the information is protected from disclosure, appears to be a departure from its general practice” (at para 55).

Justice Go also observed that information about how the photo comparisons were made could well have been relevant to the issues to be determined by the RPD. If the comparisons were generated through use of FRT – whether it was using Clearview AI or the services of another company – “it may call into question the reliability of the Kenyan students’ photos as representing the Applicants, two women of colour who are more likely to be misidentified by facial recognition software than their white cohorts as noted by the studies submitted by the Applicants” (at para 56). No matter how the comparisons were made – whether by a person or by FRT – some evidence should have been provided to explain the technique. Justice Go found it unreasonable for the RPD to conclude that the evidence was reliable simply based upon the Minister’s assertions.

Justice Go also found that the RPD’s conclusion that the applicants were, in fact, the two Kenyan women, was unreasonable. Among other things, she found that the decision “failed to provide adequate reasons for the RPD’s conclusion that the two Applicants and the two Kenyan students were the same persons based on the photo comparisons” (at para 69). She noted that although the RPD referenced ‘great similarities’ between the women in the two sets of photographs, there were also some marked dissimilarities which were not addressed. There simply was no adequate explanation as to how the conclusion was reached that the applicants were the Kenyan students.

The decision of the RPD was quashed and remitted to be reconsidered by a differently constituted panel of the RPD.

Ultimately, Justice Go sends a clear message that the Minister cannot simply advance photo comparison evidence without providing an explanation for how that evidence was derived. At the very least, then, there is an obligation to indicate whether an AI technology was used in the decision-making process. Even if there is some legal basis for shielding the details of the Minister’s methods of investigation, there may still need to be some disclosure to the decision-maker regarding the methods used. Justice Go’s decision is also a rebuke of the RPD, which accepted the Minister’s evidence on faith and asked no questions about its methodology or probity. In her decision, Justice Go takes serious note of concerns about accuracy and bias in the use of FRT, particularly with racialized individuals, and it is clear that these concerns heighten the need for transparency. The decision is important for setting some basic standards to meet when it comes to reviewing evidence that may have been derived using AI. It is also a sobering reminder that those checks and balances failed at first instance – and in a high stakes context.


Clearview AI and its controversial facial recognition technology have been making headlines for weeks now. In Canada, the company is under joint investigation by federal and provincial privacy commissioners. The RCMP is being investigated by the federal Privacy Commissioner after having admitted to using Clearview AI. The Ontario privacy commissioner has expressed serious concerns about reports of Ontario police services adopting the technology. In the meantime, the company is dealing with a reported data breach in which hackers accessed its entire client list.

Clearview AI offers facial recognition technology to ‘law enforcement agencies.’ The term is not defined on its site, and at least one newspaper report suggests that it is defined broadly, with private security (for example university campus police) able to obtain access. Clearview AI scrapes images from publicly accessible websites across the internet and compiles them in a massive database. When a client provides it with an image of a person, it uses facial recognition algorithms to match the individual in the image with images in its database. Images in the database are linked to their sources, which contain other identifying information (for example, they might link to a Facebook profile page). The use of the service is touted as speeding up all manner of investigations by facilitating the identification of either perpetrators or victims of crimes.
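For readers unfamiliar with how this kind of matching generally works, the following is a minimal, purely illustrative sketch in Python. It shows the basic technique of comparing numerical face ‘embeddings’ and linking matches back to their source pages. It is not Clearview AI’s actual system: the function names, similarity threshold, and data are invented for illustration, and real services use proprietary face-embedding models rather than the random vectors used here as stand-ins.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_probe(probe: np.ndarray,
                gallery: dict,
                threshold: float = 0.6) -> list:
    """Return (source_url, score) pairs from the gallery whose similarity to the
    probe embedding exceeds the threshold, sorted from most to least similar.
    Gallery keys are the source pages the scraped images were taken from."""
    scores = [(url, cosine_similarity(probe, emb)) for url, emb in gallery.items()]
    hits = [s for s in scores if s[1] >= threshold]
    return sorted(hits, key=lambda s: s[1], reverse=True)


# Hypothetical usage: in a real system the embeddings would come from a
# face-recognition model applied to scraped images; random vectors stand in here.
rng = np.random.default_rng(0)
gallery = {f"https://example.com/profile/{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["https://example.com/profile/3"] + rng.normal(scale=0.05, size=128)
print(match_probe(probe, gallery))
```

The point of the sketch is simply that a match is a similarity score against previously scraped and indexed images, not a definitive identification – which is why questions about error rates, bias, and the provenance of the underlying database matter so much in the investigations discussed below.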

This post addresses a number of different issues raised by the Clearview AI controversy, framed around the two different sets of privacy investigations. The post concludes with additional comments about transparency and accountability.

1. Clearview AI & PIPEDA

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) applies to the collection, use and disclosure of personal information by private sector organizations engaged in commercial activities. Although Clearview AI is a U.S. company, PIPEDA will still apply if there is a sufficient nexus to Canada. In this case, the service clearly captures data about Canadians, and the facial recognition services are marketed to Canadian law enforcement agencies. This should be enough of a connection.

The federal Privacy Commissioner is joined in his investigation by the Commissioners of Quebec, B.C. and Alberta. Each of these provinces has its own private sector data protection laws that apply to organizations that collect, use and disclose personal information within the borders of their respective province. The joint investigation signals the positive level of collaboration and co-operation that exists between privacy commissioners in Canada. However, as I explain in an earlier post, the relevant laws are structured so that only one statute applies to a particular set of facts. This joint investigation may raise important jurisdictional questions similar to those raised in the Facebook/Cambridge Analytica joint investigation and that were not satisfactorily resolved in that case. It is a minor issue, but nonetheless one that is relevant and interesting from a privacy governance perspective.

The federal Commissioner’s investigation will focus on whether Clearview AI complied with PIPEDA when it collected, used and disclosed the personal information which populates its massive database. Clearview AI’s position on the legality of its actions is clearly based on U.S. law. It states on its website that: “Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts.” In the U.S., there is much less in the way of privacy protection for information in ‘public’ space. In Canada, however, the law is different. Although there is an exception in PIPEDA (and in comparable provincial private sector laws) to the requirement of consent for the collection, use or disclosure of “publicly available information”, this exception is cast in narrow terms. It is certainly not broad enough to encompass information shared by individuals through social media. Interestingly, in hearings into PIPEDA reform, the House of Commons ETHI Committee at one point seemed swayed by industry arguments that PIPEDA should be amended to include websites and social media within the exception for “publicly available personal information”. In an earlier post, I argued that this was a dangerous direction in which to head, and the Clearview AI controversy seems to confirm this. Sharing photographs online for the purposes of social interaction should not be taken as consent to use those images in commercial facial recognition technologies. What is more, the law should not be amended to deem it to be so.

To the extent, then, that the database contains personal information of Canadians that was collected without their knowledge or consent, the conclusion will likely be that there has been a breach of PIPEDA. The further use and disclosure of personal information without consent will also amount to a breach. An appropriate remedy would include ordering Clearview AI to remove all personal information of Canadians that was collected without consent from its database. Unfortunately, the federal Commissioner does not have order-making powers. If the investigation finds a breach of PIPEDA, it will still be necessary to go to Federal Court to ask that court to hold its own hearing, reach its own conclusions, and make an order. This is what is currently taking place in relation to the Facebook/Cambridge Analytica investigation, and it makes somewhat of a mockery of our privacy laws. Stronger enforcement powers are on the agenda for legislative reform of PIPEDA, and it is to be hoped that something will be done about this before too long.


2. The Privacy Act investigation

The federal Privacy Commissioner has also launched an investigation into the RCMP’s now admitted use of Clearview AI technology. The results of this investigation should be interesting.

The federal Privacy Act was drafted for an era in which government institutions generally collected the information they needed and used directly from individuals. Governments, in providing all manner of services, would compile significant amounts of data, and public sector privacy laws set the rules for governance of this data. These laws were not written for our emerging context in which government institutions increasingly rely on data analytics and data-fuelled AI services provided by the private sector. In the Clearview AI situation, it is not the RCMP that has collected a massive database of images for facial recognition. Nor has the RCMP contracted with a private sector company to build this service for it. Instead, it is using Clearview AI’s services to make presumably ad hoc inquiries, seeking identity information in specific instances. It is not clear whether or how the federal Privacy Act will apply in this context. If the focus is on the RCMP’s ‘collection’ and ‘use’ of personal information, it is arguable that this is confined to the details of each separate query, and not to the use of facial recognition on a large scale. The Privacy Act might simply not be up to addressing how government institutions should interact with these data-fuelled private sector services.

The Privacy Act is, in fact, out of date and clearly acknowledged to be so. The Department of Justice has been working on reforms and has attempted some initial consultation. But the Privacy Act has not received the same level of public and media attention as has PIPEDA. And while we might see reform of PIPEDA in the not too distant future, reform of the Privacy Act may not make it onto the legislative agenda of a minority government. If this is the case, it will leave us with another big governance gap for the digital age.

If the Privacy Act is not to be reformed any time soon, it will be very interesting to see what the Privacy Commissioner’s investigation reveals. The interpretation of section 6(2) of the Privacy Act could be of particular importance. It provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” In 2018 the Supreme Court of Canada issued a rather interesting decision in Ewert v. Canada, which I wrote about here. The case involved a Métis man’s challenge to the use of actuarial risk-assessment tests by Correctional Services Canada to make decisions related to his incarceration. He argued that the tests were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12). The Corrections and Conditional Release Act contained language very similar to s. 6(2) of the Privacy Act. The Supreme Court of Canada ruled that this language placed an onus on the CSC to ensure that all of the data it relied upon in its decision-making about inmates met that standard – including the data generated from the use of the assessment tools. This ruling may have very interesting implications not just for the investigation into the RCMP’s use of Clearview’s technology, but also for public sector use of private sector data-fueled analytics and AI where those tools are based upon personal data. The issue is whether, in this case, the RCMP is responsible for ensuring the accuracy and reliability of the data generated by a private sector AI system on which they rely.

One final note on the use of Clearview AI’s services by the RCMP – and by other police services in Canada. A look at Clearview AI’s website reveals its own defensiveness about its technologies, which it describes as helping “to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.” Police service representatives have also responded defensively to media inquiries, and their admissions of use come with very few details. If nothing else, this situation highlights the crucial importance of transparency, oversight and accountability in relation to these technologies that have privacy and human rights implications. Transparency can help to identify and examine concerns, and to ensure that the technologies are accurate, reliable and free from bias. Policies need to be put in place to reflect clear decisions about what crimes or circumstances justify the use of these technologies (and which ones do not). Policies should specify who is authorized to make the decision to use this technology and according to what criteria. There should be record-keeping and an audit trail. Keep in mind that technologies of this kind, if unsupervised, can be used to identify, stalk or harass strangers. It is not hard to imagine someone using this technology to identify a person seen with an ex-spouse, or even to identify an attractive woman seen at a bar. They can also be used to identify peaceful protestors. The potential for misuse is enormous. Transparency, oversight and accountability are essential if these technologies are to be used responsibly. The sheepish and vague admissions of use of Clearview AI technology by Canadian police services are a stark reminder that there is much governance work to be done around such technologies in Canada, even beyond privacy law issues.

