Displaying items by tag: facial recognition
Monday, 24 March 2025 06:50
Routine Retail Facial Recognition Systems an Emerging Privacy No-Go Zone in Canada?

The Commission d’accès à l’information du Québec (CAI) has released a decision regarding a pilot project to use facial recognition technology (FRT) in Métro stores in Quebec. When this is paired with a 2023 investigation report of the BC Privacy Commissioner regarding the use of FRT in Canadian Tire stores in that province, there seems to be an emerging consensus around how privacy law will apply to the use of FRT in the retail sector in Canada.

Métro had planned to establish a biometric database to enable the use of FRT at certain of its stores operating under the Métro, Jean Coutu and Super C brands, on a pilot basis. The objective of the system was to reduce shoplifting and fraud. The system would function in conjunction with video surveillance cameras installed at the entrances and exits to the stores. The reference database would consist of images of individuals over the age of majority who had been linked to security incidents involving fraud or shoplifting. Images of all shoppers entering the stores would be captured on the video surveillance cameras and then converted to biometric face prints for matching with the face prints in the reference database. The CAI initiated an investigation after receiving notice from Métro of the creation of the biometric database. The company agreed to put its launch of the project on hold pending the results of the investigation.

The Quebec case involved the application of Quebec’s Act respecting the protection of personal information in the private sector (PPIPS) as well as its Act to establish a legal framework for information technology (LFIT). The LFIT requires an organization that is planning to create a database of “biometric characteristics and measurements” to disclose this fact to the CAI no later than 60 days before it is to be used. The CAI can impose requirements and can also order the use suspended or the database destroyed if it is not in compliance with any such orders or if it “otherwise constitutes an invasion of privacy” (LFIT art. 45).

Métro argued that the LFIT required individual consent only for the use of a biometric database to ‘confirm or verify’ the identity of an individual (LFIT art. 44). It maintained that its proposed use was different – the goal was not to confirm or verify the identities of shoppers; rather, it was to identify ‘high risk’ shoppers based on matches with the reference database. The CAI rejected this approach, noting the sensitivity of biometric data. Given the quasi-constitutional status of Canadian data protection laws, the CAI found that a ‘large and liberal’ approach to interpretation of the law was required.

The CAI found that Métro was conflating the separate concepts of “verification” and “confirmation” of identity. In this case, the biometric face prints in the probe images would be used to search for a match in the “persons of interest” database. Even if the goal of generating the probe images was not to determine the precise identity of all customers – or to add those face prints to the database – the underlying goal was to verify one attribute of the identity of shoppers – i.e., whether there was a match with the persons of interest database. This brought the system within the scope of the LFIT.
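To make the two-database design at issue concrete, here is a minimal, illustrative sketch of a watchlist-style FRT pipeline: a small “persons of interest” reference database, plus a probe face print generated for every person who walks in. All names are hypothetical, and the toy embedding function stands in for a real face-recognition model; this is a sketch of the general technique, not Métro’s actual system.

```python
# Minimal sketch of a watchlist-style FRT pipeline (illustrative only).
# extract_faceprint() is a toy stand-in for a real face-embedding model;
# everything here is hypothetical and simplified.
import hashlib
import numpy as np

def extract_faceprint(image_id: str) -> np.ndarray:
    """Toy embedding: deterministic 128-dim unit vector per image."""
    seed = int.from_bytes(hashlib.sha256(image_id.encode()).digest()[:4], "big")
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

# Reference database: face prints of people linked to past incidents.
persons_of_interest = {
    "incident-042": extract_faceprint("incident-042"),
    "incident-107": extract_faceprint("incident-107"),
}

MATCH_THRESHOLD = 0.6  # similarity cut-off: trades false matches against misses

def screen_shopper(probe_image_id: str):
    """Compare one entrance-camera face print against the whole watchlist."""
    probe = extract_faceprint(probe_image_id)
    best_id, best_score = None, -1.0
    for pid, ref in persons_of_interest.items():
        score = float(probe @ ref)  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = pid, score
    return best_id if best_score >= MATCH_THRESHOLD else None

# Every shopper is screened on entry. The CAI's point: each probe is a fresh
# collection of sensitive biometric data, whatever the outcome of the match.
for shopper in ["cam-entrance-0001", "cam-entrance-0002"]:
    print(shopper, "->", screen_shopper(shopper))
```

Even in this toy version, the screening of every entrant and the arbitrariness of the threshold show where false matches against innocent shoppers come from – one reason regulators treat biometric data as so sensitive.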
The additional information in the persons of interest database, which could include the police report number, a description of the past incident, and related personal information, would facilitate the further identification of any matches. Métro also argued that the validation or confirmation of identity did not happen in one single process and that therefore art. 44 of the LFIT was not engaged. The CAI dismissed what it described as the compartmentalisation of the process. Instead, the law required a consideration of the combined effect of all the steps in the operation of the system.

The company also argued that it had obtained the consent required under art. 12 of the PPIPS. It maintained that the video cameras captured shoppers’ images with their consent, as there was notice of the use of the cameras and the shoppers continued into the stores. It argued that the purposes for which it used the biometric data were consistent with the purposes for which the security cameras were installed, making it a permissible secondary use under art. 12(1) of PPIPS. The CAI rejected this argument, noting that it was not a question of a single collection and a related secondary use. Rather, the generation of biometric face prints from images captured on video is an independent collection of personal data. That collection must comply with data protection requirements and cannot be treated as a secondary use of already collected data.

The system proposed by Métro would be used on any person entering the designated stores, and as such it was an entry requirement. Individuals would have no ability to opt out and still shop, and there were no alternatives to participation in the FRT scheme. Not only was consent not possible for the general population entering the stores, those whose images became part of the persons of interest database would also have no choice in the matter.

Métro argued that its obligation to protect its employees and the public outweighed the privacy interests of its customers. The CAI rejected this argument, noting that this was not the test set out in the LFIT, which asked instead whether the database of biometric characteristics “otherwise constitutes an invasion of privacy” (art. 45). The CAI was of the view that to create a database of biometric characteristics, and to match these characteristics against face prints generated from data captured from the public without their consent in circumstances where the law required consent, amounted to a significant infringement of privacy rights. The Commission emphasized again the highly sensitive character of the personal data and issued an order prohibiting the implementation of the proposed system.

The December 2023 BC investigation report was based on that province’s Personal Information Protection Act. It followed a commissioner-initiated investigation into the use by several Canadian Tire stores in BC of FRT systems integrated with video surveillance cameras. As in the Métro pilot, biometric face prints were generated from the surveillance footage and matched against a persons-of-interest database. The stated goals of the systems were similar as well – to reduce shoplifting and enhance the security of the stores. As was the case in Quebec, the BC Commissioner found that the generation of biometric face prints was a new collection of personal information that required express consent. The Commissioner also found that the stores had not provided adequate notice of collection, making the issue of consent moot.
However, he went on to find that even if there had been proper notice, express consent had not been obtained, and consent could not be implied in the circumstances. The collection of biometric face print data of everyone entering the stores in question was not for a purpose that a reasonable person would consider appropriate, given the acute sensitivity of the data collected and the risks to the individual that might flow from its misuse, inaccuracy, or from data breaches.

Interestingly, in BC, the four stores under investigation removed their FRT systems soon after receiving the notice of investigation. During the investigation, the Commissioner found little evidence to support the need for the systems, with store personnel admitting that the systems added little to their normal security functions. He chastised the retailers for failing both to conduct privacy impact assessments prior to adoption and to put in place measures to evaluate the effectiveness and performance of the systems.

An important difference between the two cases relates to the ability of the CAI to be proactive. In Quebec, the LFIT requires notice to be provided to the Commissioner of the creation of a biometric database in advance of its implementation. This enabled it to rule on the appropriateness of the system before privacy was adversely impacted on a significant scale. By contrast, the systems in BC were in operation for three years before sufficient awareness surfaced to prompt an investigation. Now that powerful biometric technologies are widely available for retail and other uses, governments should be thinking seriously about reforming private sector privacy laws to provide for advance notice requirements – at the very least, for biometric systems.

Following both the Quebec and the BC cases, it is difficult to see how broad-based FRT systems integrated with store security cameras could be deployed in a manner consistent with data protection laws – at least under current shopping business models. This suggests that such uses may be emerging as a de facto no-go zone in Canada. Retailers may argue that this reflects a problem with the law, to the extent that it interferes with their business security needs. Yet if privacy is to mean anything, there must be reasonable limits on the collection of personal data – particularly sensitive data. Just because something can be done does not mean it should be. Given the rapid advance of technology, we should be carefully attuned to this. Being FRT face-printed each time one goes to the grocery store for a carton of milk may simply be an unacceptably disproportionate response to an admittedly real problem. It is a use of technology that places burdens and risks on ordinary individuals who have not earned suspicion, and who may have few other choices for accessing basic necessities.
Published in
Privacy
Monday, 17 March 2025 06:16
Clearview AI and Compliance with Canadian Privacy Laws

The Clearview AI saga has a new Canadian instalment. In December 2024, the British Columbia Supreme Court rendered a decision on Clearview AI’s application for judicial review of an order issued by the BC Privacy Commissioner. This post explores that decision and some of its implications. The first part sets the context, the second discusses the judicial review decision, and the third looks at the ramifications of the larger (and ongoing) legal battle for Canadian privacy law.

Context

Late in 2021, the Privacy Commissioners of BC, Alberta, Quebec and Canada issued a joint report on their investigation into Clearview AI (my post on this order is here). Clearview AI, a US-based company, had created a massive facial recognition technology (FRT) database from images scraped from the internet, which it marketed to law enforcement agencies around the world. The investigation was launched after a story broke in the New York Times about Clearview’s activities. Although Canadian police services initially denied using Clearview AI, the RCMP later admitted that it had purchased two licences. Other Canadian police services made use of promotional free accounts.

The joint investigation found that Clearview AI had breached the private sector data protection laws of the four investigating jurisdictions by collecting and using sensitive personal information without consent, and by doing so for purposes that a reasonable person would not consider appropriate in the circumstances. The practices also violated Quebec’s Act to establish a legal framework for information technology. Clearview AI disagreed with these conclusions. It indicated that it would temporarily cease its operations in Canada but maintained that it was entitled to scrape content from the public web.

After the company failed to respond to the recommendations in the joint report, the Commissioners of Quebec, BC and Alberta issued orders against it. These orders required Clearview AI to cease offering its services in their jurisdictions, to make best efforts to stop collecting the personal information of those within their respective provincial boundaries, and to delete personal information in its databases that had been improperly collected from those within their boundaries. No order was issued by the federal Commissioner, who does not have order-making powers under the Personal Information Protection and Electronic Documents Act (PIPEDA). He could have applied to the Federal Court for an order but chose not to do so (more on that in Part 3 of this post).

Clearview AI declined to comply with the provincial orders, other than to note that it had already temporarily ceased operations in Canada. It then applied for judicial review of the orders in each of the three provinces. To date, only the challenge to the BC order has been heard and decided. In the BC application, Clearview argued that the Commissioner’s decision was unreasonable. Specifically, it argued that BC’s Personal Information Protection Act (PIPA) did not apply to Clearview AI, that the information it scraped was exempt from consent requirements because it was “publicly available information”, and that the Commissioner’s interpretation of purposes that a reasonable person would consider appropriate in the circumstances was unreasonable and failed to consider Charter values. In his December 2024 decision, Justice Shergill of the BC Supreme Court disagreed, upholding the Commissioner’s order.
The BC Supreme Court Decision on Judicial Review

Justice Shergill confirmed that BC’s PIPA applies to Clearview AI’s activities, notwithstanding the fact that Clearview AI is a US-based company. He noted that applying the ‘real and substantial connection’ test – which considers the nature and extent of connections between a party’s activities and the jurisdiction in which proceedings are initiated – leads to that conclusion. There was evidence that Clearview AI’s database had been marketed to and used by police services in BC, as well as by the RCMP, which polices many parts of the province. Further, Justice Shergill noted that Clearview’s data scraping practices were carried out worldwide and captured data about BC individuals including, in all likelihood, data from websites hosted in BC. Interestingly, he also found that Clearview’s scraping of images from social media sites such as Facebook, YouTube and Instagram also created a sufficient connection, as these sites “undoubtedly have hundreds of thousands if not millions of users in British Columbia” (at para 91).

In reaching his conclusion, Justice Shergill emphasized “the important role that privacy plays in the preservation of our societal values, the ‘quasi-constitutional’ status afforded to privacy legislation, and the increasing significance of privacy laws as technology advances” (at para 95). He also found that there was nothing unfair about applying BC’s PIPA to Clearview AI, as the company “chose to enter British Columbia and market its product to local law enforcement agencies. It also chooses to scrape data from the Internet which involves personal information of people in British Columbia” (at para 107).

Sections 12(1)(e), 15(1)(e) and 18(1)(e) of PIPA provide exceptions to the requirement of knowledge and consent for the collection, use and disclosure of personal information where “the personal information is available to the public” as set out in regulations. The PIPA Regulations include “printed or electronic publications, including a magazine, book, or newspaper in printed or electronic form.” Similar exceptions are found in the federal PIPEDA and in Alberta’s Personal Information Protection Act. Clearview AI had argued that public internet websites, including social media sites, fell within the category of electronic publications, and that their scraping was thus exempt from consent requirements. The Commissioners disagreed, and Clearview AI challenged this interpretation as unreasonable.

Justice Shergill found that the Commissioners’ conclusion that social media websites fell outside the exception for publicly available information was reasonable. The BC Commissioner was entitled to read the list in the PIPA Regulations as a “narrow set of sources” (at para 160). Justice Shergill reviewed the reasoning in the joint report for why social media sites should be treated differently from the other types of publications mentioned in the exception. These included the fact that social media sites are dynamic rather than static, and that individuals exercise a different level of control over their personal information on social media platforms than on news or other such sites. Although the legislation may require a balancing of privacy rights with private sector interests, Justice Shergill found that it was reasonable for the Commissioner to conclude that privacy rights should be given precedence over commercial interests in the overall context of the legislation.
Referencing the Supreme Court of Canada’s decision in Lavigne, Justice Shergill noted that “it is the protection of individual privacy that supports the quasi-constitutional status of privacy legislation, not the right of the organization to collect and use personal information” (at para 174). An individual’s ability to control what happens to their personal information is fundamental to the autonomy and dignity protected by privacy rights, and “it is thus reasonable to conclude that any exception to these important rights should be interpreted narrowly” (at para 175).

Clearview AI argued that posting photos to social media sites reflected an individual’s autonomous choice to surrender the information to the public domain. Justice Shergill preferred the Commissioner’s interpretation, which considered the sensitivity of the biometric information, and the impact its collection and use could have on individuals. He referenced the Supreme Court of Canada’s decision in R. v. Bykovets (my post on this case is here), which emphasized that individuals “may choose to divulge certain information for a limited purpose, or to a limited class of persons, and nonetheless retain a reasonable expectation of privacy” (at para 162, citing para 46 of Bykovets).

Clearview AI also argued that the Commissioner was unreasonable in not taking into account Charter values in his interpretation of PIPA. In particular, the company was of the view that the freedom of expression, which guarantees the right both to communicate and to receive information, extended to the ability to access and use publicly available information without restriction. Although Justice Shergill found that the Commissioner could have been more direct in his consideration of Charter values, his decision was still not unreasonable on this point. The Commissioner did not engage with the Charter values issues at length because he did not consider the law to be ambiguous – Charter values-based interpretation comes into play in helping to resolve ambiguities in the law. As Justice Shergill noted, “It is difficult to understand how Clearview’s s. 2(b) Charter rights are infringed through an interpretation of ‘publicly available’ which excludes it from collecting personal information from social media websites without consent” (at para 197).

Like its counterpart legislation in Alberta and at the federal level, BC’s PIPA contains a section that articulates the overarching principle that any collection, use or disclosure of personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. This means, among other things, that even if the exception to consent had applied in this case, the collection and use of the scraped personal information would still have had to be for a reasonable purpose. The Commissioners had found that, overall, Clearview’s scraping of vast quantities of sensitive personal information from the internet to build a massive facial recognition database was not a purpose that a reasonable person would consider appropriate in the circumstances. Clearview AI preferred to characterize its purpose as providing a service to the benefit of law enforcement and national security. In their joint report, the Commissioners had rejected this characterization, noting that it did not justify the massive, widespread scraping of personal information by a private sector company.
Further, the Commissioners had noted that such an activity could have negative consequences for individuals, including cybersecurity risks and the risk that errors could lead to reputational harm. They also observed that the activity contributed to “broad-based harm inflicted on all members of society, who find themselves under continual mass surveillance by Clearview based on its indiscriminate scraping and processing of their facial images” (at para 253). Justice Shergill found that the record supported these conclusions, and that the Commissioners’ interpretation of reasonable purposes was itself reasonable.

Clearview AI also argued that the Commissioner’s order was “unnecessary, unenforceable or overbroad”, and should thus be quashed (at para 258). Justice Shergill accepted the Commissioner’s argument that the order was necessary because Clearview had only temporarily suspended its services in Canada, leaving open the possibility that it would offer its services to Canadian law enforcement agencies in the future. He also accepted the Commissioner’s argument that compliance with the order was possible, noting that Clearview had accepted certain steps for ceasing collection and removing images in its settlement of an Illinois class action lawsuit. The order required the company to use “best efforts”, in an implicit acknowledgement that a perfect solution was likely impossible. Clearview argued that a “best efforts” standard was too vague to be enforceable; Justice Shergill disagreed, noting that courts often use “best efforts” language. Further, and quite interestingly, Justice Shergill noted that “if it is indeed impossible for Clearview to sufficiently identify personal information sourced from people in British Columbia, then this is a situation of Clearview’s own making” (at para 279). He noted that “[i]t is not an answer for Clearview to say that because the data was indiscriminately collected, any order requiring it to cease collecting data of persons present in a particular jurisdiction is unenforceable” (at para 279).

Implications

This is a significant decision, as it upholds interpretations of important provisions of BC’s PIPA. These provisions are similar to ones in Alberta’s PIPA and in the federal PIPEDA. However, it is far from the end of the Clearview AI saga, and there is much to continue to watch.

In the first place, the BC Supreme Court decision is already under appeal to the BC Court of Appeal. If the Court of Appeal upholds this decision, it will be a major victory for the BC Commissioner. Yet, either way, there is likely to be a further application for leave to appeal to the Supreme Court of Canada. It may be years before the issue is finally resolved. In that time, data protection laws in BC, Alberta and at the federal level might well be reformed. It will therefore also be important to examine any new bills to see whether the provisions at issue in this case are addressed in any way or left as is.

In the meantime, Clearview AI has also filed for judicial review of the orders of the Quebec and Alberta commissioners, and these applications are moving forward. All three orders (BC, Alberta and Quebec) are based on the same joint findings. A decision by either or both of the Quebec and Alberta superior courts that the orders are unreasonable could deal a significant blow to the united front that Canada’s commissioners are increasingly showing on privacy issues that affect all Canadians. There is therefore a great deal riding on the outcomes of these applications.
In any event, regardless of the outcomes, expect applications for leave to appeal to the Supreme Court of Canada. Leave to appeal is less likely to be granted if all three provincial courts of appeal take a similar approach to the issues. It is at this point impossible to predict how this litigation will play out.

It is notable that the Privacy Commissioner of Canada, who has no order-making powers under PIPEDA but who can apply to the Federal Court for an order, declined to do so. Under PIPEDA, such an application requires a hearing de novo by the Federal Court – this means that, unlike the judicial review proceedings in the other provinces, the Federal Court need not show any deference to the federal Commissioner’s findings. Instead, the Court would proceed to a determination of the issues after hearing and considering the parties’ evidence and argument. One might wonder whether the rather bruising decision of the Federal Court in Privacy Commissioner v. Facebook (which was subsequently overturned by the Federal Court of Appeal) influenced the Commissioner not to roll the dice on seeking an order with so much at stake.

It is sobering that a hearing de novo before the Federal Court could upset the apple cart of the Commissioners’ attempts to co-ordinate efforts, reduce duplication and harmonize interpretation. Yet it also means that if this litigation saga ends with the conclusion that the orders are reasonable and enforceable, BC, Alberta and Quebec residents will have received results in the form of orders requiring Clearview to delete images and to geo-fence any future collection of images to protect those within those provinces (orders which will still need to be made enforceable in the US) – while Canadians elsewhere in the country will not. Canadians will need the long-promised but as yet undelivered reform of PIPEDA to give the federal Commissioner the power to issue orders – ones that would be subject to judicial review with appropriate deference, rather than second-guessed by the Personal Information and Data Protection Tribunal proposed in Bill C-27.
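It is worth pausing on what “geo-fencing” future collection might look like in practice. The sketch below is purely illustrative – the field names and heuristics are hypothetical – but it shows why such an order is so hard to comply with after indiscriminate collection: jurisdiction has to be inferred from unreliable metadata.

```python
# Illustrative sketch of geo-fenced collection: filter scraped records by
# inferred jurisdiction before they enter the database. All field names and
# heuristics here are hypothetical assumptions.
EXCLUDED_REGIONS = {"BC", "AB", "QC"}

def infer_region(record: dict):
    """Best-effort jurisdiction guess from scraped metadata (toy heuristic)."""
    url = record.get("source_url", "")
    if url.endswith(".ca") or "canada" in url:
        return record.get("declared_region")  # e.g. parsed from a profile page
    return None  # unknown -- the common case for indiscriminate scraping

def admit(record: dict) -> bool:
    """Only keep records that cannot be placed in an excluded province."""
    return infer_region(record) not in EXCLUDED_REGIONS

records = [
    {"source_url": "https://example.ca/p/1", "declared_region": "BC"},
    {"source_url": "https://example.com/p/2"},
]
print([admit(r) for r in records])  # [False, True] -- unknowns slip through
```

The second record illustrates the problem Justice Shergill put at Clearview’s feet: where provenance was never recorded at collection, geo-fencing can only ever be a “best efforts” exercise.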
Concluding thoughts

Despite rulings from privacy and data protection commissioners around the world that Clearview AI is in breach of their respective laws, and notwithstanding two class action lawsuits in the US under the Illinois Biometric Information Privacy Act, the company has continued to grow its massive FRT database. At the time of the Canadian investigation, the database was said to hold 3 billion images. Current reports place this number at over 50 billion. Given the company’s resistance to complying with Canadian law, this raises the question of what it will take to motivate compliance by resistant organizations. As the proposed amendments to Canada’s federal private sector privacy laws wither on the vine after neglect and mismanagement in their journey through Parliament, this becomes a pressing and important question.
Published in
Privacy
Monday, 17 October 2022 04:11
Procedural fairness and AI - Some lessons from a recent decision
Artificial intelligence (AI) is already being used to assist government decision-making, although we have little case law that explores issues of procedural fairness when it comes to automated decision systems. This is why a recent decision of the Federal Court is interesting. In Barre v. Canada (Citizenship and Immigration), two women sought judicial review of a decision of the Refugee Protection Division (RPD) which had stripped them of their refugee status. They raised procedural fairness issues regarding the possible reliance upon an AI tool – in this case, facial recognition technology (FRT). The case allows us to consider some procedural fairness guideposts that may be useful where evidence derived from AI-enabled tools is advanced.

The Decision of the Refugee Protection Division

The applicants, Ms Barre and Ms Hosh, had been granted refugee status after advancing claims related to their fear of sectarian and gender-based violence in their native Somalia. The Minister of Public Safety and Emergency Preparedness (the Minister) later applied under s. 109 of the Immigration and Refugee Protection Act to have that decision vacated on the basis that it was “obtained as a result of directly or indirectly misrepresenting or withholding material facts relating to a relevant matter”.

The Minister had provided the RPD with photos that compared Ms Barre and Ms Hosh (the applicants) with two Kenyan women who had been admitted to Canada on student visas shortly before Ms Barre and Ms Hosh filed their refugee claims (the claims were accepted in 2017). The applicants argued that the photo comparisons relied upon by the Minister had been made using Clearview AI’s facial recognition service, built upon images scraped from social media and other public websites. The Minister objected to arguments and evidence about Clearview AI, maintaining that there was no proof that this service had been used. Clearview AI had ceased providing services in Canada on 6 July 2020, and the RPD accepted the Minister’s argument that it had not been used, finding that “[a]n App that is banned to operate in Canada would certainly not be used by a law enforcement agency such as the CBSA” (at para 7). The Minister had also argued that it did not have to disclose how it arrived at the photo comparisons because of s. 22 of the Privacy Act, and the RPD accepted this assertion.

The photo comparisons were given significant weight in the RPD’s decision to overturn the applicants’ refugee status. The RPD found that there were “great similarities” between the photos of the Kenyan students and the applicants, and concluded that they were the same persons. The RPD also considered notes in the Global Case Management System to the effect that the Kenyan students did not attend classes at the school where they were enrolled. In addition, the CBSA submitted affidavits indicating that there was no evidence that the applicants had entered Canada under their own names. The RPD concluded that the applicants were Kenyan citizens who had misrepresented their identity in the refugee proceedings. It found that these factual misrepresentations called into question the credibility of their allegations of persecution. It also found that, since they were Kenyan, they had not advanced claims against their country of nationality in the refugee proceedings, as required by law. The applicants sought judicial review of the decision to revoke their refugee status, arguing that it was unreasonable and breached their rights to procedural fairness.
Judicial Review

Justice Go of the Federal Court ruled that the decision was unreasonable for a number of reasons. A first error was allowing the introduction of the photo comparisons into evidence “without requiring the Minister to disclose the methodology used in procuring the evidence” (at para 31).

The Minister had invoked s. 22 of the Privacy Act, but Justice Go noted that there were many flaws with the Minister’s reliance on s. 22. Section 22 is an exception to an individual’s right of access to their personal information. Justice Go noted that the applicants were not seeking access to their personal information; rather, they were making a procedural fairness argument about the photo comparisons relied upon by the Minister and sought information about how the comparisons had been made. Section 22(2), which was specifically relied upon by the Minister, allows a request for disclosure of personal information to be refused on the basis that it was “obtained or prepared by the Royal Canadian Mounted Police while performing policing services for a province or municipality…”, and this circumstance simply was not relevant. Section 22(1)(b), which was not specifically argued by the Minister, allows for a refusal to disclose personal information where to do so “could reasonably be expected to be injurious to the enforcement of any law of Canada or a province or the conduct of lawful investigations…” Justice Go noted that the case law establishes that a court will not uphold such a refusal on the basis that, because there is an investigation, harm from disclosure can be presumed. Instead, the head of an institution must demonstrate a “nexus between the requested disclosure and a reasonable expectation of probable harm” (at para 35, citing Canadian Association of Elizabeth Fry Societies v. Canada). Exceptions to access rights must be given a narrow interpretation, and the burden of demonstrating that a refusal to disclose is justifiable lies with the head of the government institution.

Justice Go also noted that the Privacy Act does not operate “so as to limit access to information to which an individual might be entitled as a result of other legal rules or principles” (at para 42) such as, in this case, the principles of procedural fairness. Justice Go found that the RPD erred by not clarifying what ‘personal information’ the Minister sought to protect, and by not assessing the basis for the Minister’s s. 22 arguments. She also noted that the RPD had accepted the Minister’s bald assertions that the CBSA did not rely on Clearview AI. Even if the company had ceased offering its services in Canada by July 6, 2020, there was no evidence regarding the date on which the photo comparisons had been made. Justice Go noted that the RPD failed to consider submissions by the applicants regarding findings by the privacy commissioners of Canada, BC, Alberta and Quebec regarding Clearview AI and its activities, as well as on the “danger of relying on facial recognition software” (at para 46).

The Minister argued that even if its s. 22 arguments were misguided, it could still rely upon evidentiary privileges to protect the details of its investigation. Justice Go noted that this was irrelevant in assessing the reasonableness of the RPD’s decision, since such arguments had not been made before or considered by the RPD. She also observed that when parties seek to exempt information from disclosure in a hearing, they are often required at least to provide it to the decision-maker to assess.
In this case, the RPD did not ask for or assess information on how the investigation had been conducted before deciding that information about it should not be disclosed. She noted that: “The RPD’s swift acceptance of the Minister’s exemption request, in the absence of a cogent explanation for why the information is protected from disclosure, appears to be a departure from its general practice” (at para 55).

Justice Go also observed that information about how the photo comparisons were made could well have been relevant to the issues to be determined by the RPD. If the comparisons were generated through the use of FRT – whether using Clearview AI or the services of another company – “it may call into question the reliability of the Kenyan students’ photos as representing the Applicants, two women of colour who are more likely to be misidentified by facial recognition software than their white cohorts as noted by the studies submitted by the Applicants” (at para 56). No matter how the comparisons were made – whether by a person or by FRT – some evidence should have been provided to explain the technique. Justice Go found it unreasonable for the RPD to conclude that the evidence was reliable simply on the basis of the Minister’s assertions.

Justice Go also found that the RPD’s conclusion that the applicants were, in fact, the two Kenyan women was unreasonable. Among other things, she found that the decision “failed to provide adequate reasons for the RPD’s conclusion that the two Applicants and the two Kenyan students were the same persons based on the photo comparisons” (at para 69). She noted that although the RPD referenced ‘great similarities’ between the women in the two sets of photographs, there were also some marked dissimilarities which were not addressed. There simply was no adequate explanation as to how the conclusion was reached that the applicants were the Kenyan students. The decision of the RPD was quashed and remitted to be reconsidered by a differently constituted panel of the RPD.

Ultimately, Justice Go sends a clear message that the Minister cannot simply advance photo comparison evidence without providing an explanation of how that evidence was derived. At the very least, then, there is an obligation to indicate whether an AI technology was used in the decision-making process. Even if there is some legal basis for shielding the details of the Minister’s methods of investigation, there may still need to be some disclosure to the decision-maker regarding the methods used. Justice Go’s decision is also a rebuke of the RPD, which accepted the Minister’s evidence on faith and asked no questions about its methodology or probity. In her decision, Justice Go takes serious note of concerns about accuracy and bias in the use of FRT, particularly with racialized individuals, and it is clear that these concerns heighten the need for transparency. The decision is important for setting some basic standards to be met when it comes to reviewing evidence that may have been derived using AI. It is also a sobering reminder that those checks and balances failed at first instance – and in a high-stakes context.
Published in
Privacy
Monday, 02 March 2020 08:41
Privacy investigations into Clearview AI in Canada: Old laws and new tech
Clearview AI and its controversial facial recognition technology have been making headlines for weeks now. In Canada, the company is under joint investigation by federal and provincial privacy commissioners. The RCMP is being investigated by the federal Privacy Commissioner after having admitted to using Clearview AI. The Ontario privacy commissioner has expressed serious concerns about reports of Ontario police services adopting the technology. In the meantime, the company is dealing with a reported data breach in which hackers accessed its entire client list.

Clearview AI offers facial recognition technology to ‘law enforcement agencies.’ The term is not defined on its website, and at least one newspaper report suggests that it is defined broadly, with private security (for example, university campus police) able to obtain access. Clearview AI scrapes images from publicly accessible websites across the internet and compiles them in a massive database. When a client provides it with an image of a person, it uses facial recognition algorithms to match the individual in the image with images in its database. Images in the database are linked to their sources, which contain other identifying information (for example, they might link to a Facebook profile page). The use of the service is touted as speeding up all manner of investigations by facilitating the identification of either perpetrators or victims of crimes.
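In rough outline, the pipeline just described can be sketched in a few lines of code. This is purely illustrative – the names are hypothetical, and the toy embedding function stands in for a real face-recognition model – but it shows why a match is so revealing: each stored face print keeps a link back to its source page.

```python
# Illustrative sketch of a scrape-and-identify pipeline (hypothetical names;
# no real scraping or face model is used here).
import hashlib
import numpy as np

def extract_faceprint(image_bytes: bytes) -> np.ndarray:
    """Toy stand-in for a face-embedding model: deterministic unit vector."""
    seed = int.from_bytes(hashlib.sha256(image_bytes).digest()[:4], "big")
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

# "Scraped" index mapping face prints to source URLs (stands in for billions
# of rows harvested from the public web).
index = [
    (extract_faceprint(b"img-a"), "https://socialsite.example/profile/alice"),
    (extract_faceprint(b"img-b"), "https://news.example/gala-photos"),
]

def identify(query_image: bytes, threshold: float = 0.6) -> list:
    """Nearest-neighbour search over the index; returns matching source URLs."""
    q = extract_faceprint(query_image)
    return [url for fp, url in index if float(q @ fp) >= threshold]

# A client's query photo resolves straight back to source pages -- and to
# whatever identifying information those pages contain.
print(identify(b"img-a"))
```

Note that nothing in this loop asks whether the person in the query photo ever consented to being indexed – the collection and matching happen entirely without their knowledge, which is the consent problem discussed below.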
This post addresses a number of different issues raised by the Clearview AI controversy, framed around the two different sets of privacy investigations. The post concludes with additional comments about transparency and accountability.

1. Clearview AI & PIPEDA

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) applies to the collection, use and disclosure of personal information by private sector organizations engaged in commercial activities. Although Clearview AI is a U.S. company, PIPEDA will still apply if there is a sufficient nexus to Canada. In this case, the service clearly captures data about Canadians, and the facial recognition services are marketed to Canadian law enforcement agencies. This should be enough of a connection.

The federal Privacy Commissioner is joined in his investigation by the Commissioners of Quebec, B.C. and Alberta. Each of these provinces has its own private sector data protection law that applies to organizations that collect, use and disclose personal information within the borders of the respective province. The joint investigation signals the positive level of collaboration and co-operation that exists between privacy commissioners in Canada. However, as I explain in an earlier post, the relevant laws are structured so that only one statute applies to a particular set of facts. This joint investigation may raise important jurisdictional questions similar to those raised in the Facebook/Cambridge Analytica joint investigation, which were not satisfactorily resolved in that case. It is a minor issue, but nonetheless one that is relevant and interesting from a privacy governance perspective.

The federal Commissioner’s investigation will focus on whether Clearview AI complied with PIPEDA when it collected, used and disclosed the personal information which populates its massive database. Clearview AI’s position on the legality of its actions is clearly based on U.S. law. It states on its website that: “Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts.” In the U.S., there is much less in the way of privacy protection for information in ‘public’ space. In Canada, however, the law is different. Although there is an exception in PIPEDA (and in comparable provincial private sector laws) to the requirement of consent for the collection, use or disclosure of “publicly available information”, this exception is cast in narrow terms. It is certainly not broad enough to encompass information shared by individuals through social media.

Interestingly, in hearings into PIPEDA reform, the House of Commons ETHI Committee at one point seemed swayed by industry arguments that PIPEDA should be amended to include websites and social media within the exception for “publicly available personal information”. In an earlier post, I argued that this was a dangerous direction in which to head, and the Clearview AI controversy seems to confirm this. Sharing photographs online for the purposes of social interaction should not be taken as consent to use those images in commercial facial recognition technologies. What is more, the law should not be amended to deem it to be so.

To the extent, then, that the database contains personal information of Canadians that was collected without their knowledge or consent, the conclusion will likely be that there has been a breach of PIPEDA. The further use and disclosure of personal information without consent will also amount to a breach. An appropriate remedy would include ordering Clearview AI to remove from its database all personal information of Canadians that was collected without consent. Unfortunately, the federal Commissioner does not have order-making powers. If the investigation finds a breach of PIPEDA, it will still be necessary to go to the Federal Court to ask that court to hold its own hearing, reach its own conclusions, and make an order. This is what is currently taking place in relation to the Facebook/Cambridge Analytica investigation, and it makes somewhat of a mockery of our privacy laws. Stronger enforcement powers are on the agenda for legislative reform of PIPEDA, and it is to be hoped that something will be done about this before too long.
2. The Privacy Act investigation

The federal Privacy Commissioner has also launched an investigation into the RCMP’s now admitted use of Clearview AI technology. The results of this investigation should be interesting. The federal Privacy Act was drafted for an era in which government institutions generally collected the information they needed and used directly from individuals. Governments, in providing all manner of services, would compile significant amounts of data, and public sector privacy laws set the rules for the governance of this data. These laws were not written for our emerging context, in which government institutions increasingly rely on data analytics and data-fuelled AI services provided by the private sector.

In the Clearview AI situation, it is not the RCMP that has collected a massive database of images for facial recognition. Nor has the RCMP contracted with a private sector company to build this service for it. Instead, it is using Clearview AI’s services to make presumably ad hoc inquiries, seeking identity information in specific instances. It is not clear whether or how the federal Privacy Act will apply in this context. If the focus is on the RCMP’s ‘collection’ and ‘use’ of personal information, it is arguable that this is confined to the details of each separate query, and not to the use of facial recognition on a large scale. The Privacy Act might simply not be up to addressing how government institutions should interact with these data-fuelled private sector services.

The Privacy Act is, in fact, out of date and clearly acknowledged to be so. The Department of Justice has been working on reforms and has attempted some initial consultation. But the Privacy Act has not received the same level of public and media attention as has PIPEDA. And while we might see reform of PIPEDA in the not too distant future, reform of the Privacy Act may not make it onto the legislative agenda of a minority government. If this is the case, it will leave us with another big governance gap for the digital age.

If the Privacy Act is not to be reformed any time soon, it will be very interesting to see what the Privacy Commissioner’s investigation reveals. The interpretation of section 6(2) of the Privacy Act could be of particular importance. It provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.”

In 2018 the Supreme Court of Canada issued a rather interesting decision in Ewert v. Canada, which I wrote about here. The case involved a Métis man’s challenge to the use of actuarial risk-assessment tests by Correctional Services Canada to make decisions related to his incarceration. He argued that the tests were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons” (at para 12). The Corrections and Conditional Release Act contained language very similar to s. 6(2) of the Privacy Act. The Supreme Court of Canada ruled that this language placed an onus on the CSC to ensure that all of the data it relied upon in its decision-making about inmates met that standard – including the data generated from the use of the assessment tools.
This ruling may have very interesting implications not just for the investigation into the RCMP’s use of Clearview’s technology, but also for public sector use of private sector data-fuelled analytics and AI where those tools are based upon personal data. The issue is whether, in this case, the RCMP is responsible for ensuring the accuracy and reliability of the data generated by a private sector AI system on which it relies.

One final note on the use of Clearview AI’s services by the RCMP – and by other police services in Canada. A look at Clearview AI’s website reveals its own defensiveness about its technologies, which it describes as helping “to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.” Police service representatives have also responded defensively to media inquiries, and their admissions of use come with very few details.

If nothing else, this situation highlights the crucial importance of transparency, oversight and accountability in relation to these technologies, which have privacy and human rights implications. Transparency can help to identify and examine concerns, and to ensure that the technologies are accurate, reliable and free from bias. Policies need to be put in place to reflect clear decisions about what crimes or circumstances justify the use of these technologies (and which ones do not). Policies should specify who is authorized to make the decision to use this technology and according to what criteria. There should be record-keeping and an audit trail (a minimal sketch of what such a record might capture follows at the end of this post).

Keep in mind that technologies of this kind, if unsupervised, can be used to identify, stalk or harass strangers. It is not hard to imagine someone using this technology to identify a person seen with an ex-spouse, or even to identify an attractive woman seen at a bar. They can also be used to identify peaceful protestors. The potential for misuse is enormous. Transparency, oversight and accountability are essential if these technologies are to be used responsibly. The sheepish and vague admissions of use of Clearview AI technology by Canadian police services are a stark reminder that there is much governance work to be done around such technologies in Canada, even beyond privacy law issues.
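As promised above, here is a minimal, hypothetical sketch of what an FRT query audit record could capture. Every field name is an assumption for illustration; the point is simply that each use of the technology should leave a reviewable trace of who searched, under what authority, and for which case.

```python
# Hypothetical FRT query audit record -- illustrative only. Field names are
# assumptions; a real policy would dictate the required fields.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FrtQueryRecord:
    officer_id: str          # who ran the search
    authorizing_policy: str  # which policy/criterion permitted it
    case_file: str           # the investigation the query is tied to
    query_time: str          # UTC timestamp of the query
    result_count: int        # how many candidate matches were returned

record = FrtQueryRecord(
    officer_id="officer-1234",
    authorizing_policy="serious-crimes-only/criterion-3",
    case_file="2020-0457",
    query_time=datetime.now(timezone.utc).isoformat(),
    result_count=3,
)

# Appending one JSON line per query yields an audit trail that oversight
# bodies can review after the fact.
print(json.dumps(asdict(record)))
```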
Published in
Privacy