Teresa Scassa - Blog


Ontario’s Information and Privacy Commissioner has released a report on an investigation into the use by McMaster University of artificial intelligence (AI)-enabled remote proctoring software. In it, Commissioner Kosseim makes findings and recommendations under the province’s Freedom of Information and Protection of Privacy Act (FIPPA), which applies to Ontario universities. Interestingly, noting the absence of provincial legislation or guidance regarding the use of AI, the Commissioner provides additional recommendations on the adoption of AI technologies by public sector bodies.

AI-enabled remote proctoring software saw a dramatic uptake in use during the pandemic as university classes migrated online. It was also widely used by professional societies and accreditation bodies. Such software monitors those writing online exams in real-time, recording both audio and video, and using AI to detect anomalies that may indicate that cheating is taking place. Certain noises or movements generate ‘flags’ that lead to further analysis by AI and ultimately by the instructor. If the flags are not resolved, academic integrity proceedings may ensue. Although many universities, including the respondent McMaster, have since returned to in-person exam proctoring, AI-enabled remote exam surveillance remains an option where in-person invigilation is not possible. This can include courses delivered online to students in diverse and remote locations.

The Commissioner’s investigation related to the use by McMaster University of two services offered by the US-based company Respondus: Respondus Lockdown Browser and Respondus Monitor. Lockdown Browser consists of software downloaded by students onto their computers that blocks access to the internet and to other files on the computer during an exam. Respondus Monitor is the AI-enabled remote proctoring application. This post focuses on Respondus Monitor.

AI-enabled remote proctoring systems have raised concerns about both privacy and broader human rights issues. These include the intrusiveness of the constant audio and video monitoring, the capturing of data from private spaces, uncertainty over the treatment of personal data collected by such systems, adverse impacts on already marginalised students, and the enhanced stress and anxiety that comes from both constant surveillance and easily triggered flags. The broader human rights issues, however, are an uncomfortable fit with public sector data protection law.

Commissioner Kosseim begins with the privacy issues, finding that Respondus Monitor collects personal information that includes students’ names and course information, images of photo identification documents, and sensitive biometric data in audio and video recordings. Because the McMaster University Act empowers the university to conduct examinations and appoint examiners, the Commissioner found that the collection was carried out as part of a lawfully authorized activity. Although exam proctoring had chiefly been conducted in-person prior to the pandemic, she found that there was no “principle of statute or common law that would confine the method by which the proctoring of examinations may be conducted by McMaster to an in-person setting” (at para 48). Further, she noted that even post-pandemic, there might still be reasons to continue to use remote proctoring in some circumstances. She found that the university had a legitimate interest in attempting to curb cheating, noting that evidence suggested an upward trend in academic integrity cases, and a particular spike during the pandemic. She observed that “by incorporating online proctoring into its evaluation methods, McMaster was also attempting to address other new challenges that arise in an increasingly digital and remote learning context” (at para 50).

The collection of personal information must be necessary to a lawfully authorized activity carried out by a public body. Commissioner Kosseim found that the information captured by Respondus Monitor – including the audio and video recordings – was “technically necessary for the purpose of conducting and proctoring the exams” (at para 60). Nevertheless, she expressed concerns over the increased privacy risks that accompany this continual surveillance of examinees. She was also troubled by McMaster’s assertion that it “retains complete autonomy, authority, and discretion to employ proctored online exams, prioritizing administrative efficiency and commercial viability, irrespective of necessity” (at para 63). She found that the necessity requirement in s. 38(2) of FIPPA applied, and that efficiency or commercial advantage could not displace it. She noted that the kind of personal information collected by Respondus Monitor was particularly sensitive, creating “risks of unfair allegations or decisions being made about [students] based on inaccurate information” (at para 66). In her view, “[t]hese risks must be appropriately mitigated by effective guardrails that the university should have in place to govern its adoption and use of such technologies” (at para 66).

FIPPA obliges public bodies to provide adequate notice of the collection of personal information. Commissioner Kosseim reviewed the information made available to students by McMaster University. Although she found overall that it provided students with useful information, students had to locate different pieces of information on different university websites. The need to check multiple sites to get a clear picture of the operation of Respondus Monitor did not satisfy the notice requirement, and the Commissioner recommended that the university prepare a “clear and comprehensive statement either in a single source document, or with clear cross-references to other related documents” (at para 70).

Section 41(1) of FIPPA limits the use of personal information collected by a public body to the purpose for which it was obtained or compiled, or for a consistent purpose. Although the Commissioner found that the analysis of the audio and video recordings to generate flags was consistent with the collection of that information, the use by Respondus of samples of the recordings to improve its own systems – or to allow third party research – was not. On this point, there was an important difference in interpretation. Respondus appeared to define personal information as personal identifiers such as names and ID numbers; it treated audio and video clips that lacked such identifiers as “anonymized”. However, under FIPPA audio and video recordings of individuals are personal information. No provision was made for students either to consent to or opt out of this secondary use of their personal information. Commissioner Kosseim noted that Respondus had made public statements that when operating in some jurisdictions (including California and EU member states) it did not use audio or video recordings for research or to improve its products or services. She recommended that McMaster obtain a similar undertaking from Respondus to not use its students’ information for these purposes. The Commissioner also noted that Respondus’ treating the audio and video recordings as anonymized data meant that it did not have adequate safeguards in place for this personal information.

Respondus’ Terms of Service provide that the company reserves the right to disclose personal information for law enforcement purposes. Commissioner Kosseim found that McMaster should require, in its contract with Respondus, that Respondus notify it promptly of any compelled disclosure of its students’ personal information to law enforcement or to government, and that it limit any such disclosure to the specific information it is legally required to disclose. She also set a retention limit for the audio and video recordings at one year, with confirmation to be provided by Respondus of deletions after the end of this period.

One of the most interesting aspects of this report is the section titled “Other Recommendations” in which the Commissioner addresses the adoption of an AI-enabled technology by a public institution in a context in which “there is no current law or binding policy specifically governing the use of artificial intelligence in Ontario’s public sector” (at para 134). The development and adoption of these technologies is outpacing the evolution of law and policy, leaving important governance gaps. In May 2023, Commissioner Kosseim and Commissioner DeGuire of the Ontario Human Rights Commission issued a joint statement urging the Ontario government to take action to put in place an accountability framework for public sector AI. Even as governments acknowledge that these technologies create risks of discriminatory bias and other potential harms, there remains little to govern AI systems outside the piecemeal coverage offered by existing laws such as, in this case, FIPPA. Although the Commissioner’s interpretation and application of FIPPA addressed issues relating to the collection, use and disclosure of personal information, there remain important issues that cannot be addressed through privacy legislation.

Commissioner Kosseim acknowledged that McMaster University had “already carried out a level of due diligence prior to adopting Respondus Monitor” (at para 138). Nevertheless, given the risks and potential harms of AI-enabled technologies, she made a number of further recommendations. The first was to conduct an Algorithmic Impact Assessment (AIA) in addition to a Privacy Impact Assessment. She suggested that the federal government’s AIA tool could be a useful guide while waiting for one to be developed for Ontario. An AIA could allow the adopter of an AI system to have better insight into the data used to train the algorithms, and could assess impacts on students going beyond privacy (which might include discrimination, increased stress, and harms from false positive flags). She also called for meaningful consultation and engagement with those affected by the adoption of the technology, taking place both before the adoption of the system and on an ongoing basis thereafter. Although the university may have had to react very quickly given that the first COVID shutdown occurred shortly before an exam period, an iterative engagement process even now would be useful “for understanding the full scope of potential issue that may arise, and how these may impact, be perceived, and be experienced by others” (at para 142). She noted that this type of engagement would allow adopters to be alert and responsive to problems both prior to adoption and as they arise during deployment. She also recommended that the consultations include experts in both privacy and human rights, as well as those with technological expertise.

Commissioner Kosseim also recommended that the university consider providing students with ways to opt out of the use of these technologies other than through requesting accommodations related to disabilities. She noted “AI-powered technologies may potentially trigger other protected grounds under human rights that require similar accommodations, such as color, race or ethnic origin” (at para 147). On this point, it is worth noting that the use of remote proctoring software creates a context in which some students may need to be accommodated for disabilities or other circumstances that have nothing to do with their ability to write their exam, but rather that impact the way in which the proctoring systems read their faces, interpret their movements, or process the sounds in their homes. Commissioner Kosseim encouraged McMaster University “to make special arrangements not only for students requesting formal accommodation under a protected ground in human rights legislation, but also for any other students having serious apprehensions about the AI-enabled software and the significant impacts it can have on them and their personal information” (at para 148).

Commissioner Kosseim also recommended that there be an appropriate level of human oversight to address the flagging of incidents during proctoring. Although flags were to be reviewed by instructors before deciding whether to proceed to an academic integrity investigation, the Commissioner found it unclear whether there was a mechanism for students to challenge or explain flags prior to escalation to the investigation stage. She recommended that there be such a procedure, and, if there already was one, that it be explained clearly to students. She further recommended that a public institution’s inquiry into the suitability for adoption of an AI-enabled technology should take into account more than just privacy considerations. For example, the public body’s inquiries should consider the nature and quality of training data. Further, the public body should remain accountable for its use of AI technologies “throughout their lifecycle and across the variety of circumstances in which they are used” (at para 165). Not only should the public body monitor the performance of the tool and alert the supplier of any issues, the supplier should be under a contractual obligation to inform the public body of any issues that arise with the system.

The outcome of this investigation offers important lessons and guidance for universities – and for other public bodies – regarding the adoption of third-party AI-enabled services. For the many Ontario universities that adopted remote proctoring during the pandemic, there are recommendations that should push those still using these technologies to revisit their contracts with vendors – and to consider putting in place processes to measure and assess the impact of these technologies. Although some of these recommendations fall outside the scope of FIPPA, the advice is still sage and likely anticipates what one can only hope is imminent guidance for Ontario’s public sector.

Published in Privacy

On October 26, 2023, I appeared as a witness before the INDU Committee of the House of Commons, which is holding hearings on Bill C-27. Although I would have preferred to address the Artificial Intelligence and Data Act, it was clear that the Committee was prioritizing study of the Consumer Privacy Protection Act in part because the Minister of Industry had yet to produce the text of amendments to the AI and Data Act which he had previously outlined in a letter to the Committee Chair. It is my understanding that witnesses will not be called twice. As a result, I will be posting my comments on the AI and Data Act on my blog.

The other witnesses heard at the same time included Colin Bennett, Michael Geist, Vivek Krishnamurthy and Brenda McPhail. The recording of that session is available here.

__________

Thank you, Mr Chair, for the invitation to address this committee.

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I appear today in my personal capacity. I have concerns with both the CPPA and AIDA. Many of these have been communicated in my own writings and in the report submitted to this committee by the Centre for Digital Rights. My comments today focus on the Consumer Privacy Protection Act. I note, however, that I have very substantial concerns about the AI and Data Act and would be happy to answer questions on it as well.

Let me begin by stating that I am generally supportive of the recommendations of Commissioner Dufresne for the amendment of Bill C-27 set out in his letter of April 26, 2023, to the Chair of this Committee. I will also address 3 other points.

The Minister has chosen to retain consent as the backbone of the CPPA, with specific exceptions to consent. One of the most significant of these is the “legitimate interest” exception in s. 18(3). This allows organizations to collect or use personal information without knowledge or consent if it is for an activity in which an organization has a legitimate interest. There are guardrails: the interest must outweigh any adverse effects on the individual; it must be one which a reasonable person would expect; and the information must not be collected or used to influence the behaviour or decisions of the individual. There are also additional documentation and mitigation requirements.

The problem lies in the continuing presence of “implied consent” in section 15(5) of the CPPA. PIPEDA allowed for implied consent because there were circumstances where it made sense, and there was no “legitimate interest” exception. However, in the CPPA, the legitimate interest exception does the work of implied consent. Leaving implied consent in the legislation provides a way to get around the guardrails in s. 18(3) (an organization can opt for the ‘implied consent’ route instead of legitimate interest). It will create confusion for organizations that might struggle to understand which is the appropriate approach. The solution is simple: get rid of implied consent. I note that “implied consent” is not a basis for processing under the GDPR. Consent must be express or processing must fall under another permitted ground.

My second point relates to s. 39 of the CPPA, which is an exception to an individual’s knowledge and consent where information is disclosed to a potentially very broad range of entities for “socially beneficial purposes”. Such information need only be de-identified – not anonymized – making it more vulnerable to reidentification. I question whether there is social licence for sharing de-identified rather than anonymized data for these purposes. I note that s. 39 was carried over verbatim from C-11, when “de-identify” was defined to mean what we understand as “anonymize”.

Permitting disclosure for socially beneficial purposes is a useful idea, but s. 39, especially with the shift in meaning of “de-identify”, lacks necessary safeguards. First, there is no obvious transparency requirement. If we are to learn anything from the ETHI Committee inquiry into PHAC’s use of Canadians’ mobility data, it is that transparency is fundamentally important. At the very least, there should be a requirement that written notice of data sharing for socially beneficial purposes be given to the Privacy Commissioner of Canada; ideally there should also be a requirement for public notice. Further, s. 39 should provide that any such sharing be subject to a data sharing agreement, which should also be provided to the Privacy Commissioner. None of this is too much to ask where Canadians’ data are conscripted for public purposes. Failure to ensure transparency and some basic measure of oversight will undermine trust and legitimacy.

My third point relates to the exception to knowledge and consent for publicly available personal information. Bill C-27 reproduces PIPEDA’s provision on publicly available personal information, providing in s. 51 that “An organization may collect, use or disclose an individual’s personal information without their knowledge or consent if the personal information is publicly available and is specified by the regulations.” We have seen the consequences of data scraping from social media platforms in the case of Clearview AI, which used scraped photographs to build a massive facial recognition database. The Privacy Commissioner takes the position that personal information on social media platforms does not fall within the “publicly available personal information” exception. Yet not only could this approach be upended in the future by the new Personal Information and Data Protection Tribunal, it could also easily be modified by new regulations. Recognizing the importance of s. 51, former Commissioner Therrien had recommended amending it to add that the publicly available personal information be such “that the individual would have no reasonable expectation of privacy”. An alternative is to incorporate the text of the current Regulations Specifying Publicly Available Information into the CPPA, revising them to clarify scope and application in our current data environment. I would be happy to provide some sample language.

This issue should not be left to regulations. The amount of publicly available personal information online is staggering, and it is easily susceptible to scraping and misuse. It should be clear and explicit in the law that personal data cannot be harvested from the internet, except in limited circumstances set out in the statute.

Finally, I add my voice to those of so many others in saying that the data protection obligations set out in the CPPA should apply to political parties. It is unacceptable that they do not.

Published in Privacy

The following is a short excerpt from a new paper which looks at the public sector use of private sector personal data (Teresa Scassa, “Public Sector Use of Private Sector Personal Data: Towards Best Practices”, forthcoming in (2024) 47:2 Dalhousie Law Journal). The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Governments seeking to make data-driven decisions require the data to do so. Although they may already hold large stores of administrative data, their ability to collect new or different data is limited both by law and by practicality. In our networked, Internet of Things society, the private sector has become a source of abundant data about almost anything – but particularly about people and their activities. Private sector companies collect a wide variety of personal data, often in high volumes, rich in detail, and continuously over time. Location and mobility data, for example, are collected by many different actors, from cellular service providers to app developers. Financial sector organizations amass rich data about the spending and borrowing habits of consumers. Even genetic data is collected by private sector companies. The range of available data is constantly broadening as more and more is harvested, and as companies seek secondary markets for the data they collect.

Public sector use of private sector data is fraught with important legal and public policy considerations. Chief among these is privacy since access to such data raises concerns about undue government intrusion into private lives and habits. Data protection issues implicate both public and private sector actors in this context, and include notice and consent, as well as data security. And, where private sector data is used to shape government policies and actions, important questions about ethics, data quality, the potential for discrimination, and broader human rights questions also arise. Alongside these issues are interwoven concerns about transparency, as well as necessity and proportionality when it comes to the conscription by the public sector of data collected by private companies.

This paper explores issues raised by public sector access to and use of personal data held by the private sector. It considers how such data sharing is legally enabled and within what parameters. Given that laws governing data sharing may not always keep pace with data needs and public concerns, this paper also takes a normative approach which examines whether and in what circumstances such data sharing should take place. To provide a factual context for discussion of the issues, the analysis in this paper is framed around two recent examples from Canada that involved actual or attempted access by government agencies to private sector personal data for public purposes. The cases chosen are different in nature and scope. The first is the attempted acquisition and use by Canada’s national statistics organization, Statistics Canada (StatCan), of data held by credit monitoring companies and financial institutions to generate economic statistics. The second is the use, during the COVID-19 pandemic, of mobility data by the Public Health Agency of Canada (PHAC) to assess the effectiveness of public health policies in reducing the transmission of COVID-19 during lockdowns. The StatCan example involves the compelled sharing of personal data by private sector actors; while the PHAC example involves a government agency that contracted for the use of anonymized data and analytics supplied by private sector companies. Each of these instances generated significant public outcry. This negative publicity no doubt exceeded what either agency anticipated. Both believed that they had a legal basis to gather and/or use the data or analytics, and both believed that their actions served the public good. Yet the outcry is indicative of underlying concerns that had not properly been addressed.

Using these two quite different cases as illustrations, the paper examines the issues raised by the use of private sector data by government. Recognizing that such practices are likely to multiply, it also makes recommendations for best practices. Although the examples considered are Canadian and are shaped by the Canadian legal context, most of the issues they raise are of broader relevance. Part I of this paper sets out the two case studies that are used to tease out and illustrate the issues raised by public sector use of private sector data. Part II discusses the different issues and makes recommendations.


Published in Privacy

A recent decision of the Federal Court of Canada ends (subject to any appeal) the federal Privacy Commissioner’s attempt to obtain an order against Facebook in relation to personal information practices linked to the Cambridge Analytica scandal. Following a joint investigation with British Columbia’s Information and Privacy Commissioner, the Commissioners had issued a Report of Findings in 2019. The Report concluded that Facebook had breached Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and B.C.’s Personal Information Protection Act by failing to obtain appropriate consent, failing to adequately safeguard the data of its users and failing to be accountable for the data under its control. Under PIPEDA, the Privacy Commissioner has no order-making powers and can only make non-binding recommendations. For an order to be issued under PIPEDA, an application must be made to the Federal Court under s. 15, either by the complainant, or by the Privacy Commissioner with the complainant’s permission. The proceeding before the court is de novo, meaning that the court renders its own decision on whether there has been a breach of PIPEDA based upon the evidence presented to it.

The Cambridge Analytica scandal involved a researcher who developed a Facebook app. Through this app, the developer collected user data, ostensibly for research purposes. That data was later disclosed to third parties who used it to develop “psychographic” models for purposes of targeting political messages towards segments of Facebook users (at para 35). It is important to note here that the complaint was not against the app developer, but rather against Facebook. Essentially, the complainants were concerned that Facebook did not adequately protect its users’ privacy. Although it had put in place policies and requirements for third party app developers, the complainants were concerned that it did not adequately monitor third-party compliance with its policies.

The Federal Court dismissed the Privacy Commissioner’s application largely because of a lack of evidence to establish that Facebook had failed to meet its PIPEDA obligations to safeguard its users’ personal information. Referring to it as an “evidentiary vacuum” (para 71), Justice Manson found that there was a lack of expert evidence regarding what Facebook might have done differently. He also found that there was no evidence from users regarding their expectations of privacy on Facebook. The Court chastised the Commissioner, stating “ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts” (at para 72). Justice Manson found the evidence presented by the Commissioner to be unpersuasive and speculative, requiring the court to draw “unsupported inferences”. He was unsympathetic to the Commissioner’s explanation that it did not use its statutory powers to compel evidence (under s. 12.1 of PIPEDA) because “Facebook would not have complied or would have had nothing to offer” (at para 72). Justice Manson noted that had Facebook failed to comply with requests under s. 12.1, the Commissioner could have challenged the refusal.

Yet there is more to this decision than just a dressing down of the Commissioner’s approach to the case. In discussing “meaningful consent” under PIPEDA, Justice Manson frames the question before the court as “whether Facebook made reasonable efforts to ensure users and users’ Facebook friends were advised of the purposes for which their information would be used by third-party applications” (at para 63). This argument is reflected in the Commissioner’s position that Facebook should have done more to ensure that third party app developers on its site complied with their contractual obligations, including those that required developers to obtain consent from app users to the collection of personal data. Facebook’s position was that PIPEDA only requires that it make reasonable efforts to protect the personal data of its users, and that it had done so through its “combination of network-wide policies, user controls and educational resources” (at para 68). It is here that Justice Manson emphasizes the lack of evidence before him, noting that it is not clear what else Facebook could have reasonably been expected to do. In making this point, he states:

There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and ever-evolving. (at para 71) [My emphasis].

This passage should be deeply troubling to those concerned about privacy. By referring to the reasonable expectation of privacy in terms of what users might expect in an ever-evolving technological context, Justice Manson appears to abandon the normative dimensions of the concept. His comments lead towards a conclusion that the reasonable expectation of privacy is an ever-diminishing benchmark as it becomes increasingly naïve to expect any sort of privacy in a data-hungry surveillance society. Yet this is not the case. The concept of the “reasonable expectation of privacy” has significant normative dimensions, as the Supreme Court of Canada reminds us in R. v. Tessling and in the case law that follows it. In Tessling, Justice Binnie noted that subjective expectations of privacy should not be used to undermine the privacy protections in s. 8 of the Charter, stating that “[e]xpectation of privacy is a normative rather than a descriptive standard.” Although this comment is made in relation to the Charter, a reasonable expectation of privacy that is based upon the constant and deliberate erosion of privacy would be equally meaningless in data protection law. Although Justice Manson’s comments about the expectation of privacy may not have affected the outcome of this case, they are troublesome in that they might be picked up by subsequent courts or by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

The decision also contains at least two observations that should set off alarm bells with respect to Bill C-27, a bill to reform PIPEDA. Justice Manson engages in some discussion of the duty of an organization to safeguard information that it has disclosed to a third party. He finds that PIPEDA imposes obligations on organizations with respect to information in their possession, and information transferred for processing. In the case of prospective business transactions, an organization sharing information with a potential purchaser must enter into an agreement to protect that information. However, Justice Manson interprets this specific reference to a requirement for such an agreement to mean that “[i]f an organization were required to protect information transferred to third parties more generally under the safeguarding principle, this provision would be unnecessary” (at para 88). In Bill C-27, s. 39, for example, permits organizations to share de-identified (not anonymized) personal information with certain third parties without the knowledge or consent of individuals for ‘socially beneficial’ purposes without imposing any requirement to put in place contractual provisions to safeguard that information. The comments of Justice Manson clearly highlight the deficiencies of s. 39 which must be amended to include a requirement for such safeguards.

A second issue relates to the human-rights based approach to privacy which both the former Privacy Commissioner Daniel Therrien and the current Commissioner Philippe Dufresne have openly supported. Justice Manson acknowledges that the Supreme Court of Canada has recognized the quasi-constitutional nature of data protection laws such as PIPEDA, because “the ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity, and privacy” (at para 51). However, neither PIPEDA nor Bill C-27 takes a human-rights based approach. Rather, they place personal and commercial interests in personal data on the same footing. Justice Manson states: “Ultimately, given the purpose of PIPEDA is to strike a balance between two competing interests, the Court must interpret it in a flexible, common sense and pragmatic manner” (at para 52). The government has made rather general references to privacy rights in the preamble of Bill C-27 (though not in any preamble to the proposed Consumer Privacy Protection Act) but has steadfastly refused to reference the broader human rights context of privacy in the text of the Bill itself. We are left with a purpose clause that acknowledges “the right of privacy of individuals with respect to their personal information” in a context in which “significant economic activity relies on the analysis, circulation and exchange of personal information”. The purpose clause finishes with a reference to the need of organizations to “collect, use or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances.” While this reference to the “reasonable person” should highlight the need for a normative approach to reasonable expectations as discussed above, the interpretive approach adopted by Justice Manson also makes clear the consequences of not adopting an explicit human-rights based approach. Privacy is thrown into a balance with commercial interests without fundamental human rights to provide a firm backstop.

Justice Manson seems to suggest that the Commissioner’s approach in this case may flow from frustration with the limits of PIPEDA. He describes the Commissioner’s submissions as “thoughtful pleas for well-thought-out and balanced legislation from Parliament that tackles the challenges raised by social media companies and the digital sharing of personal information, not an unprincipled interpretation from this Court of existing legislation that applies equally to a social media giant as it may apply to the local bank or car dealership” (at para 90). They say that bad cases make bad law; but bad law might also make bad cases. The challenge is to ensure that Bill C-27 does not reproduce or amplify deficiencies in PIPEDA.

 

Published in Privacy

A recent decision of the Federal Court of Canada exposes the tensions between access to information and privacy in our data society. It also provides important insights into how reidentification risk should be assessed when government agencies or departments respond to requests for datasets with the potential to reveal personal information.

The case involved a challenge by two journalists to Health Canada’s refusal to disclose certain data elements in a dataset of persons permitted to grow medical marijuana for personal use under the licensing scheme that existed before the legalization of cannabis. [See journalist Molly Hayes’ report on the story here]. Health Canada had agreed to provide the first character of the Forward Sortation Area (FSA) of the postal codes of licensed premises but declined to provide the second and third characters or the names of the cities in which licensed production took place. At issue was whether these location data constituted “personal information” – which the government cannot disclose under s. 19(1) of the Access to Information Act (ATIA). A second issue was the degree of effort required of a government department or agency to maximize the release of information in a privacy-protective way. Essentially, this case is about “the appropriate analytical approach to measuring privacy risks in relation to the release of information from structured datasets that contain personal information” (at para 2).

The licensing scheme was available to those who wished to grow their own marijuana for medical purposes or to anyone seeking to be a “designated producer” for a person in need of medical marijuana. Part of the licence application required the disclosure of the medical condition that justified the use of medical marijuana. Where a personal supply of medical marijuana is grown at the user’s home, location information could easily be linked to that individual. Both parties agreed that the last three characters in a six-character postal code would make it too easy to identify individuals. The dispute concerned the first three characters – the FSA. The first character represents a postal district. For example, Ontario, Canada’s largest province, has five postal districts. The second character indicates whether an area within the district is urban or rural. The third character identifies either a “specific rural region, an entire medium-sized city, or a section of a major city” (at para 12). FSAs differ in size; StatCan data from 2016 indicated that populations in FSAs ranged from no inhabitants to over 130,000.

Information about medical marijuana and its production in a rapidly evolving public policy context is a subject in which there is a public interest. In fact, Health Canada proactively publishes some data on its own website regarding the production and use of medical marijuana. Yet, even where a government department or agency publishes data, members of the public can use the ATI system to request different or more specific data. This is what happened in this case.

In his decision, Justice Pentney emphasized that both access to information and the protection of privacy are fundamental rights. The right of access to government information, however, does not include a right to access the personal information of third parties. Personal information is defined in the ATIA as “information about an identifiable individual” (s. 3). This means that all that is required for information to be considered personal is that it can be used – alone or in combination with other information – to identify a specific individual. Justice Pentney reaffirmed that the test for personal information from Gordon v. Canada (Health) remains definitive. Information is personal information “where there is a serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information.” (Gordon, at para 34, emphasis added). More recently, the Federal Court has defined a “serious possibility” as “a possibility that is greater than speculation or a ‘mere possibility', but does not need to reach the level of ‘more likely than not’” (Public Safety, at para 53).

Geographic information is strongly linked to reidentification. A street address is, in many cases, clearly personal information. However, city, town or even province of residence would only be personal information if it can be used in combination with other available data to link to a specific individual. In Gordon, the Federal Court upheld a decision to not release province of residence data for those who had suffered reported adverse drug reactions because these data could be combined with other available data (including obituary notices and even the observations of ‘nosy neighbors’) to identify specific individuals.

The Information Commissioner argued that to meet the ‘serious possibility’ test, Health Canada should be able to concretely demonstrate identifiability by connecting the dots between the data and specific individuals. Justice Pentney disagreed, noting that in the case before him, the expert opinion combined with evidence about other available data and the highly sensitive nature of the information at issue made proof of actual linkages unnecessary. However, he cautioned that “in future cases, the failure to engage in such an exercise might well tip the balance in favour of disclosure” (at para 133).

Justice Pentney also ruled that, because the proceeding before the Federal Court is a hearing de novo, he was not limited to considering the data that were available at the time of the ATIP request. A court can take into account data made available after the request and even after the decision of the Information Commissioner. This makes sense. The rapidly growing availability of new datasets as well as new tools for the analysis and dissemination of data demand a timelier assessment of identifiability. Nevertheless, any pending or possible future ATI requests would be irrelevant to assessing reidentification risk, since these would be hypothetical. Justice Pentney noted: “The fact that a more complete mosaic may be created by future releases is both true and irrelevant, because Health Canada has an ongoing obligation to assess the risks, and if at some future point it concludes that the accumulation of information released created a serious risk, it could refuse to disclose the information that tipped the balance” (at para 112).

The court ultimately agreed with Health Canada that disclosing anything beyond the first character of the FSA could lead to the identification of some individuals within the dataset, and thus would amount to personal information. Health Canada had identified three categories of other available data: data that it had proactively published on its own website; StatCan data about population counts and FSAs; and publicly available data that included data released in response to previous ATIP requests relating to medical marijuana. In this latter category the court noted that there had been a considerable number of prior requests that provided various categories of data, including “type of license, medical condition (with rare conditions removed), dosage, and the issue date of the licence” (at para 64). Other released data included the licensee’s “year of birth, dosage, sex, medical condition (rare conditions removed), and province (city removed)” (at para 64). Once released, these data are in the public domain, and can contribute to a “mosaic effect” which allows data to be combined in ways that might ultimately identify specific individuals. Health Canada had provided evidence of an interactive map of Canada published on the internet that showed the licensing of medical marijuana by FSA between 2001 and 2007. Justice Pentney noted that “[a]n Edmonton Journal article about the interactive map provided a link to a database that allowed users to search by medical condition, postal code, doctor’s speciality, daily dosage, and allowed storage of marijuana” (at para 66). He stated: “the existence of evidence demonstrating that connections among disparate pieces of relevant information have previously been made and that the results have been made available to the public is a relevant consideration in applying the serious possibility test” (at para 109). Justice Pentney observed that members of the public might already have knowledge (such as the age, gender or address) of persons they know who consume marijuana that they might combine with other released data to learn about the person’s underlying medical condition. Further, he notes that “the pattern of requests and the existence of the interactive map show a certain motivation to glean more information about the administration of the licensing regime” (at para 144).

Health Canada had commissioned Dr. Khaled El Emam to produce an expert report. Dr. El Emam determined that “there are a number of FSAs that are high risk if either three or two characters of the FSA are released, there are no high-risk FSAs if only the first character is released” (at para 80). Relying on this evidence, Justice Pentney concluded that “releasing more than the first character of an FSA creates a significantly greater risk of reidentification” (at para 157). This risk would meet the “serious possibility” threshold, and therefore the information amounts to “personal information” and cannot be disclosed under the legislation.
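To make the intuition behind this kind of analysis concrete, the following sketch works through a toy version of the prefix-length question. It is not a reproduction of Dr. El Emam’s methodology; the population figures are entirely hypothetical, and the “smallest group” measure is simply a k-anonymity-style proxy for how narrowly a released FSA prefix can single someone out.

```python
from collections import defaultdict

# Hypothetical FSA -> population counts (illustrative only, not StatCan data)
fsa_populations = {
    "K1A": 2500,
    "K1B": 18000,
    "K2C": 45000,
    "K0J": 900,
    "K0L": 700,   # sparsely populated rural FSAs
    "M5V": 130000,
    "M4C": 40000,
}

def smallest_group(prefix_length: int) -> int:
    """Smallest combined population among the groups formed by releasing
    only the first `prefix_length` characters of each FSA."""
    groups = defaultdict(int)
    for fsa, population in fsa_populations.items():
        groups[fsa[:prefix_length]] += population
    return min(groups.values())

for n in (1, 2, 3):
    print(f"{n} FSA character(s) released: smallest group = {smallest_group(n)}")
# 1 FSA character(s) released: smallest group = 67100
# 2 FSA character(s) released: smallest group = 1600
# 3 FSA character(s) released: smallest group = 700
```

On figures like these, a motivated user trying to match a known licence holder to a record faces a vastly smaller pool of candidates once two or three characters are released, which is the intuition behind confining disclosure to the first character.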

The Information Commissioner raised issues about the quality of other available data, suggesting that incomplete and outdated datasets would be less likely to create reidentification risk. For example, since cannabis laws had changed, there are now many more people cultivating marijuana for personal use. This would make it harder to connect the knowledge that a particular person was cultivating marijuana with other data that might lead to the disclosure of a medical condition. Justice Pentney was unconvinced since the quantities of marijuana required for ongoing medical use might exceed the general personal use amounts, and thus would still require a licence, creating continuity in the medical cannabis licensing data before and after the legalization of cannabis. He noted: “The key point is not that the data is statistically comparable for the purposes of scientific or social science research. Rather, the question is whether there is a significant possibility that this data can be combined to identify particular individuals.” (at para 118) Justice Pentney therefore distinguishes between the issue of data quality from a data science perspective and data quality from the perspective of someone seeking to identify specific individuals. He stated: “the fact that the datasets may not be exactly comparable might be a problem for a statistician or social scientist, but it is not an impediment to a motivated user seeking to identify a person who was licensed for personal production or a designated producer under the medical marijuana licensing regime” (at para 119).

Justice Pentney emphasized the relationship between sensitivity of information and reidentification risk, noting that “the type of personal information in question is a central concern for this type of analysis” (at para 107). This is because “the disclosure of some particularly sensitive types of personal information can be expected to have particularly devastating consequences” (at para 107). With highly sensitive information, it is important to reduce reidentification risk, which means limiting disclosure “as much as is feasible” (at para 108).

Justice Pentney also dealt with a further argument that Health Canada should not be able to apply the same risk assessment to all the FSA data; rather, it should assess reidentification risk based on the size of the area identified by the different FSA characters. The legislation allows for severance of information from disclosed records, and the journalists argued that Health Canada could have used severance to reduce the risk of reidentification while releasing more data where the risks were acceptably low. Health Canada responded that to do a more fine-grained analysis of the reidentification risk by FSA would impose an undue burden because of the complexity of the task. In its submissions as intervenor in the case, the Office of the Privacy Commissioner suggested that other techniques could be used to perturb the data so as to significantly lower the risk of reidentification. Such techniques are used, for example, where data are anonymized.
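The report does not say which perturbation techniques the OPC had in mind, but one common family of approaches generalizes or suppresses quasi-identifiers that would otherwise place a record in a very small group. The sketch below is a minimal illustration of that idea only, using invented records and an arbitrary minimum group size; it is not a description of anything Health Canada or the OPC actually proposed.

```python
from collections import Counter

# Invented licence records keyed by FSA (illustrative only)
records = ["K1A", "K1A", "K1B", "K0J", "M5V", "M5V", "M5V", "X0A"]

K_THRESHOLD = 3  # arbitrary minimum group size; real thresholds are context-specific

def generalize_small_groups(fsas, k=K_THRESHOLD):
    """Replace any FSA appearing in fewer than k records with its first
    character (the postal district), a simple generalization step that
    trades geographic detail for larger, less identifying groups."""
    counts = Counter(fsas)
    return [fsa if counts[fsa] >= k else fsa[0] for fsa in fsas]

print(generalize_small_groups(records))
# ['K', 'K', 'K', 'K', 'M5V', 'M5V', 'M5V', 'X']
```

Even a toy version hints at why the Court treated this as a complex undertaking: someone must choose the threshold, decide whether group sizes are measured against the dataset or against census populations, and validate that the output still answers the requester’s question, all of which calls for expertise a department may not have in-house.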

Justice Pentney noted that the effort required by a government department or agency was a matter of proportionality. Here, the data at issue were highly sensitive. The already-disclosed first character of the FSA provided general location information about the licences. Given these facts, “[t]he question is whether a further narrowing of the lens would bring significant benefits, given the effort that doing so would require” (at para 181). He concluded that it would not, noting the lack of in-house expertise at Health Canada to carry out such a complex task. Regarding the suggestion of the Privacy Commissioner that anonymization techniques should be applied, he found that while this is not precluded by the ATIA, it was a complex task that, on the facts before him, went beyond what the law requires in terms of severance.

This is an interesting and important decision. First, it reaffirms the test for ‘personal information’ in a more complex data society context than the earlier jurisprudence. Second, it makes clear that the sensitivity of the information at issue is a crucial factor that will influence an assessment not just of the reidentification risk, but of tolerance for the level of risk involved. This is entirely appropriate. Not only is personal health information highly sensitive, at the time these data were collected, licensing was an important means of gaining access to medical marijuana for people suffering from serious and ongoing medical issues. Their sharing of data with the government was driven by their need and vulnerability. Failure to robustly protect these data would enhance vulnerability. The decision also clarifies the evidentiary burden on government to demonstrate reidentification risk – something that will vary according to the sensitivity of the data. It highlights the dynamic and iterative nature of reidentification risk assessment as the risk will change as more data are made available.

Indirectly, the decision also casts light on the challenges of using the ATI system to access data and perhaps a need to overhaul that system to provide better access to high-quality public-sector information for research and other purposes. Although Health Canada has engaged in proactive disclosure (interestingly, such disclosures were a factor in assessing the ‘other available data’ that could lead to reidentification in this case), more should be done by governments (both federal and provincial) to support and ensure proactive disclosure that better meets the needs of data users while properly protecting privacy. Done properly, this would require an investment in capacity and infrastructure, as well as legislative reform.

Published in Privacy

This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance.

Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1):

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI and to the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies each actor and the corresponding obligations.

 

Actor: A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity (see definition of “regulated activity” in s. 5(1))

Obligations: s. 6 (data anonymization, use and management); s. 10 (record keeping regarding measures taken under s. 6)

Actor: A person who is responsible for an artificial intelligence system (see definition of ‘person responsible’ in s. 5(2))

Obligations: s. 7 (assess whether a system is high impact); s. 10 (record keeping regarding reasons supporting their assessment of whether the system is high-impact under s. 7)

Actor: A person who is responsible for a high-impact system (see definition of ‘person responsible’ in s. 5(2); definition of “high-impact” system in s. 5(1))

Obligations: s. 8 (measures to identify, assess and mitigate risk of harm or biased output); s. 9 (measures to monitor compliance with the mitigation measures established under s. 8 and the effectiveness of those measures); s. 10 (record keeping regarding measures taken under ss. 8 and 9); s. 12 (obligation to notify the Minister as soon as feasible if the use of the system results or is likely to result in material harm)

Actor: A person who makes available for use a high-impact system

Obligation: s. 11(1) (publish a plain language description of the system and other required information)

Actor: A person who manages the operation of a high-impact system

Obligation: s. 11(2) (publish a plain language description of how the system is used and other required information)

 

For most of these provisions, the details of what is actually required by the identified actor will depend upon regulations that have yet to be drafted.

A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms:

5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.

Thus, the obligations in ss. 7, 8, 9, 10 and 11 apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that, with the exception of sections 6 and 7, the obligations in the AIDA also apply only to ‘high impact’ systems. The definition of a high-impact system has been left to regulations and is as yet unknown.

Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purposes of developing an AI system. All of this must also be in the course of international or interprovincial trade and commerce.
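For readers who find the chain of conditions easier to follow in pseudo-logic, here is a minimal sketch of when, on this reading, the s. 6 obligation would attach. The field names are my own shorthand rather than statutory language, and the sketch deliberately ignores excluded actors and whatever the regulations may eventually add.

```python
from dataclasses import dataclass

@dataclass
class DataHandlingActivity:
    in_international_or_interprovincial_trade: bool  # threshold in s. 5(1)
    for_ai_design_development_or_use: bool           # purpose in s. 5(1)(a)
    data_is_anonymized: bool                         # trigger for s. 6

def s6_applies(activity: DataHandlingActivity) -> bool:
    """Return True if, on this simplified reading, the s. 6 data
    governance obligation would attach to the activity."""
    return (
        activity.in_international_or_interprovincial_trade
        and activity.for_ai_design_development_or_use
        and activity.data_is_anonymized
    )

# Anonymized mobility data curated to train a commercial AI system: caught
print(s6_applies(DataHandlingActivity(True, True, True)))   # True
# Anonymized data used for non-AI analytics: outside s. 6 on this reading
print(s6_applies(DataHandlingActivity(True, False, True)))  # False
```

The second example is exactly the kind of gap discussed below: anonymized data used for purposes other than AI falls outside both the CPPA and, on this reading, the AIDA.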

Note that the first of these two purposes involves data “relating to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization.

Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA only applies to a ‘regulated activity’ which is one carried out in the course of international and inter-provincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain. Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the treatment of anonymized data between the CPPA and the AIDA creates a significant governance gap in terms of scope, substance and process.

On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

The default rule is that ‘de-identified’ personal information is still personal information. The CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – those set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly-worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of the AIDA nor particularly relevant to it. Since the legislator is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s. 38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data – from which individuals can be re-identified – are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.

 

Published in Privacy

Privacy is a human right. It is recognized in the Universal Declaration of Human Rights and other international human rights instruments. In Canada, the Supreme Court of Canada has interpreted the s. 8 Charter right to be secure against unreasonable search or seizure as a privacy right, and it has also found that data protection laws in Canada have ‘quasi-constitutional’ status because of the importance of the privacy rights on which they are premised. The nature of privacy as a human right should not be a controversial proposition, but it became so in Bill C-11, the 2020 bill to reform the Personal Information Protection and Electronic Documents Act (PIPEDA). Bill C-11 did not address the human rights dimensions of data protection, and it was soundly criticized by the former Privacy Commissioner of Canada for failing to do so. Bill C-27, which contains the new PIPEDA reform bill and which was introduced in June 2022, gives a nod to the human rights dimensions of data protection. This post will consider whether this is enough.

There are several reasons why the human rights dimensions of data protection law became such an issue in Canada. First, data protection laws balance the privacy rights of individuals with the needs of organizations and governments to collect and use personal information for a range of purposes. If a balance is to be struck between two things, the weight given to considerations on either side of the scale must be appropriate. Recognizing the human rights dimensions of the protection of personal data gives added weight to the interests of individuals (and communities) by acknowledging the importance that control over personal data has to the exercise of a variety of human rights (including, but not limited to, dignity, autonomy and freedom from discrimination). It also acknowledges the substantial threats that the data economy can pose to human rights. Second, the EU’s General Data Protection Regulation puts the human rights dimensions of privacy and data protection front and centre. With that approach entrenched across the EU, the omission of anything similar from draft legislation in Canada takes on greater significance. It starts to look like a deliberate statement. Third, Quebec takes an explicit human rights-based approach to privacy, making it – well, awkward – to have a less human rights-forward standard crafted for the rest of Canada. In Ontario, a government White Paper considering a private sector data protection law for Ontario explicitly endorsed a human rights-based approach.

The federal government’s hesitation to address the human rights dimensions of privacy is rooted in its anxiety over the constitutional footing for a federal private sector data protection law. PIPEDA has been constitutionally justified under the federal government’s general trade and commerce power. This means that it is enacted to regulate an aspect of trade and commerce at the national level. PIPEDA focuses on data collected, used, and disclosed by the private sector in the course of commercial activity. The government’s concern is that adopting a human rights-based approach would transform the statute from one that addresses the management of personal data in the commercial context to one that governs human rights as they relate to personal data. Constitutional anxiety is evident even in the new name of the future data protection law: The Consumer Privacy Protection Act [my emphasis].

The former Privacy Commissioner of Canada, Daniel Therrien, commissioned a legal opinion on the constitutional issues raised by adopting a human rights-based approach. The opinion found that such an approach could be supported within the general trade and commerce framework. The federal government clearly takes a different view, which may be rooted in an almost pathological division-of-powers anxiety. After all, this government also refused to defend the Genetic Non-Discrimination Act against a constitutional challenge, even though the constitutionality of that statute (which began its life as a private member’s bill) was ultimately upheld by a majority of the Supreme Court of Canada.

One of the changes in Bill C-27 from Bill C-11 is the addition of a preamble. It is in this preamble that the government now makes reference to the human rights basis for privacy. The preamble also enumerates other considerations, making it clear that the interests (or rights) of individuals are just one factor in a rather complex balance. The other factors include the importance of trade and free flows of data, the need to support and foster the data-driven economy, the need for an agile regulatory framework, the need to not unduly burden small businesses, the need for harmonization, and the importance of facilitating data collection and use in the public interest.

The clauses in the preamble that address privacy and human rights include an acknowledgement that the protection of personal information is essential to the autonomy and dignity of individuals and to their full enjoyment of their fundamental rights and freedoms in Canada. This is probably the strongest statement and it is near the top of the list. There is also an acknowledgement of the importance of privacy and data protection principles found in international instruments. There are some references to human rights in relation to AI, but those relate to the Artificial Intelligence and Data Act that is part of this Bill. There is also a closing paragraph which refers to bolstering the digital and data-driven economy by establishing a regulatory framework “that supports and protects Canadian norms and values, including the right to privacy”. At best, however, this just emphasizes that the right to privacy is one factor in the balance – and not necessarily the predominant one. The government has been reasonably explicit in the preamble about the range of competing public policy considerations that feed into their data protection bill. The overall message is: “Yes, privacy is a human right, but we’re trying to do something here.”

Bill C-27 also includes the text of a proposed Artificial Intelligence and Data Act (AIDA). This statute is arguably the government’s attempt to address human rights in the AI and data context, in that it contains measures meant to address discriminatory bias in AI (which is fueled by data). It is meant to apply to ‘high impact’ systems (not defined in the Bill), although impact certainly seems to be understood in terms of harms to individuals. Next week my series of posts will begin to consider the AIDA in more detail. For present purposes, however, consider that the AIDA will only apply to systems defined as ‘high impact’; it addresses only individual and not group harms; it will apply only in the context of AI (whereas data are used in many more contexts); and many organizations and institutions are excluded from its scope. In any event, while the proper governance of AI is of great importance, so is the proper governance of personal data, which is the domain of data protection legislation. The AIDA is therefore not an answer to concerns over the need for a human rights-based approach to data protection.

I have argued for a human rights-based approach to privacy in data protection law. The volumes of data collected, the way these data are used and shared, and the potential impacts they can have on people’s lives all suggest that we can no longer mince words when it comes to understanding the significance of data protection. Technology now reduces just about anything to streams of data, and those data are used to profile, categorize, assess, and monitor individuals. They are used in tools of surveillance and control. Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

In Canada we have relied upon the normative idea in s. 5(3) of PIPEDA that any collection, use or disclosure of personal information must be “for purposes that a reasonable person would consider are appropriate in the circumstances”. This normative concept is also found in s. 12(1) of Bill C-27. Although past privacy commissioners have given substance to this provision, the concern remains that without an anchor in an explicitly human rights-based approach, the ‘reasonable person’ might, over time, be interpreted to be more excited about the potential of data to boost the economy than concerned about the adverse effects its use might have on certain individuals or groups. Given that Bill C-27 will shift interpretive authority over key concepts in the legislation from the Privacy Commissioner to the mysterious Data Tribunal, this normative wiggle-room is particularly concerning.

In spite of this, the addition of a preamble to Bill C-27, with its references to privacy and human rights, is probably all that we are going to get from this government on this issue. There is not much interest in going back to the drawing board with this Bill, and the government is no doubt impatient to move the data protection law reform file forward.

In the meantime, it is worth noting that the provinces remain free to enact and/or amend their own private sector data protection laws, and to make strong statements about a human-rights-basis for data protection. The laws in Alberta and British Columbia will be reformed once a new federal bill is passed. And, with a newly re-elected government, Ontario might once again turn its attention to crafting its own law. There are other fronts on which this battle can be fought, and perhaps it is best to turn attention to these.

 

Published in Privacy
Monday, 25 July 2022 06:34

Bill C-27 and Children’s Privacy

Note: This is the fifth in a series of posts on Canada's Bill C-27 which, among other things, will reform Canada's private sector data protection law.

Bill C-27, the bill to amend Canada’s existing private sector data protection law, gives particular attention to the privacy rights of minors in a few instances. This is different from the current law, and it is a change since the previous (failed) reform bill, Bill C-11. The additions to Bill C-27 respond to concerns raised by privacy advocates and scholars regarding Bill C-11’s silence on children’s privacy.

Directly addressing children’s privacy has been a bit of a struggle for this government, which seems particularly sensitive to federal-provincial division of powers issues. After all, it is the provinces that get to determine the age of majority. A private sector data protection law that defined a child in terms of a particular age range for the purposes of consent, for example, might raise constitutional hackles. Further, many of the privacy issues that concern parents the most are ones that fall at least to some extent within provincial jurisdiction. Consider the issues around children’s privacy and educational technologies used in schools. While many of those technologies are sourced from the private sector, the schools themselves are subject to provincial public sector data protection laws, and so, the schools’ adoption and use of these technologies is governed by provincial legislation. That said, children still spend a great deal of time online; their toys are increasingly connected to the Internet of Things; their devices and accompanying apps capture and transmit all manner of data; and they, their parents and friends post innumerable pictures, videos and anecdotes about them online. Children have a clear interest in private sector data protection.

The government’s modest response to concerns about children’s privacy in Bill C-27 no doubt reflects this constitutional anxiety. The most significant provision is found in s. 2(2), which states that “For the purposes of this Act, the personal information of minors is considered to be sensitive information.” Note that the reference is to ‘minors’ and not ‘children’, and no attempt is made to define the age of majority.

If you search Bill C-27 for further references to minors, you will find few. Two important ones are found in s. 55, which deals with the right of erasure. This right, which allows an individual to request the deletion of their data, has a number of significant exceptions (see my post on the right of erasure). The first of these allows an organization to deny a request for erasure if “the disposal of the information would have an undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service to the individual in question”. The second allows an organization to deny a request for deletion if the data are subject to a data retention policy. Neither exception applies in the case of the data of minors. This is important, as it will allow minors (or those acting on their behalf) to obtain deletion of data – even outside the organization’s regular disposal schedule.

The Personal Information Protection and Electronic Documents Act currently links valid consent to a person’s capacity to understand “the nature, purpose and consequences of the collection, use or disclosure of the personal information to which they are consenting” (s. 6.1). Bill C-11 would have eliminated this requirement for valid consent. Responding to criticisms, the government has added to Bill C-27 a requirement that consent must be sought “in plain language that an individual to whom the organization’s activities are directed would reasonably be expected to understand” (s. 15(4)). It is good to see this element returned to the reform bill, even if it is a little half-hearted compared to PIPEDA’s s. 6.1. In this regard, Bill C-27 is an improvement over C-11. (See my post on consent in Bill C-27).

Although no other provisions are specifically drafted for minors, declaring that the personal information of minors is considered ‘sensitive’ is significant in a bill that requires organizations to give particular attention to the sensitivity of personal data in a range of circumstances. For example, an organization’s overall privacy management program must take into account both the volume and sensitivity of the information that the organization collects (s. 9(2)). The core normative principle in the legislation, which limits the collection, use and disclosure of personal information to that which a reasonable person would consider appropriate in the circumstances, also requires a consideration of the sensitivity of personal data (s. 12(2)(a)). In determining whether an organization can rely upon implied consent, the sensitivity of the information is a relevant factor (s. 15(5)). Organizations, in setting data retention limits, must take into account, among other things, the sensitivity of personal data (s. 53(2)), and they must provide transparency with respect to those retention periods (s. 62(2)(e)). The security safeguards developed for personal data must take into account its sensitivity (s. 57(1)). When there is a data breach, the obligation to report the breach to the Commissioner depends upon a real risk of significant harm – one of the factors in assessing such a risk is the sensitivity of the personal data (s. 58(8)). When data are de-identified, the measures used for de-identification must take into account the sensitivity of the data, and the Commissioner, in exercising his powers, duties or functions, must also consider the sensitivity of the personal data dealt with by an organization (s. 109).

The characterization of the data of minors as ‘sensitive’ means that the personal data of children – no matter what it is – will be treated as sensitive data in the interpretation and application of the law. In practical terms, this is not new. The Office of the Privacy Commissioner has consistently treated the personal data of children as sensitive. However, it does not hurt to make this approach explicit in the law. In addition, the right of erasure for minors is an improvement over both PIPEDA and Bill C-11. Overall, then, Bill C-27 offers some enhancement to the data protection rights of minors.

Published in Privacy

Bill C-27, which will amend Canada’s private sector data protection law, contains a right of erasure. In its basic form, this right allows individuals to ask an organization to dispose of the personal information it holds about them. It is sometimes referred to as the right to be forgotten, although the right to be forgotten has different dimensions that are not addressed in Bill C-27. Bill C-27’s predecessor, Bill C-11, had proposed a right of erasure in fairly guarded terms: individuals would be able to request the disposal only of information that the organization had obtained from the individual. This right would not have extended to information the organization had collected through other means – by acquiring that information from other organizations, scraping it from the internet, or even creating it through profiling algorithms. Section 55 of Bill C-27 (“disposal at individual’s request”) brings some interesting changes to this limitation. Significantly, it extends the right of erasure to the individual’s personal information that “is under the organization’s control”. Nevertheless, in doing so, it also adds some notable restrictions.

First, Bill C-27’s right of erasure will only apply in three circumstances. The first, set out in s. 55(1)(a), is where the information was collected, used or disclosed in contravention of the Act. Basically, if an organization had no right to have or use the personal data in the first place, it must dispose of the information at the request of the individual.

The second situation, set out in s. 55(1)(b), is where an individual has withdrawn their consent to the collection, use or disclosure of the information held by the organization. Perhaps a person agreed to allow an organization to collect certain data in addition to the data considered necessary to provide a particular product or service. If that person decides they no longer want the organization to collect this additional data, not only can they withdraw consent to its continued collection, they can also exercise their right of erasure and have the already-collected data deleted.

Finally, s. 55(1)(c) allows an individual to request deletion of personal data where the information is no longer necessary for the continued provision of a product or service requested by the individual. If an individual ceases to do business with an organization, for example, and does not wish the organization to retain their personal information, they can request its deletion. Here, the expansion of the right to include all personal information under the organization’s control can be important. For example, if you terminate your contract with a streaming service, you could request deletion not just of the customer data you provided to them, and your viewing history, but also the organization’s inexplicable profile of you as someone who loves zombie movies.

Where an organization has acceded to a request for disposal of personal data, it is also obliged, under s. 55(4), to inform “any service provider” to which it has transferred the data to dispose of them. The organization is responsible for ensuring this takes place. Note, however, that the obligation is only to inform any service provider, defined in the bill as an entity that “provides services for or on behalf” of the organization to assist it in fulfilling its purposes. The obligation to notify does not extend to those to whom the data may have been sold.

There are, however, important exceptions to this expanded right of erasure. Subsection 55(2) would allow an organization to refuse to dispose of data under s. 55(1)(b) or (c) in circumstances where the data are inseparable from the personal data of another person (for example, that embarrassing photo of you partying with others that someone else posted online); where legal requirements oblige the organization to retain the information; or where the organization requires the data for a legal defence or legal remedy.

A few other exceptions are potentially more problematic. Paragraph 55(2)(d) creates an exception to the right of erasure where:

(d) the information is not in relation to a minor and the disposal of the information would have an undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service to the individual in question;

For example, this might apply where an individual remains in a commercial relationship with an organization but has withdrawn consent to a particular use or disclosure of their data and has requested its deletion. If the organization believes that deleting the information would adversely affect the integrity of the product or service it continues to provide to the individual, it can refuse deletion. It will be interesting to see how this plays out. There may be differences of opinion about the impact on the integrity of the product or service being supplied. If an individual finds an organization’s recommendation service based on past purchases or views to be largely useless, seeking deletion of data about their viewing history will not impact the integrity of the service from the individual’s point of view – but the organization might have a different opinion.

In Bill C-27, the government responded to criticisms that its predecessor, Bill C-11, did nothing to specifically deal with children’s privacy. Bill C-27 addresses the privacy of minors in specific instances, and the right of erasure is one of them. Interestingly, the right of erasure prevails under s. 55(2)(d) for minors, presumably even when the erasure would have an “undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service”. It seems that minors will get to choose between deletion and adverse impacts, while those over the age of majority will have to put up with retention and uses of their personal data to which they object.

Another exception to the right also applies only to those past the age of majority. Paragraph 55(2)(f) provides that an organization may refuse a request for disposal of personal information if:

(f) the information is not in relation to a minor and it is scheduled to be disposed of in accordance with the organization’s information retention policy, and the organization informs the individual of the remaining period of time for which the information will be retained.

What this means is that if an organization has a retention policy that conforms to s. 53 of Bill C-27 (one that provides for the destruction of personal information once it is no longer necessary for the purposes for which it was collected, used or disclosed), then it can refuse a request for erasure – unless, of course, it is a minor who requests erasure. In that case, they must act in advance of the normal disposal schedule. This provision was no doubt added to save organizations from the burden of having to constantly respond to requests for erasure of personal data. For large swathes of personal data, for example, they can prepare a standard response that informs a requestor of their retention policy and provides the timetable on which the data will be deleted once it is no longer necessary to fulfill the purposes for which it was collected. If this provision can also be relied upon when an individual ceases to do business with an organization and requests the deletion of their information, then the right of erasure in Bill C-27 will become effectively useless in the case of any company with a data retention policy. Except, of course, for minors.

Finally, organizations will be given the right to refuse to consider requests for deletion that are “vexatious or made in bad faith”. Let’s hit pause here. This exception is to protect commercial entities against data subjects. I understand that organizations do not want to be subject to mass campaigns for data deletion – or serial requests by individuals – that overwhelm them. That might happen. However, the standard form email that will be part of the ‘regular deletion schedule’ exception discussed above will largely suffice to address this problem. Organizations now have enormous abilities to collect massive amounts of personal data and to use these data for a wide variety of purposes. Many do this responsibly, but there are endless examples of overcollection, over-retention, excessive sharing, poor security, and outright abuses of personal data. The right of erasure is a new right for individuals to help them exercise greater control over their personal data in a context in which such data are often flagrantly misused. To limit this right based on what an organization considers vexatious is a demonstration of how the balance in Bill C-27 leans towards the free flow and use of personal data rather than the protection of privacy.

It is important to note that there is yet another limit on the right of erasure, which is found in Bill C-27’s definition of ‘dispose’. According to this definition, dispose means “to permanently and irreversibly delete personal information or to anonymize it”. Thus, an organization can choose to anonymize personal data, and once it has done so, the right of erasure is not available. (See my post on anonymized and de-identified data for what ‘anonymized’ means). Section 2(3) of Bill C-27 also removes the right of erasure where information is merely de-identified (pseudonymized). This seems like an internal contradiction in the legislation. Disposal means deletion or rigorous anonymization – but, under s. 2(3), a company can just pseudonymize to avoid a request for disposal. The difference seems to be that pseudonymized data may still eventually need to be disposed of under data retention limits, whereas anonymized data can be kept forever.

All told, as a right that is meant to give more control to individuals, the right of erasure in Bill C-27 is a bit of a bust. Although it allows an individual to ask an organization to delete data (and not just data that the individual provided), the right is countered by a great many bases on which the organization can avoid it. It’s a bit of a ‘Canadian compromise’ (one of the ones in which Canadians get compromised): individuals get a new right; organizations get to side-step it.


Published in Privacy

[Note: This is my third in a series of posts on the new Bill C-27 which will reform private sector data protection law in Canada and which will add a new Artificial Intelligence and Data Act. The previous two posts addressed consent and de-identification/anonymization.]

In 2018 a furore erupted over media reports that Statistics Canada (StatCan) had sought to collect the financial data of half a million Canadians from Canadian banks in order to generate statistical data. Reports also revealed that it had already collected a substantial volume of personal financial data from credit agencies. The revelations led to complaints to the Privacy Commissioner, who carried out an investigation and issued an interim and a final report. One outcome was that StatCan worked with the Office of the Privacy Commissioner of Canada to develop a new approach to the collection of such data. Much more recently, there were expressions of public outrage when media reported that the Public Health Agency of Canada (PHAC) had acquired de-identified mobility data about Canadians from Telus in order to inform its response to the COVID-19 pandemic. This led to hearings before the ETHI Standing Committee of the House of Commons, and resulted in a report with a series of recommendations.

Both of these instances involved attempts by government institutions or agencies to make use of existing private sector data to enhance their analyses or decision-making. Good policy is built on good data; we should support and encourage the responsible use of data by government in its decision-making. At the same time, however, there is clearly a deep vein of public distrust in government – particularly when it comes to personal data – that cannot be ignored. Addressing this distrust requires both transparency and strong protection for privacy.

Bill C-27, introduced in Parliament in June 2022, proposes a new Consumer Privacy Protection Act to replace the aging Personal Information Protection and Electronic Documents Act (PIPEDA). As part of the reform, this private sector data protection bill contains provisions that are tailored to address the need of government – as well as the commercial data industry – to access personal data in the hands of the private sector.

Two provisions in C-27 are particularly relevant here: sections 35 and 39. Section 35 deals specifically with the sharing of private sector data for statistical, study or research purposes; PIPEDA contains a similar exception in s. 7(3)(f). Section 39, which deals with the use of data for “socially beneficial purposes”, is entirely new. Both s. 35 and s. 39 were in the predecessor to C-27, Bill C-11. Only section 35 has been changed since C-11 – a small change that significantly broadens its scope.

Section 35 of Bill C-27 provides:

35 An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the disclosure is made for statistical purposes or for study or research purposes and those purposes cannot be achieved without disclosing the information;

(b) it is impracticable to obtain consent; and

(c) the organization informs the Commissioner of the disclosure before the information is disclosed.

This provision would enable the kind of data sharing by the private sector that was involved in the StatCan example mentioned above, and that was previously enabled by s. 7(3)(f) of PIPEDA. As is currently the case under PIPEDA, s. 35 would allow for the sharing of personal information without an individual’s knowledge or consent. It is important to note that there is no requirement that the personal information be de-identified or anonymized in any way (see my earlier post on de-identification and anonymization here). The remainder of s. 35 imposes the only limitations on such sharing. One of these relates to purpose. The sharing must be for “statistical purposes” (but note that StatCan is not the only organization that engages in statistical activities, and such sharing is not limited to StatCan). It can also be for “study or research purposes”. Bill C-11, like PIPEDA, had referred to “scholarly study or research purposes”. The removal of ‘scholarly’ substantially enlarges the scope of this provision (for example, market research and voter profile research would no doubt count). There is a further qualifier – the statistical, study or research purposes have to be ones that “cannot be achieved without disclosing the information”. However, they do not have to be ‘socially beneficial’ (although there is an overarching provision in s. 5 that requires that the purposes for collecting, using or disclosing personal information be ones that a ‘reasonable person would consider appropriate in the circumstances’). Section 35(b) (as is the case under PIPEDA’s s. 7(3)(f)) also requires that it be impracticable to obtain consent. This is not really much of a barrier: if you want to use the data of half a million individuals, it is simply not practical to seek their consent. Finally, the organization must inform the Commissioner of the disclosure before it takes place. This provides a thin film of transparency. Another nod and a wink to transparency is found in s. 62(2)(b), which requires organizations to provide a ‘general account’ of how they apply “the exceptions to the requirement to obtain an individual’s consent under this Act”.

Quebec’s Loi 25 also addresses the use of personal information in the hands of the private sector for statistical and research purposes without individual consent. Unlike Bill C-27, it contains more substantive guardrails:

21. A person carrying on an enterprise may communicate personal information without the consent of the persons concerned to a person or body wishing to use the information for study or research purposes or for the production of statistics.

The information may be communicated if a privacy impact assessment concludes that

(1) the objective of the study or research or of the production of statistics can be achieved only if the information is communicated in a form allowing the persons concerned to be identified;

(2) it is unreasonable to require the person or body to obtain the consent of the persons concerned;

(3) the objective of the study or research or of the production of statistics outweighs, with regard to the public interest, the impact of communicating and using the information on the privacy of the persons concerned;

(4) the personal information is used in such a manner as to ensure confidentiality; and

(5) only the necessary information is communicated.

The requirement of a privacy impact assessment (PIA) in Loi 25 is important, as is the condition that this assessment consider the goals of the research or statistical activity in relation to the public interest and to the impact on individuals. Loi 25 also contains important limitations on how much information is shared. Bill C-27 addresses none of these issues. At the very least, as is the case under Quebec law, there should be a requirement to conduct a PIA with similar considerations – and to share it with the Privacy Commissioner. Since this is data sharing without knowledge or consent, there could even be a requirement that the PIAs be made publicly available, with appropriate redactions if necessary.

Some might object that there is no need to incorporate these safeguards in the new private sector data protection law since those entities (such as StatCan) who receive the data have their own secure policies and practices in place to protect data. However, under s. 35 there is no restriction on who may receive data for statistical, study or research purposes, and no reason to assume that they have appropriate safeguards in place. If they do, then the PIA can reflect this.

Section 39 addresses the sharing of de-identified personal information for socially beneficial purposes. Presumably, this would be the provision under which, in the future, mobility data might be shared with an agency such as PHAC. Under s. 39:

39 (1) An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the personal information is de-identified before the disclosure is made;

(b) the disclosure is made to

(i) a government institution or part of a government institution in Canada,

(ii) a health care institution, post-secondary educational institution or public library in Canada,

(iii) any organization that is mandated, under a federal or provincial law or by contract with a government institution or part of a government institution in Canada, to carry out a socially beneficial purpose, or

(iv) any other prescribed entity; and

(c) the disclosure is made for a socially beneficial purpose.

(2) For the purpose of this section, socially beneficial purpose means a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

This provision requires that shared information be de-identified, although as noted in my earlier post, de-identification in Bill C-27 no longer means what it did in C-11. The data shared may have only direct identifiers removed, leaving individuals easily identifiable. The disclosure must be for socially beneficial purposes, and it must be to a specified or prescribed entity. I commented on the identical provision in C-11 here, so I will not repeat those earlier concerns in detail. They remain unaddressed in Bill C-27. The most significant gap is the lack of a requirement for a data governance agreement to be in place between the parties, based upon the kinds of considerations that would be relevant in a privacy impact assessment.

Where the sharing is to be with a federal government institution, the Privacy Act should provide additional protection. However, the Privacy Act is itself an antediluvian statute that has long been in need of reform. It is worth noting that while the doors to data sharing are opened in Bill C-27, many of the necessary safeguards – at least where government is concerned – are left for another statute in the hands of another department, and that lies who-knows-where in the government’s legislative agenda (although rumours are that we might see a Bill this fall [Warning: holding your breath could be harmful to your health.]). In its report on the sharing of mobility data with PHAC, ETHI calls for much greater transparency about data use on the part of the Government of Canada, and also calls for enhanced consultation with the Privacy Commissioner prior to engaging in this form of data collection. Apart from the fact that these pieces will not be in place – if at all – until the Privacy Act is reformed, the exceptions in sections 35 and 39 of C-27 apply to organizations and institutions outside the federal government, and thus, can involve institutions and entities not subject to the Privacy Act. Guardrails should be included in C-27 (as they are, for example, in Loi 25); yet, they are absent.

As noted earlier, there are sound reasons to facilitate the use of personal data to aid in data-driven decision-making that serves the public interest. However, any such use must protect individual privacy. Beyond this, there is also a collective privacy dimension to the sharing of even anonymized human-derived data. This should also not be ignored. It requires greater transparency and public engagement, along with appropriate oversight by the Privacy Commissioner. Bill C-27 facilitates use without adequately protecting privacy – collective or individual. Given the already evident lack of trust in government, this seems either tone-deaf or deeply cynical.

Published in Privacy