Teresa Scassa - Blog

The Artificial Intelligence and Data Act (AIDA) in Bill C-27 will create new obligations for those responsible for AI systems (particularly high impact systems), as well as those who process or make available anonymized data for use in AI systems. In any regulatory scheme that imposes obligations, oversight and enforcement are key issues. A long-standing critique of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been that it is relatively toothless. This is addressed in the first part of Bill C-27, which reforms the data protection law to provide a suite of new enforcement powers that include order-making powers for the Privacy Commissioner and the ability to impose stiff administrative monetary penalties (AMPs). The AIDA comes with ‘teeth’ as well, although these teeth seem set within a rather fragile jaw. I will begin by identifying the oversight and enforcement powers (the teeth) and will then look at the agent of oversight and enforcement (the jaw). The table below sets out the main obligations accompanied by specific compliance measures. There is also the possibility that any breach of these obligations might be treated as either a violation or offence, although the details of these require elaboration in as-yet-to-be-drafted regulations.

 

Obligation: To keep records regarding the manner in which data is anonymized and the use or management of anonymized data, as well as records of the assessment of whether an AI system is high-impact (s. 10)

Oversight power: Minister may order the record-keeper to provide any of these records (s. 13(1))

Obligation: Any record-keeping obligations imposed on any actor in as-yet undrafted regulations

Oversight power: Where there are reasonable grounds to believe that the use of a high-impact system could result in harm or biased output, the Minister can order the specified person to provide these records (s. 14)

Obligation: To comply with any of the requirements in ss. 6-12, or with any order made under ss. 13-14

Oversight power: Minister (on reasonable grounds to believe there has been a contravention) can require the person to conduct either an internal or an external audit with respect to the possible contravention (s. 15); the audit must be provided to the Minister. A person who has been audited may be ordered by the Minister to implement any measure specified in the order, or to address any matter in the audit report (s. 16)

Obligation: To cease using or making available for use a high-impact system that creates a serious risk of imminent harm

Oversight power: Minister may order a person responsible for a high-impact system to cease using it or making it available for use if the Minister has reasonable grounds to believe that its use gives rise to a serious risk of imminent harm (s. 17)

Obligation: Transparency requirement (any person referred to in sections 6 to 12, 15 and 16)

Oversight power: Minister may order the person to publish on a publicly available website any information related to any of these sections of the AIDA, although there is an exception for confidential business information (s. 18)

 

Compliance with orders made by the Minister is mandatory (s. 19) and there is a procedure for them to become enforceable as orders of the Federal Court.

Although the Minister is subject to confidentiality requirements, they may disclose any information they obtain through the exercise of the above powers to certain entities if they have reasonable grounds to believe that a person carrying out a regulated activity “has contravened, or is likely to contravene, another Act of Parliament or a provincial legislature” (s. 26(1)). Those entities include the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, the Canadian Radio-television and Telecommunications Commission, their provincial analogues, or any other person prescribed by regulation. An organization may therefore be in violation of statutes other than AIDA and may be subject to investigation and penalties under those laws.

The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints.

The AIDA sets up the Minister as the actor responsible for oversight and enforcement, but the Minister may delegate any or all of their oversight powers to the new Artificial Intelligence and Data Commissioner, who is created by s. 33. The Data Commissioner is described in the AIDA as “a senior official of the department over which the Minister presides”. They are not remotely independent. Their role is “to assist the Minister” responsible for the AIDA (most likely the Minister of Industry), and they will also therefore work in the Ministry responsible for supporting the Canadian AI industry. There is essentially no real regulator under the AIDA. Instead, oversight and enforcement are provided by the same group that drafted the law and that will draft the regulations. It is not a great look, and it certainly goes against the advice of the OECD on AI governance, as Mardi Wentzel has pointed out.

The role of Data Commissioner was first floated in the 2019 Mandate Letter to the Minister of Industry, which provided that the Minister would: “create new regulations for large digital companies to better protect people’s personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations.” The 2021 Federal Budget provided funding for the Data Commissioner, and described the role as being to “inform government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.” In comparison with these somewhat grander ideas, the new AI and Data Commissioner role is – well – smaller than the title. It is a bit like telling your kids you’re getting them a deluxe bouncy castle for their birthday party and then on the big day tossing a couple of couch cushions on the floor instead.

To perhaps add a gloss of some ‘independent’ input into the administration of the statute, the AIDA provides for the creation of an advisory committee (s. 35) that will provide the Minister with “advice on any matters related to this Part”. However, this too is a bit of a throwaway. Neither the AIDA nor any anticipated regulations will provide for any particular composition of the advisory committee, for the appointment of a chair with a fixed term, or for any reports by the committee on its advice or activities. It is the Minister who may choose to publish advice he receives from the committee on a publicly available website (s. 35(2)).

The AIDA also provides for enforcement, which can take one of two routes. Well, one of three routes. One route is to do nothing – after all, the Minister is also responsible for supporting the AI industry in Canada – so this cannot be ruled out. A second option will be to treat a breach of any of the obligations specified in the as-yet undrafted regulations as a “violation” and impose an administrative monetary penalty (AMP). A third option is to treat a breach as an “offence” and proceed by way of prosecution (s. 30). A choice must be made between proceeding via the AMP route or the offence route (s. 29(3)). Providing false information and obstruction are distinct offences (s. 30(2)). There are also separate offences in ss. 38 and 39 relating to the use of illegally obtained data and knowingly or recklessly making an AI system available for use that is likely to cause harm.

Administrative monetary penalties under Part 1 of Bill C-27 (relating to data protection) are quite steep. However, the necessary details regarding the AMPs that will be available for breach of the AIDA are to be set out in regulations that have yet to be drafted (s. 29(4)(d)). All that the AIDA really tells us about these AMPs is that their purpose is “to promote compliance with this Part and not to punish” (s. 29(2)). Note, however, the power at the bottom of the list of regulation-making powers for AMPs set out in s. 29(4). This provision allows the Minister to make regulations “respecting the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme.” There is a good chance that the AMPs will (eventually) be administered by the new Personal Information and Data Tribunal, which is created in Part 2 of Bill C-27. This, at least, will provide some separation between the Minister and the imposition of financial penalties. If this is the plan, though, the draft law should say so.

It is clear that not all breaches of the obligations in the AIDA will be ones for which AMPs are available. Regulations will specify which provisions of the AIDA or its regulations are ones whose breach will constitute a violation (s. 29(4)(a)), and will classify the breach of each particular obligation as minor, serious or very serious (s. 29(4)(b)). These as-yet undrafted regulations will also set out how any such proceedings will unfold, as well as the amounts or ranges of AMPs and the factors to be taken into account in imposing them.

This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after-hours discussion of what enforcement under the AIDA should look like. Clearly, the goal is to be ‘agile’, but ‘agile’ should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade. There are instances of other federal laws left incomplete by never-drafted regulations. For example, we are still waiting for the private right of action provided for in Canada’s Anti-Spam Law, which cannot come into effect until the necessary regulations are drafted. A cynic might even say that failing to draft essential regulations is a good way to check the “enact legislation on this issue” box on the to-do list, without actually changing the status quo.

This is the third in my series of posts on the Artificial Intelligence and Data Act (AIDA) found in Bill C-27, which is part of a longer series on Bill C-27 generally. Earlier posts on the AIDA have considered its purpose and application, and regulated activities. This post looks at the harms that the AIDA is designed to address.

The proposed Artificial Intelligence and Data Act (AIDA), which is the third part of Bill C-27, sets out to regulate ‘high-impact’ AI systems. The concept of ‘harm’ is clearly important to this framework. Section 4(b) of the AIDA states that a purpose of the legislation is “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”.

Under the AIDA, persons responsible for high-impact AI systems have an obligation to identify, assess, and mitigate risks of harm or biased output (s. 8). Those persons must also notify the Minister “as soon as feasible” if a system for which they are responsible “results or is likely to result in material harm”. There are also a number of oversight and enforcement functions that are triggered by harm or a risk of harm. For example, if the Minister has reasonable grounds to believe that a system may result in harm or biased output, he can demand the production of certain records (s. 14). If there is a serious risk of imminent harm, the Minister may order a person responsible to cease using a high impact system (s. 17). The Minister is also empowered to make public certain information about a system where he believes that there is a serious risk of imminent harm and the publication of the information is essential to preventing it (s. 28). Elevated levels of harm are also a trigger for the offence in s. 39, which involves “knowing or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property”.

‘Harm’ is defined in s. 5(1) to mean:

(a) physical or psychological harm to an individual;

(b) damage to an individual’s property; or

(c) economic loss to an individual.

The term “individual” in this definition places an important limit on the scope of the AIDA. First, it is unlikely that the term ‘individual’ includes a corporation. Typically, the word ‘person’ is considered to include corporations, and the word ‘person’ is used in this sense in the AIDA. This suggests that “individual” is meant to have a different meaning. The federal Interpretation Act is silent on the issue. It is a fair interpretation of the definition of ‘harm’ that “individual” is not the same as “person”, and means an individual (human) person. The French version uses the term “individu”, and not “personne”. The harms contemplated by this legislation are therefore harms to individuals and not to corporations.

Defining harm in terms of individuals has other ramifications. The AIDA frames the harms of high-impact AI systems in terms of their impacts on individuals. Importantly, this excludes groups and communities. It also very significantly focuses on what are typically considered quantifiable harms, and uses language that suggests quantifiability (economic loss, damage to property, physical or psychological harm). Some important harms may be difficult to establish or to quantify. For example, class action lawsuits relating to significant data breaches have begun to wash up on the beach of lost causes due to the impossibility of proving material loss either because, although thousands may have been impacted, the individual losses are impossible to quantify, or because it is impossible to prove a causal link between very real identity theft and that particular data breach. Consider an AI system that manipulates public opinion through an algorithm that drives content to individuals based on its shock value rather than its truth. Say this happens during a pandemic and it convinces people that they should not get vaccinated or take other recommended public health measures. Say some people die because they were misled in this way. Say other people die because they were exposed to infected people who were misled in this way. How does one prove the causal link between the physical harm of injury or death of an individual and the algorithm? What if there is an algorithm that manipulates voter sentiment in a way that changes the outcome of an election? What is the quantifiable economic loss or psychological harm to any individual? How could causation be demonstrated? The harm, once again, is collective.

The EU AI Act has also been criticized for focusing on individual harm, but the wording of that law is still broader than that in the AIDA. The EU AI Act refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights of persons”. This at least introduces a more collective dimension, and it avoids the emphasis on quantifiability.

The federal government’s own Directive on Automated Decision-Making (DADM) which is meant to guide the development of AI used in public sector automated decision systems (ADS) also takes a broader approach to impact. In assessing the potential impact of an ADS, the DADM takes into account: “the rights of individuals or communities”, “the health or well-being of individuals or communities”, “the economic interests of individuals, entities, or communities”, and “the ongoing sustainability of an ecosystem”.

With its excessive focus on individuals, the AIDA is simply tone deaf to the growing global understanding of collective harm caused by the use of human-derived data in AI systems.

One response of the government might be to point out that the AIDA is also meant to apply to “biased output”. Biased output is defined in the AIDA as:

content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (s. 5(1)) [my emphasis]

The argument here will be that the AIDA will also capture discriminatory biases in AI. However, the phrase “in relation to an individual” in this definition once again returns the focus to individuals, rather than groups. It can be very hard for an individual to demonstrate that a particular decision discriminated against them (especially if the algorithm is obscure). In any event, biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination.

It is also important to note that the definition of “harm” does not include “biased output”, and while the terms are used in conjunction in some cases (for example, in s. 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute, but not others, a judge interpreting the statute might presume that when only one of the terms is used, then it is only that term that is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high-impact system to cease using it or making it available if there is a “serious risk of imminent harm”. Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm”. In both cases, the defined term ‘harm’ is used, but not ‘biased output’.

The goals of the AIDA to protect against harmful AI are both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.

This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance.

Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1):

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI, and to the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies the actor in the left-hand column and the obligation in the column on the right.

 

Actor: A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity (see definition of “regulated activity” in s. 5(1))

Obligations: s. 6 (data anonymization, use and management); s. 10 (record keeping regarding measures taken under s. 6)

Actor: A person who is responsible for an artificial intelligence system (see definition of “person responsible” in s. 5(2))

Obligations: s. 7 (assess whether a system is high-impact); s. 10 (record keeping regarding the reasons supporting their assessment of whether the system is high-impact under s. 7)

Actor: A person who is responsible for a high-impact system (see definition of “person responsible” in s. 5(2); definition of “high-impact system”, s. 5(1))

Obligations: s. 8 (measures to identify, assess and mitigate risks of harm or biased output); s. 9 (measures to monitor compliance with the mitigation measures established under s. 8 and the effectiveness of those measures); s. 10 (record keeping regarding measures taken under ss. 8 and 9); s. 12 (obligation to notify the Minister as soon as feasible if the use of the system results or is likely to result in material harm)

Actor: A person who makes available for use a high-impact system

Obligation: s. 11(1) (publish a plain-language description of the system and other required information)

Actor: A person who manages the operation of a high-impact system

Obligation: s. 11(2) (publish a plain-language description of how the system is used and other required information)

 

For most of these provisions, the details of what is actually required by the identified actor will depend upon regulations that have yet to be drafted.

A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms:

5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.

Thus, the obligations in ss. 7, 8, 9, 10 and 11, apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that with the exception of sections 6 and 7, the obligations in the AIDA also apply only to ‘high impact’ systems. The definition of a high-impact system has been left to regulations and is as yet unknown.

Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purposes of the development of an AI system. All of this must also be in the course of international or interprovincial trade and commerce.

Note that the first of these two purposes involves data “related to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization.

Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA only applies to a ‘regulated activity’, which is one carried out in the course of international and interprovincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain. Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the divergent treatment of anonymized data under the CPPA and the AIDA creates a significant governance gap in terms of scope, substance and process.

On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

The default rule for ‘de-identified’ personal information is that it is still personal information. However, the CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of the AIDA nor particularly relevant to it. Since the legislator is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s. 38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data (from which individuals can be re-identified) are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.

 

This is the first of a series of posts on the part of Bill C-27 that would enact a new Artificial Intelligence and Data Act (AIDA) in Canada. Previous posts have considered the part of the bill that would reform Canada’s private sector data protection law. This series on the AIDA begins with an overview of its purpose and application.

Bill C-27 contains the text of three proposed laws. The first is a revamped private sector data protection law. The second would establish a new Data Tribunal that is assigned a role under the data protection law. The third is a new Artificial Intelligence and Data Act (AIDA). While the two other components were present in the bill’s failed predecessor Bill C-11, the AIDA is new – and for many it came as a bit of a surprise. The common thread, of course, is the government’s Digital Charter, which set out a series of commitments for building trust in the digital and data economy.

The preamble to Bill C-27, as a whole, addresses both AI and data protection concerns. Where it addresses AI regulation directly, it identifies the need to harmonize with national and international standards for the development and deployment of AI, and the importance of ensuring that AI systems uphold Canadian values in line with the principles of international human rights law. The preamble also signals a need for a more agile regulatory framework – something that might go towards justifying why so much of the substance of AI governance in the AIDA has been left to the development of regulations. Finally, the preamble speaks of a need “to foster an environment in which Canadians can seize the benefits of the digital and data-driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy.” This, then, frames how AI regulation (and data protection) will work in Canada – an attempt to walk a tightrope between enabling fast-paced innovation and protecting norms, values and privacy rights.

Regulating the digital economy has posed some constitutional (division of powers) challenges for the federal government, and these challenges are evident in the AIDA, particularly with respect to the scope of application of the law. Section 4 sets out the dual purposes of the legislation:

(a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and

(b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

By focusing on international and interprovincial trade and commerce, the government asserts its general trade and commerce jurisdiction, without treading on the toes of the provinces, who remain responsible for intra-provincial activities. Yet, this means that there will be important gaps in AI regulation. Until the provinces act, these gaps will include purely provincial AI solutions, whether in the public or private sectors, and, to a large extent, AI in the not-for-profit sector. However, this could get complicated, since the AIDA sets out obligations for a range of actors, some of which could include international or interprovincial providers of AI systems to provincial governments.

The second purpose set out in s. 4 suggests that at least when it comes to AI systems that may result in serious harm, the federal jurisdiction over criminal law may be invoked. The AIDA creates a series of offences that could be supported by this power – yet, ultimately the offences relate to failures to meet the obligations that arise based on being engaged in a ‘regulated activity’, which takes one back to activities carried out in the course of international or interprovincial trade and commerce. The federal trade and commerce power thus remains the backbone of this bill.

Although there would be no constitutional difficulties with the federal government exerting jurisdiction over its own activities, the AIDA specifically excludes its application to federal government institutions, as defined in the Privacy Act. Significantly, it also does not apply to products, services or activities that are under the control of the Minister of National Defence, the Canadian Security Intelligence Service, the Communications Security Establishment or any other person who is responsible for a federal or provincial department or agency that is prescribed by regulation. This means that the AIDA would not apply even to those AI systems developed by the private sector for any of the listed actors. The exclusions are significant, particularly since the AIDA seems to be focussed on the prevention of harm to individuals (more on this in a forthcoming post) and the parties excluded are ones that might well develop or commission the development of AI that could (seriously) adversely impact individuals. It is possible that the government intends to introduce or rely upon other governance mechanisms to ensure that AI and personal data are not abused in these contexts. Or not. In contrast, the EU’s AI Regulation addresses the perceived need for latitude when it comes to national defence via an exception for “AI systems developed or used exclusively for military purposes” [my emphasis]. This exception is nowhere near as broad as that in the AIDA, which excludes all “products, services or activities under the control of the Minister of National Defence”. Note that the Department of National Defence (DND) made headlines in 2020 when it contracted for an AI application to assist in hiring; it also made headlines in 2021 over an aborted psyops campaign in Canada. There is no reason why non-military DND uses of AI should not be subject to governance.

The government might justify excluding the federal public sector from governance under the AIDA on the basis that it is already governed by the Directive on Automated Decision-Making (DADM). This Directive applies to automated decision-making systems developed and used by the federal government, although there are numerous gaps in its application. For example, it does not apply to systems adopted before it took effect, it applies only to automated decision systems and not to other AI systems, and it currently does not apply to systems used internally (e.g., to govern public sector employees). It also does not have the enforcement measures that the AIDA has, and, since government systems could well be high-impact, this seems like a gap in governance. Consider in this respect the much-criticized ArriveCan App, designed for COVID-19 border screening and now contemplated for much broader use at border entries into Canada. The app has been criticized for its lack of transparency, and for the ‘glitch’ that sent inexplicable quarantine orders to potentially thousands of users. The ArriveCan app went through the DADM process, but clearly this is not enough to address governance issues.

Another important limit on the application of the AIDA is that most of its obligations apply only to “high impact systems”. This term is defined in the legislation as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.” This essentially says that this crucial term in the Bill will mean what cabinet decides it will mean at some future date. It is difficult to fully assess the significance or impact of this statute without any sense of how this term will be defined. The only obligations that appear to apply more generally are the obligation in s. 6 regarding the anonymization of data used or intended for use in AI systems, and the obligation in s. 10 to keep records regarding the anonymization measures taken.

By contrast, the EU’s AI Regulation applies to all AI systems. These fall into one of four categories: unacceptable risk, high-risk, limited risk, and low/minimal risk. Those systems that fall into the first category are banned. Those in the high-risk category are subject to the regulation’s most stringent requirements. Limited-risk AI systems need only meet certain transparency requirements and low-risk AI is essentially unregulated. Note that Canada’s approach to ‘agile’ regulation is to address only one category of AI systems – those that fall into the as-yet undefined category of high ‘impact’. It is unclear whether this is agile or supine. It is also not clear what importance should be given to the choice of the word ‘impact’ rather than ‘risk’. However, it should be noted that risk refers not just to actual but to potential harm, whereas ‘impact’ seems to suggest actual harm. Although one should not necessarily read too much into this choice of language, the fact that this important element is left to regulations means that Parliament will be asked to enact a law without understanding its full scope of application. This seems like a problem.

 

Privacy is a human right. It is recognized in the Universal Declaration of Human Rights and other international human rights instruments. In Canada, the Supreme Court of Canada has interpreted the s. 8 Charter right to be secure against unreasonable search or seizure as a privacy right, and it has also found that data protection laws in Canada have ‘quasi-constitutional’ status because of the importance of the privacy rights on which they are premised. The nature of privacy as a human right should not be a controversial proposition, but it became so in Bill C-11, the 2020 Bill to reform the Personal Information Protection and Electronic Documents Act (PIPEDA). Bill C-11 did not address the human rights dimensions of data protection, and it was soundly criticized by the former Privacy Commissioner of Canada for failing to do so. Bill C-27, which contains the new PIPEDA reform bill, and which was introduced in June 2022, gives a nod to the human rights dimensions of data protection. This post will consider whether this is enough.

There are several reasons why the human rights dimensions of data protection law became such an issue in Canada. Data protection laws balance the privacy rights of individuals with the needs of organizations and governments to collect and use personal information for a range of purposes. If a balance is to be struck between two things, the weight given to considerations on either side of the scale must be appropriate. First, recognizing the human rights dimensions of the protection of personal data gives added weight to the interests of individuals (and communities) by acknowledging the importance that control over personal data has to the exercise of a variety of human rights (including, but not limited to, dignity, autonomy and freedom from discrimination). It also acknowledges the substantial threats that the data economy can pose to human rights. Second, the EU’s General Data Protection Regulation puts the human rights dimensions of privacy and data protection front and centre. Once this has been done across the EU, the omission of a similar approach from draft legislation in Canada takes on greater significance. It starts to look like a deliberate statement. Third, Quebec takes an explicit human rights-based approach to privacy, making it – well, awkward – to have a less human rights-forward standard crafted for the rest of Canada. In Ontario, a government White Paper considering a private sector data protection law for Ontario explicitly endorsed a human rights-based approach.

The federal government’s hesitation to address the human rights dimensions of privacy is rooted in its anxiety over the constitutional footing for a federal private sector data protection law. PIPEDA has been constitutionally justified under the federal government’s general trade and commerce power. This means that it is enacted to regulate an aspect of trade and commerce at the national level. PIPEDA focuses on data collected, used, and disclosed by the private sector in the course of commercial activity. The government’s concern is that adopting a human rights-based approach would transform the statute from one that addresses the management of personal data in the commercial context to one that governs human rights as they relate to personal data. Constitutional anxiety is evident even in the new name of the future data protection law: The Consumer Privacy Protection Act [my emphasis].

The former Privacy Commissioner of Canada, Daniel Therrien, commissioned a legal opinion on the issues of constitutionality linked to adopting a human rights-based approach. This opinion found that the legislation could support such an approach within the general trade and commerce framework. The federal government clearly takes a different view, which may be rooted in an almost pathological division-of-powers anxiety. After all, this government also refused to defend the constitutional challenge to the Genetic Non-Discrimination Act, even though the constitutionality of that statute (which began its life as a private-member’s bill) was ultimately upheld by a majority of the Supreme Court of Canada.

One of the changes in Bill C-27 from Bill C-11 is the addition of a preamble. It is in this preamble that the government now makes reference to the human rights basis for privacy. The preamble also enumerates other considerations, making it clear that the interests (or rights) of individuals are just one factor in a rather complex balance. The other factors include the importance of trade and free flows of data, the need to support and foster the data-driven economy, the need for an agile regulatory framework, the need to not unduly burden small businesses, the need for harmonization, and the importance of facilitating data collection and use in the public interest.

The clauses in the preamble that address privacy and human rights include an acknowledgement that the protection of personal information is essential to the autonomy and dignity of individuals and to their full enjoyment of their fundamental rights and freedoms in Canada. This is probably the strongest statement and it is near the top of the list. There is also an acknowledgement of the importance of privacy and data protection principles found in international instruments. There are some references to human rights in relation to AI, but those relate to the Artificial Intelligence and Data Act that is part of this Bill. There is also a closing paragraph which refers to bolstering the digital and data-driven economy by establishing a regulatory framework “that supports and protects Canadian norms and values, including the right to privacy”. At best, however, this just emphasizes that the right to privacy is one factor in the balance – and not necessarily the predominant one. The government has been reasonably explicit in the preamble about the range of competing public policy considerations that feed into their data protection bill. The overall message is: “Yes, privacy is a human right, but we’re trying to do something here.”

Bill C-27 also includes the text of a proposed Artificial Intelligence and Data Act (AIDA). This statute is arguably the government’s attempt to address human rights in the AI and data context, in that it contains measures meant to address discriminatory bias in AI (which is fueled by data). It is meant to apply to ‘high impact’ systems (not defined in the Bill), although impact certainly seems to be understood in terms of harms to individuals. Next week my series of posts will begin to consider the AIDA in more detail. For present purposes, however, consider that the AIDA will only apply to systems defined as ‘high impact’; it addresses only individual and not group harms; it will apply only in the context of AI (whereas data are used in many more contexts); and many organizations and institutions are excluded from its scope. In any event, while the proper governance of AI is of great importance, so is the proper governance of personal data, which is the domain of data protection legislation. The AIDA is therefore not an answer to concerns over the need for a human rights-based approach to data protection.

I have argued for a human rights-based approach to privacy in data protection law. The volumes of data collected, the way these data are used and shared, and the potential impacts they can have on peoples’ lives all suggest that we can no longer mince words when it comes to understanding the significance of data protection. Technology now reduces just about anything to streams of data, and those data are used to profile, categorize, assess, and monitor individuals. They are used in tools of surveillance and control. Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

In Canada we have relied upon the normative idea in s. 5(3) of PIPEDA that any collection, use or disclosure of personal information must be “for purposes that a reasonable person would consider are appropriate in the circumstances”. This normative concept is also found in s. 12(1) of Bill C-27. Although past privacy commissioners have given substance to this provision, the concern remains that without an anchor in an explicitly human rights-based approach, the ‘reasonable person’ might, over time, be interpreted to be more excited about the potential of data to boost the economy than concerned about the adverse effects its use might have on certain individuals or groups. Given that Bill C-27 will shift interpretive authority over key concepts in the legislation from the Privacy Commissioner to the mysterious Data Tribunal, this normative wiggle-room is particularly concerning.

In spite of this, the addition of a preamble to Bill C-27, with its references to privacy and human rights is probably all that we are going to get from this government on this issue. There is not much interest in going back to the drawing board with this Bill, and the government is no doubt impatient to move the data protection law reform file forward.

In the meantime, it is worth noting that the provinces remain free to enact and/or amend their own private sector data protection laws, and to make strong statements about a human rights basis for data protection. The laws in Alberta and British Columbia will be reformed once a new federal bill is passed. And, with a newly re-elected government, Ontario might once again turn its attention to crafting its own law. There are other fronts on which this battle can be fought, and perhaps it is best to turn attention to these.

 

Monday, 25 July 2022 06:34

Bill C-27 and Children’s Privacy

Note: This is the fifth in a series of posts on Canada's Bill C-27 which, among other things, will reform Canada's private sector data protection law.

Bill C-27, the bill to amend Canada’s existing private sector data protection law, gives particular attention to the privacy rights of minors in a few instances. This is different from the current law, and it is a change from the previous (failed) reform bill, Bill C-11. The additions to Bill C-27 respond to concerns raised by privacy advocates and scholars regarding Bill C-11’s silence on children’s privacy.

Directly addressing children’s privacy has been a bit of a struggle for this government, which seems particularly sensitive to federal-provincial division of powers issues. After all, it is the provinces that get to determine the age of majority. A private sector data protection law that defined a child in terms of a particular age range for the purposes of consent, for example, might raise constitutional hackles. Further, many of the privacy issues that concern parents the most are ones that fall at least to some extent within provincial jurisdiction. Consider the issues around children’s privacy and educational technologies used in schools. While many of those technologies are sourced from the private sector, the schools themselves are subject to provincial public sector data protection laws, and so, the schools’ adoption and use of these technologies is governed by provincial legislation. That said, children still spend a great deal of time online; their toys are increasingly connected to the Internet of Things; their devices and accompanying apps capture and transmit all manner of data; and they, their parents and friends post innumerable pictures, videos and anecdotes about them online. Children have a clear interest in private sector data protection.

The government’s modest response to concerns about children’s privacy in Bill C-27 no doubt reflects this constitutional anxiety. The most significant provision is found in s. 2(2), which states that “For the purposes of this Act, the personal information of minors is considered to be sensitive information.” Note that the reference is to ‘minors’ and not ‘children’, and no attempt is made to define the age of majority.

If you search Bill C-27 for further references to minors, you will find few. Two important ones are found in s. 55, which deals with the right of erasure. This right, which allows an individual to request the deletion of their data, has a number of significant exceptions to it. However, two of these exceptions do not apply in the case of the data of minors (see my post on the right of erasure). The first of these allows an organization to deny a request for erasure if “the disposal of the information would have an undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service to the individual in question”. The second allows an organization to deny a request for deletion if the data is subject to a data retention policy. Neither exception to the right of erasure applies in the case of the data of minors. This is important as it will allow minors (or those acting on their behalf) to obtain deletion of data – even outside the organization’s regular disposal schedule.

The Personal Information Protection and Electronic Documents Act currently links valid consent to a person’s capacity to understand “the nature, purpose and consequences of the collection, use or disclosure of the personal information to which they are consenting” (s. 6.1). Bill C-11 would have eliminated this requirement for valid consent. Responding to criticisms, the government, in Bill C-27, has added a requirement that consent must be sought “in plain language that an individual to whom the organization’s activities are directed would reasonably be expected to understand” (s. 15(4)). It is good to see this element returned to the reform bill, even if it is a little half-hearted compared to PIPEDA’s s. 6.1. In this regard, Bill C-27 is an improvement over C-11. (See my post on consent in Bill C-27).

Although no other provisions are drafted specifically for minors, declaring that the personal information of minors is considered ‘sensitive’ is significant in a Bill that requires organizations to give particular attention to the sensitivity of personal data in a range of circumstances. For example, an organization’s overall privacy management program must take into account both the volume and sensitivity of the information that the organization collects (s. 9(2)). The core normative principle in the legislation, which limits the collection, use and disclosure of personal information to that which a reasonable person would consider appropriate in the circumstances, also requires a consideration of the sensitivity of personal data (s. 12(2)(a)). In determining whether an organization can rely upon implied consent, the sensitivity of the information is a relevant factor (s. 15(5)). Organizations, in setting data retention limits, must take into account, among other things, the sensitivity of personal data (s. 53(2)), and they must provide transparency with respect to those retention periods (s. 62(2)(e)). The security safeguards developed for personal data must take into account its sensitivity (s. 57(1)). When there is a data breach, the obligation to report the breach to the Commissioner depends upon a real risk of significant harm – one of the factors in assessing such a risk is the sensitivity of the personal data (s. 58(8)). When data are de-identified, the measures used for de-identification must take into account the sensitivity of the data, and the Commissioner, in exercising his powers, duties or functions, must also consider the sensitivity of the personal data dealt with by an organization (s. 109).

The characterization of the data of minors as ‘sensitive’ means that the personal data of children – no matter what it is – will be treated as sensitive data in the interpretation and application of the law. In practical terms, this is not new. The Office of the Privacy Commissioner has consistently treated the personal data of children as sensitive. However, it does not hurt to make this approach explicit in the law. In addition, the right of erasure for minors is an improvement over both PIPEDA and Bill C-11. Overall, then, Bill C-27 offers some enhancement to the data protection rights of minors.

As part of my series on Bill C-27, I will be writing about both the proposed amendments to Canada’s private sector data protection law and the part of the Bill that will create a new Artificial Intelligence and Data Act (AIDA). So far, I have been writing about privacy, and my posts on consent, de-identification, data-for-good, and the right of erasure are already available. Posts on AIDA will follow, although I still have a bit more territory on privacy to cover first. However, in the meantime, as a teaser, perhaps you might be interested in playing a bit of statutory MadLibs…

Have you ever played MadLibs? It’s a paper-and-pencil game where someone asks the people in the room to supply a verb, noun, adverb, adjective, or body part, and the provided words are used to fill in the blanks in a story. The results are often absurd and sometimes hilarious.

The federal government’s proposal in Bill C-27 for an Artificial Intelligence and Data Act really lends itself to a game of statutory MadLibs. This is because some of the most important parts of the bill are effectively left blank – either the Minister or the Governor-in-Council is tasked in the Bill with filling out the details in regulations. Do you want to play? Grab a pencil, and here goes:

Company X is developing an AI system that will (insert definition of ‘high impact system’). It knows that this system is high impact because (insert how a company should assess impact). Company X has established measures to mitigate potential harms by (insert measures the company took to comply with the regulations) and has also recorded (insert records it kept), and published (insert information to be published).

Company X also had its system audited by an auditor who is (insert qualifications). Company X is being careful, because if it doesn’t comply with (insert a section of the Act for which non-compliance will count as a violation), it could be found to have committed a (insert degree of severity) violation. This could lead to (insert type of proceeding).

Company X, though, will be able to rely on (insert possible defence). However, if (insert possible defence) is unsuccessful, Company X may be liable to pay an Administrative Monetary Penalty if they are a (insert category of ‘person’) and if they have (insert factors to take into account). Ultimately, if they are unhappy with the outcome, they can launch a (insert a type of appeal proceeding).

Because of this regulatory scheme, Canadians can feel (insert emotion) at how their rights and interests are protected.

Bill C-27, which will amend Canada’s private sector data protection law, contains a right of erasure. In its basic form, this right allows individuals to ask an organization to dispose of the personal information it holds about them. It is sometimes referred to as the right to be forgotten, although the right to be forgotten has different dimensions that are not addressed in Bill C-27. Bill C-27’s predecessor, Bill C-11, had proposed a right of erasure in fairly guarded terms: individuals would be able to request the disposal only of information that the organization had obtained from the individual. This right would not have extended to information the organization had collected through other means – by acquiring that information from other organizations, scraping it from the internet, or even creating it through profiling algorithms. Section 55 of Bill C-27 (“disposal at individual’s request”) brings some interesting changes to this limitation. Significantly, it extends the right of erasure to the individual’s personal information that “is under the organization’s control”. Nevertheless, in doing so, it also adds some notable restrictions.

First, Bill C-27’s right of erasure will only apply in three circumstances. The first, set out in s. 55(1)(a), is where the information was collected, used or disclosed in contravention of the Act. Basically, if an organization had no right to have or use the personal data in the first place, it must dispose of the information at the request of the individual.

The second situation, set out in s. 55(1)(b), is where an individual has withdrawn their consent to the collection, use or disclosure of the information held by the organization. Perhaps a person agreed to allow an organization to collect certain data in addition to the data considered necessary to providing a particular product or service. If that person decides they no longer want the organization to collect this additional data, not only can they withdraw consent to its continued collection, they can exercise their right to erasure and have the already-collected data deleted.

Finally, s. 55(1)(c) allows an individual to request deletion of personal data where the information is no longer necessary for the continued provision of a product or service requested by the individual. If an individual ceases to do business with an organization, for example, and does not wish the organization to retain their personal information, they can request its deletion. Here, the expansion of the right to include all personal information under the organization’s control can be important. For example, if you terminate your contract with a streaming service, you could request deletion not just of the customer data you provided to them, and your viewing history, but also the organization’s inexplicable profile of you as someone who loves zombie movies.

Where an organization has acceded to a request for disposal of personal data, it is also obliged, under s. 55(4), to inform “any service provider” to which it has transferred the data to dispose of them. The organization is responsible for ensuring this takes place. Note, however, that the obligation is only to inform any service provider, defined in the bill as an entity that “provides services for or on behalf” of the organization to assist it in fulfilling its purposes. The obligation to notify does not extend to those to whom the data may have been sold.

There are, however, important exceptions to this expanded right of erasure. Subsection 55(2) would allow an organization to refuse to dispose of data under s. 55(1)(b) or (c) in circumstances where it is inseparable from the personal data of another person (for example, that embarrassing photo of you partying with others that someone else posted online); where other legal requirements oblige the organization to retain the information; or where the organization requires the data for a legal defence or legal remedy.

A few other exceptions are potentially more problematic. Paragraph 55(2)(d) creates an exception to the right of erasure where:

(d) the information is not in relation to a minor and the disposal of the information would have an undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service to the individual in question;

For example, this might apply in the case where an individual remains in a commercial relationship with an organization, but has withdrawn consent to a particular use or disclosure of their data and has requested its deletion. If the organization believes that deleting the information would adversely affect the integrity of the product or service they continue to provide to the individual, they can refuse deletion. It will be interesting to see how this plays out. There may be a difference of opinion about the impacts on the integrity of the product or service being supplied. If an individual finds an organization’s recommendation service based on past purchases or views to be largely useless, seeking deletion of data about their viewing history will not impact the integrity of the service from the individual’s point of view – but the organization might have a different opinion.

In Bill C-27, the government responded to criticisms that its predecessor, Bill C-11, did nothing to specifically deal with children’s privacy. Bill C-27 addresses the privacy of minors in specific instances, and the right of erasure is one of them. Interestingly, the right of erasure prevails under s. 55(2)(d) for minors, presumably even when the erasure would have an “undue adverse impact on the accuracy or integrity of information that is necessary to the ongoing provision of a product or service”. It seems that minors will get to choose between deletion and adverse impacts, while those over the age of majority will have to put up with retention and uses of their personal data to which they object.

Another exception to the right also applies only to those past the age of majority. Paragraph 55(2)(f) provides that an organization may refuse a request for disposal of personal information if:

(f) the information is not in relation to a minor and it is scheduled to be disposed of in accordance with the organization’s information retention policy, and the organization informs the individual of the remaining period of time for which the information will be retained.

What this means is that if an organization has a retention policy that conforms to s. 53 of Bill C-27 (one that provides for the destruction of personal information once it is no longer necessary for the purposes for which it was collected, used or disclosed), then it can refuse a request for erasure – unless, of course, it is a minor who requests erasure. In that case, the organization must act in advance of its normal disposal schedule. This provision was no doubt added to save organizations from the burden of having to constantly respond to requests for erasure of personal data. For large swathes of personal data, for example, they can prepare a standard response that informs a requestor of their retention policy and provides the timetable on which the data will be deleted once it is no longer necessary to fulfill the purposes for which it was collected. If this provision can also be relied upon when an individual ceases to do business with an organization and requests the deletion of their information, then the right of erasure in Bill C-27 will become effectively useless in the case of any company with a data retention policy. Except, of course, for minors.

Finally, organizations will be given the right to refuse to consider requests for deletion that are “vexatious or made in bad faith”. Let’s hit pause here. This exception is to protect commercial entities against data subjects. I understand that organizations do not want to be subject to mass campaigns for data deletion – or serial requests by individuals – that overwhelm them. That might happen. However, the standard form email that will be part of the ‘regular deletion schedule’ exception discussed above will largely suffice to address this problem. Organizations now have enormous abilities to collect massive amounts of personal data and to use these data for a wide variety of purposes. Many do this responsibly, but there are endless examples of overcollection, over-retention, excessive sharing, poor security, and outright abuses of personal data. The right of erasure is a new right for individuals to help them exercise greater control over their personal data in a context in which such data are often flagrantly misused. To limit this right based on what an organization considers vexatious is a demonstration of how the balance in Bill C-27 leans towards the free flow and use of personal data rather than the protection of privacy.

It is important to note that there is yet another limit on the right of erasure, which is found in Bill C-27’s definition of ‘dispose’. According to this definition, dispose means “to permanently and irreversibly delete personal information or to anonymize it”. Thus, an organization can choose to anonymize personal data, and once it has done so, the right of erasure is not available. (See my post on anonymized and de-identified data for what ‘anonymized’ means). Section 2(3) of Bill C-27 also removes the right of erasure where information is merely de-identified (pseudonymized). This seems like an internal contradiction in the legislation. Disposal means deletion or rigorous anonymization – but, under s. 2(3), a company can just pseudonymize to avoid a request for disposal. The difference seems to be that pseudonymized data may still eventually need to be disposed of under data retention limits, whereas anonymized data can be kept forever.

All told, as a right that is meant to give more control to individuals, the right of erasure in Bill C-27 is a bit of a bust. Although it allows an individual to ask an organization to delete data (and not just data that the individual provided), the right is countered by a great many bases on which the organization can avoid it. It’s a bit of a ‘Canadian compromise’ (one of the ones in which Canadians get compromised): individuals get a new right; organizations get to side-step it.

[Note: This is my third in a series of posts on the new Bill C-27 which will reform private sector data protection law in Canada and which will add a new Artificial Intelligence and Data Act. The previous two posts addressed consent and de-identification/anonymization.]

In 2018 a furore erupted over media reports that Statistics Canada (StatCan) sought to collect the financial data of half a million Canadians from Canadian banks to generate statistical data. Reports also revealed that it had already collected a substantial volume of personal financial data from credit agencies. The revelations led to complaints to the Privacy Commissioner, who carried out an investigation and issued an interim and a final report. One outcome was that StatCan worked with the Office of the Privacy Commissioner of Canada to develop a new approach to the collection of such data. Much more recently, there were expressions of public outrage when media reported that the Public Health Agency of Canada (PHAC) had acquired de-identified mobility data about Canadians from Telus in order to inform its response to the COVID-19 pandemic. This led to hearings before the ETHI Standing Committee of the House of Commons, and resulted in a report with a series of recommendations.

Both of these instances involved attempts by government institutions or agencies to make use of existing private sector data to enhance their analyses or decision-making. Good policy is built on good data; we should support and encourage the responsible use of data by government in its decision-making. At the same time, however, there is clearly a deep vein of public distrust in government – particularly when it comes to personal data – that cannot be ignored. Addressing this distrust requires both transparency and strong protection for privacy.

Bill C-27, introduced in Parliament in June 2022, proposes a new Consumer Privacy Protection Act to replace the aging Personal Information Protection and Electronic Documents Act (PIPEDA). As part of the reform, this private sector data protection bill contains provisions that are tailored to address the need of government – as well as the commercial data industry – to access personal data in the hands of the private sector.

Two provisions in C-27 are particularly relevant here: sections 35 and 39. Section 35 deals specifically with the sharing of private sector data for the purposes of statistics and research; s. 7(3)(f) of PIPEDA contains a similar exception. Section 39, which deals with the use of data for “socially beneficial purposes”, is entirely new. Both s. 35 and s. 39 were in the predecessor to C-27, Bill C-11. Only section 35 has been changed since C-11 – a small change that significantly broadens its scope.

Section 35 of Bill C-27 provides:

35 An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the disclosure is made for statistical purposes or for study or research purposes and those purposes cannot be achieved without disclosing the information;

(b) it is impracticable to obtain consent; and

(c) the organization informs the Commissioner of the disclosure before the information is disclosed.

This provision would enable the kind of data sharing by the private sector that was involved in the StatCan example mentioned above, and that was previously enabled by s. 7(3)(f) of PIPEDA. As is currently the case under PIPEDA, s. 35 would allow for the sharing of personal information without an individual’s knowledge or consent. It is important to note that there is no requirement that the personal information be de-identified or anonymized in any way (see my earlier post on de-identification and anonymization here). The remainder of s. 35 imposes the only limitations on such sharing. One of these relates to purpose. The sharing must be for “statistical purposes” (but note that StatCan is not the only organization that engages in statistical activities, and such sharing is not limited to StatCan). It can also be for “study or research purposes”. Bill C-11, like PIPEDA, had referred to “scholarly study or research purposes”. The removal of ‘scholarly’ substantially enlarges the scope of this provision (for example, market research and voter profile research would no doubt count). There is a further qualifier – the statistical, study, or research purposes have to be ones that “cannot be achieved without disclosing the information”. However, they do not have to be ‘socially beneficial’ (although there is an overarching provision in s. 5 that requires that the purposes for collecting, using or disclosing personal information be ones that a ‘reasonable person would consider appropriate in the circumstances’). Section 35(b) (as is the case under PIPEDA’s s. 7(3)(f)) also requires that it be impracticable to obtain consent. This is not really much of a barrier. If you want to use the data of half a million individuals, for example, it is really not practical to seek their consent. Finally, the organization must inform the Commissioner of the disclosure prior to it taking place. This provides a thin film of transparency.

Another nod and a wink to transparency is found in s. 62(2)(b), which requires organizations to provide a ‘general account’ of how they apply “the exceptions to the requirement to obtain an individual’s consent under this Act”.

Quebec’s Loi 25 also addresses the use of personal information in the hands of the private sector for statistical and research purposes without individual consent. Unlike Bill C-27, it contains more substantive guardrails:

21. A person carrying on an enterprise may communicate personal information without the consent of the persons concerned to a person or body wishing to use the information for study or research purposes or for the production of statistics.

The information may be communicated if a privacy impact assessment concludes that

(1) the objective of the study or research or of the production of statistics can be achieved only if the information is communicated in a form allowing the persons concerned to be identified;

(2) it is unreasonable to require the person or body to obtain the consent of the persons concerned;

(3) the objective of the study or research or of the production of statistics outweighs, with regard to the public interest, the impact of communicating and using the information on the privacy of the persons concerned;

(4) the personal information is used in such a manner as to ensure confidentiality; and

(5) only the necessary information is communicated.

The requirement of a privacy impact assessment (PIA) in Loi 25 is important, as is the condition that this assessment consider the goals of the research or statistical activity in relation to the public interest and to the impact on individuals. Loi 25 also contains important limitations on how much information is shared. Bill C-27 addresses none of these issues. At the very least, as is the case under Quebec law, there should be a requirement to conduct a PIA with similar considerations – and to share it with the Privacy Commissioner. Since this is data sharing without knowledge or consent, there could even be a requirement that the PIAs be made publicly available, with appropriate redactions if necessary.

Some might object that there is no need to incorporate these safeguards in the new private sector data protection law since those entities (such as StatCan) who receive the data have their own secure policies and practices in place to protect data. However, under s. 35 there is no restriction on who may receive data for statistical, study or research purposes, and no reason to assume that they have appropriate safeguards in place. If they do, then the PIA can reflect this.

Section 39 addresses the sharing of de-identified personal information for socially beneficial purposes. Presumably, this would be the provision under which, in the future, mobility data might be shared with an agency such as PHAC. Under s. 39:

39 (1) An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the personal information is de-identified before the disclosure is made;

(b) the disclosure is made to

(i) a government institution or part of a government institution in Canada,

(ii) a health care institution, post-secondary educational institution or public library in Canada,

(iii) any organization that is mandated, under a federal or provincial law or by contract with a government institution or part of a government institution in Canada, to carry out a socially beneficial purpose, or

(iv) any other prescribed entity; and

(c) the disclosure is made for a socially beneficial purpose.

(2) For the purpose of this section, socially beneficial purpose means a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

This provision requires that shared information must be de-identified, although as noted in my earlier post, de-identification in Bill C-27 no longer means what it did in C-11. The data shared may have only direct identifiers removed, leaving individuals easily identifiable. The disclosure must be for socially beneficial purposes, and it must be to a specified or prescribed entity. I commented on the identical provision in C-11 here, so I will not repeat those earlier concerns in detail; they remain unaddressed in Bill C-27. The most significant gap is the lack of a requirement for a data governance agreement to be in place between the parties, based upon the kinds of considerations that would be relevant in a privacy impact assessment.

Where the sharing is to be with a federal government institution, the Privacy Act should provide additional protection. However, the Privacy Act is itself an antediluvian statute that has long been in need of reform. It is worth noting that while the doors to data sharing are opened in Bill C-27, many of the necessary safeguards – at least where government is concerned – are left for another statute in the hands of another department, one that lies who-knows-where in the government’s legislative agenda (although rumours are that we might see a Bill this fall [Warning: holding your breath could be harmful to your health.]). In its report on the sharing of mobility data with PHAC, ETHI calls for much greater transparency about data use on the part of the Government of Canada, and also calls for enhanced consultation with the Privacy Commissioner prior to engaging in this form of data collection. Apart from the fact that these pieces will not be in place – if at all – until the Privacy Act is reformed, the exceptions in sections 35 and 39 of C-27 apply to organizations and institutions outside the federal government, and thus can involve institutions and entities not subject to the Privacy Act. Guardrails should be included in C-27 (as they are, for example, in Loi 25); yet they are absent.

As noted earlier, there are sound reasons to facilitate the use of personal data to aid in data-driven decision-making that serves the public interest. However, any such use must protect individual privacy. Beyond this, there is also a collective privacy dimension to the sharing of even anonymized human-derived data. This should also not be ignored. It requires greater transparency and public engagement, along with appropriate oversight by the Privacy Commissioner. Bill C-27 facilitates use without adequately protecting privacy – collective or individual. Given the already evident lack of trust in government, this seems either tone-deaf or deeply cynical.

This is the second post in a series on Bill C-27, a bill introduced in Parliament in June 2022 to reform Canada's private sector data protection law. The first post, on consent provisions, is found here.

In a data-driven economy, data protection laws are essential to protect privacy. In Canada, the proposed Consumer Privacy Protection Act in Bill C-27 will, if passed, replace the aging Personal Information Protection and Electronic Documents Act (PIPEDA) to govern the collection, use and disclosure of personal information by private sector organizations. Personal information is defined in Bill C-27 (as it was in PIPEDA) as “information about an identifiable individual”. The concept of identifiability of individuals from information has always been an important threshold issue for the application of the law. According to established case law, if an individual can be identified directly or indirectly from data, alone or in combination with other available data, then those data are personal information. Direct identification comes from the presence of unique identifiers that point to specific individuals (for example, a name or a social insurance number). Indirect identifiers are data that, if combined with other available data, can lead to the identification of individuals. To give a simple example, a postal code on its own is not a direct identifier of any particular individual, but in a data set with other data elements such as age and gender, a postal code can lead to the identification of a specific individual. In the context of that larger data set, the postal code can constitute personal information.
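The postal code example above can be made concrete with a small sketch. The following Python snippet uses an invented toy dataset (all values are hypothetical, not drawn from any real records) to show how a combination of indirect identifiers – none personally identifying on its own – can single out one individual, which is what makes them “personal information” in context:

```python
# Illustrative sketch (invented data): how indirect identifiers
# ("quasi-identifiers") can single out an individual when combined.
from collections import Counter

# Hypothetical dataset with direct identifiers (names, SINs) already removed.
records = [
    {"postal_code": "K1A 0A6", "age": 34, "gender": "F"},
    {"postal_code": "K1A 0A6", "age": 34, "gender": "M"},
    {"postal_code": "M5V 2T6", "age": 51, "gender": "F"},
    {"postal_code": "K1A 0A6", "age": 62, "gender": "F"},
]

# Count how many records share each quasi-identifier combination.
combos = Counter(
    (r["postal_code"], r["age"], r["gender"]) for r in records
)

# Any combination appearing only once points to exactly one person,
# so in this data set those fields together are identifying.
unique = [c for c, n in combos.items() if n == 1]
print(len(unique))  # every combination here is unique -> prints 4
```

The same logic underlies the case law’s “alone or in combination with other available data” test: identifiability is a property of the data set and its context, not of any single field.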

As the desire to access and use more data has grown in the private (and public) sector, the concepts of de-identification and anonymization have become increasingly important in dealing with personal data that have already been collected by organizations. The removal of both direct and indirect identifiers from personal data can protect privacy in significant ways. PIPEDA did not define ‘de-identify’, nor did it create particular rules around the use or disclosure of de-identified information. Bill C-11, the predecessor to C-27, addressed de-identified personal information, and contained the following definition:

de-identify means to modify personal information — or create information from personal information — by using technical processes to ensure that the information does not identify an individual or could not be used in reasonably foreseeable circumstances, alone or in combination with other information, to identify an individual

This definition was quite inclusive (information created from personal information, for example, would include synthetic data). Bill C-11 set a relative standard for de-identification – in other words, it accepted that de-identification was sufficient if the information could not be used to identify individuals “in reasonably foreseeable circumstances”. This was reinforced by s. 74 which required organizations that de-identified personal information to use measures that were proportionate to the sensitivity of the information and the way in which the information was to be used. De-identification did not have to be perfect – but it had to be sufficient for the context.

Bill C-11’s definition of de-identification was criticized by private sector organizations that wanted de-identified data to fall outside the scope of the Act. In other words, they sought either an exemption from the application of the law for de-identified personal information, or a separate category of “anonymized” data that would be exempt from the law. According to this view, if data cannot be linked to an identifiable individual, then they are not personal data and should not be subject to data protection law. For their part, privacy advocates were concerned about the very real re-identification risks, particularly in a context in which there is a near endless supply of data and vast computing power through which re-identification can take place. These concerns are supported by research (see also here and here). The former federal Privacy Commissioner recommended that it be made explicit that the legislation would apply to de-identified data.

The changes in Bill C-27 reflect the power of the industry lobby on this issue. Bill C-27 creates separate definitions for anonymized and de-identified data. These are:

anonymize means to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.

[. . .]

de-identify means to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains. [my emphasis]

Organizations will therefore be pleased that there is now a separate category of “anonymized” data, although such data must be irreversibly and permanently modified to ensure that individuals are not identifiable. This is harder than it sounds: even synthetic data, for example, carry some minimal risk of re-identification. An important concern, therefore, is whether the government is actually serious about this absolute standard, whether it will water it down by amendment before the bill is enacted, or whether it will let interpretation and argument around ‘generally accepted best practices’ soften it up. To ensure the integrity of this provision, the law should enable the Privacy Commissioner to play a clear role in determining what counts as anonymization.

Significantly, under Bill C-27, information that is ‘anonymized’ would be out of scope of the statute. This is made clear in a new s. 6(5) which provides that “this Act does not apply in respect of personal information that has been anonymized”. The argument to support this is that placing data that are truly anonymized out of scope of the legislation creates an incentive for industry to anonymize data, and anonymization (if irreversible and permanent) is highly privacy protective. Of course, similar incentives can be present if more tailored exceptions are created for anonymized data without it falling ‘out of scope’ of the law.

Emerging and evolving concepts of collective privacy take the view that there should be appropriate governance of the use of human-derived data, even if it has been anonymized. Another argument for keeping anonymized data in scope relates to the importance of oversight, given re-identification risks. Placing anonymized data outside the scope of data protection law is contrary to the recent recommendations of the ETHI Standing Committee of the House of Commons following its hearings into the use of de-identified private sector mobility data by the Public Health Agency of Canada. ETHI recommended that the federal laws be amended “to render these laws applicable to the collection, use, and disclosure of de-identified and aggregated data”. Aggregated data is generally considered to be data that has been anonymized. The trust issues referenced by ETHI when it comes to the use of de-identified data reinforce the growing importance of notions of collective privacy. It might therefore make sense to keep anonymized data within scope of the legislation (with appropriate exceptions to maintain incentives for anonymization) leaving room for governance of anonymization.

Bill C-27 also introduces a new definition of “de-identify”, which refers to modifying data so that individuals cannot be directly identified. Direct identification has come to mean identification through specific identifiers such as names, or assigned numbers. The new definition of ‘de-identify’ in C-27 suggests that simply removing direct identifiers will suffice to de-identify personal data (a form of what, in the GDPR, is referred to as pseudonymization). Thus, according to this definition, as long as direct identifiers are removed from a data set, an organization can use data without knowledge or consent in certain circumstances, even though specific individuals might still be identifiable from those data. While it will be argued that these circumstances are limited, the exception for sharing for ‘socially beneficial purposes’ is disturbingly broad given this weak definition (more to come on this in a future blog post). In addition, the government can add new exceptions to the list by regulation.
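What stripping only direct identifiers amounts to can be sketched in a few lines. This is an illustration of pseudonymization generally, not of any process the bill prescribes, and the field names and data are hypothetical:

```python
# Illustrative sketch (hypothetical field names and data): removing only
# direct identifiers, which is all C-27's definition of "de-identify"
# appears to require. The GDPR would call this pseudonymization.
import hashlib

def deidentify(record: dict) -> dict:
    """Strip direct identifiers; replace the name with a pseudonym."""
    out = dict(record)
    name = out.pop("name")   # direct identifier removed
    out.pop("sin", None)     # social insurance number removed
    # A pseudonym derived from the name. Anyone who holds a
    # name-to-pseudonym mapping (or can enumerate names) may reverse it.
    out["pseudonym"] = hashlib.sha256(name.encode()).hexdigest()[:8]
    return out

record = {"name": "Jane Doe", "sin": "123-456-789",
          "postal_code": "K1A 0A6", "age": 34, "gender": "F"}

deid = deidentify(record)
# Indirect identifiers (postal code, age, gender) survive, so
# re-identification risk remains -- exactly the risk the definition
# concedes when it says "a risk of the individual being identified remains".
print(sorted(deid))  # ['age', 'gender', 'postal_code', 'pseudonym']
```

The point of the sketch is that the quasi-identifiers left behind are precisely the fields that, as shown earlier, can single out an individual when combined.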

The reference in the definition of ‘de-identify’ only to direct identification is meant to be read alongside s. 74 of Bill C-27, which provides:

74 An organization that de-identifies personal information must ensure that any technical and administrative measures applied to the information are proportionate to the purpose for which the information is de-identified and the sensitivity of the personal information.

Section 74 remains unchanged from Bill C-11, where it made more sense, since it defined de-identification in terms of direct or indirect identifiers using a relative standard. In the context of the new definition of ‘de-identify’, it is jarring, since de-identification according to the new definition requires only the removal of direct identifiers. What this, perhaps, means is that although the definition of de-identify only requires removal of direct identifiers, actual de-identification might mean something else. This is not how definitions are supposed to work.

In adopting these new definitions, the federal government sought to align its terminology with that used in Quebec’s Loi 25 that reformed its public and private sector data protection laws. The Quebec law provides, in a new s. 23, that:

[. . .]

For the purposes of this Act, information concerning a natural person is anonymized if it is, at all times, reasonably foreseeable in the circumstances that it irreversibly no longer allows the person to be identified directly or indirectly.

Information anonymized under this Act must be anonymized according to generally accepted best practices and according to the criteria and terms determined by regulation.

Loi 25 also provides that data is de-identified (as opposed to anonymized) “if it no longer allows the person concerned to be directly identified”. At first glance, it seems that Bill C-27 has adopted similar definitions – but there are differences. First, the definition of anonymization in Loi 25 uses a relative standard (not an absolute one as in C-27). It also makes specific reference not just to generally accepted best practices, but to criteria and terms to be set out in regulation, whereas in setting standards for anonymization, C-27 refers only to “generally accepted best practices”. [Note that in its recommendations following its hearings into the use of de-identified private sector mobility data by the Public Health Agency of Canada, the ETHI Committee of Parliament recommended that federal data protection laws should include “a standard for de-identification of data or the ability for the Privacy Commissioner to certify a code of practice in this regard.”]

Second, and most importantly, in the Quebec law, anonymized data does not fall outside the scope of the legislation –instead, a relative standard is used to provide some flexibility while still protecting privacy. Anonymized data are still subject to governance under the law, even though the scope of that governance is limited. Further, under the Quebec law, recognizing that the definition of de-identification is closer to pseudonymization, the uses of de-identified data are more restricted than they are in Bill C-27.

Further, in an eye-glazing bit of drafting, s. 2(3) of Bill C-27 provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

This is a way of saying that de-identified personal information remains within the scope of the Act except where it does not. Yet data that have had only direct identifiers stripped from them should always be considered personal information, since the re-identification risk, as noted above, could be very high. What s. 2(3) does is allow de-identified data to be treated as anonymized (out of scope) in some circumstances.

For example, s. 21 allows organizations to use ‘de-identified’ personal information for internal research purposes without knowledge or consent. Section 2(3) amplifies this by providing that such information is not considered personal information. As a result, presumably, other provisions in Bill C-27 would not apply. This might include data breach notification requirements – yet if information is only pseudonymized and there is a breach, it is not clear why such provisions should not apply. Pseudonymization might provide some protection to those affected by a breach, although it is also possible that the key was part of the breach, or that individuals remain re-identifiable in the data. The regulator should have jurisdiction.

Subsection 22(1) allows for the use and even the disclosure of de-identified personal information between parties to a prospective business transaction. In this context, the de-identified information is not considered personal information (according to s. 2(3)), and so the only safeguards are those set out in s. 22(1) itself. Bizarrely, s. 22(1) makes reference to the sensitivity of the information – requiring safeguards appropriate to its sensitivity – even though it is apparently not considered personal information.

De-identified (not anonymized) personal information can also be shared without knowledge or consent for socially beneficial purposes under s. 39(1). (I have a blog post coming on this provision, so I will say no more about it here, other than to note that, given the definition of ‘de-identify’, such sharing seems rash and the safeguards provided are inadequate.) Section 55 provides for a right of erasure of personal information; since information stripped of direct identifiers is not personal information for the purposes of section 55 (according to s. 2(3)), this constitutes an important limitation on the right of erasure. If data are only pseudonymized, and the organization retains the key, then why is there no right of erasure? Section 56 addresses the accuracy of personal information; personal information de-identified according to the definition in C-27 would also be exempted from this requirement.

In adopting the definitions of ‘anonymize’ and ‘de-identify’, the federal government meets a number of public policy objectives. It enhances the ability of organizations to make use of data. It also better aligns the federal law with Quebec’s law (at least at the definitional level). The definitions may also create scope for other privacy protective technologies such as pseudonymization (which is what the definition of de-identify in C-27 probably really refers to) or different types of encryption. But the approach it has adopted creates the potential for confusion, for risks to privacy, and for swathes of human-derived data to fall ‘outside the scope’ of data protection law. The government view may be that, once you stir all of Bill C-27’s provisions into the pot, and add a healthy dose of “trust us”, the definition of “de-identify” and its exceptions are not as problematic as they are at first glance. Yet this seems like a peculiar way to draft legislation. The definition should say what it is supposed to say, rather than have its defects mitigated by a smattering of other provisions in the law and by faith in the goodness of others. In any event, the exceptions still lean towards facilitating data use rather than protecting privacy.

In a nutshell, C-27 has downgraded the definition of de-identification from C-11. It has completely excluded anonymized data from the scope of the Act, but has provided little or no guidance beyond “generally accepted best practices” to address anonymization. If an organization claims that its data are anonymized and therefore outside of the scope of the legislation, it will be an uphill battle to get past the threshold issue of anonymization in order to have a complaint considered under what would be the new law. The organization can simply dig in and challenge the jurisdiction of the Commissioner to investigate the complaint.

All personal data, whether anonymized or ‘de-identified’ should remain within the scope of the legislation. Specific exceptions can be provided where necessary. Exceptions in the legislation for the uses of de-identified information without knowledge or consent must be carefully constrained and reinforced with safeguards. Further, the regulator should play a role in establishing standards for anonymization and de-identification. This may involve consultation and collaboration with standards-setting bodies, but references in the legislation must be to more than just “generally accepted best practices”.
