Teresa Scassa - Blog


Canada’s Privacy Commissioner has released a set of findings that recognize a right to be forgotten (RTBF) under the Personal Information Protection and Electronic Documents Act (PIPEDA). The complainant’s long legal journey began in 2017, when they complained that a search of their name in Google’s search engine returned news articles from many years earlier regarding an arrest and criminal charges for engaging in sexual activity without disclosing their HIV-positive status. Although these reports were accurate at the time they were published, the charges were stayed shortly afterwards because the complainant posed no danger to public health. Charging guidelines for the offence in question indicated that no charges should be laid where there is no realistic possibility that HIV could be transmitted. The search results contain none of that information. Instead, they publicly disclose the complainant’s HIV status and create the impression that their conduct was criminal in nature. As a result of the linking of their name to these search results, the complainant experienced – and continues to experience – negative consequences including social stigma, loss of career opportunities and even physical violence.

Google’s initial response to the complaint was to challenge the jurisdiction of the Privacy Commissioner to investigate the matter under PIPEDA, arguing that PIPEDA did not apply to its search engine functions. The Commissioner referred this issue to the Federal Court, which found that PIPEDA applied. Google appealed that decision to the Federal Court of Appeal, without success. When the matter was not appealed further to the Supreme Court of Canada, the Commissioner began his investigation, which resulted in the current findings. Google has indicated that it will not comply with the Commissioner’s recommendation to delist the articles so that they do not appear in a search using the complainant’s name. This means that an application to the Federal Court for a binding order is likely. The matter is therefore not yet resolved.

This post considers three issues. The first relates to the nature and scope of the RTBF in PIPEDA, as found by the Commissioner. The second relates to the Commissioner’s woeful lack of authority when it comes to the enforcement of PIPEDA. Law reform is needed to address this, yet Bill C-27, which would have given greater enforcement powers to the Commissioner, died on the order paper. The government’s intentions with respect to future reform and its timing remain unclear. The third point also addresses PIPEDA reform. I consider the somewhat fragile footing for the Commissioner’s version of the RTBF given how Bill C-27 had proposed to rework PIPEDA’s normative core.

The Right to be Forgotten (RTBF) and PIPEDA

In his findings, the Commissioner grounds the RTBF in an interpretation of s. 5(3) of PIPEDA:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

This is a core normative provision in PIPEDA. For example, although organizations may collect personal information with the consent of the individual, they cannot do so if the collection is for purposes that a reasonable person would not consider appropriate in the circumstances. This provision (or at least one very similar to it in Alberta’s Personal Information Protection Act) was recently found to place important limits on the scraping of photographs from the public internet by Clearview AI to create a massive facial recognition technology (FRT) database. Essentially, even though the court found that photographs posted on the internet were publicly available and could be collected and used without consent, they could not be collected and used to create an FRT database because this was not a purpose a reasonable person would consider appropriate in the circumstances.

The RTBF would function in much the same way when it comes to the operations of platform search engines. Those search engines – such as Google’s – collect, use and disclose information found on the public internet when they return search results to users in response to queries. When a search involves an individual, the results may direct users to personal information about that individual. That is acceptable – as long as the information is being collected, used and disclosed for purposes a reasonable person would consider appropriate in the circumstances. In the case of the RTBF, according to the Commissioner, the threshold will be crossed when the privacy harms caused by the disclosure of the personal information in the search results outweigh the public interest in having that information shared through the search function. In order to make that calculation, the Commissioner articulates a set of criteria that can be applied on a case-by-case basis. The criteria include:

a. Whether the individual is a public figure (e.g. a public office holder, a politician, a prominent business person, etc.);

b. Whether the information relates to an individual’s working or professional life as opposed to their private life;

c. Whether the information relates to an adult as opposed to a minor;

d. Whether the information relates to a criminal charge that has resulted in a conviction or where the charges were stayed due to delays in the criminal proceedings;

e. Whether the information is accurate and up to date;

f. Whether the ability to link the information with the individual is relevant and necessary to the public consideration of a matter under current controversy or debate;

g. The length of time that has elapsed since the publication of the information and the request for de-listing. (at para 109)

In this case, the facts were quite compelling, and the Commissioner had no difficulty finding that the information at issue caused great harm to the complainant while providing no real public benefit. This led to the de-listing recommendation – which would mean that a search for the complainant’s name would no longer turn up the harmful and misleading articles – although the content itself would remain on the web and could be arrived at using other search criteria.

The Privacy Commissioner’s ‘Powers’

Unlike his counterparts in other jurisdictions, including the UK, EU member countries, and Quebec, Canada’s Privacy Commissioner lacks suitable enforcement powers. PIPEDA was Canada’s first federal data protection law, and it was designed to gently nudge organizations into compliance. Many organizations do their best to comply proactively, and the vast majority of complaints are resolved prior to investigation. Where an investigation finds a breach of PIPEDA, the findings contain recommendations to bring the organization into compliance, and in many cases organizations voluntarily comply with those recommendations. The legislation works – up to a point.

The problem is that the data economy has evolved dramatically since PIPEDA’s enactment. There is a great deal of money to be made from business models that extract large volumes of data and monetize them in ways that are beyond the comprehension of individuals, who have little choice but to consent to obscure practices laid out in complex privacy policies in order to receive services. Where complaint investigations result in recommendations that run up against these extractive business models, the response is increasingly to disregard the recommendations. Although a complainant or the Commissioner may still apply to the Federal Court for an order, the statutory process set out in PIPEDA requires the Federal Court to hold a hearing de novo. In other words, notwithstanding the outcome of the investigation, the court hears both sides and draws its own conclusions. The Commissioner, despite his expertise, is owed no deference.

In the proposed Consumer Privacy Protection Act (CPPA) that was part of the now defunct Bill C-27, the Commissioner was poised to receive some important new powers, including order-making powers and the ability to recommend the imposition of steep administrative monetary penalties. Admittedly, these new powers came with some clunky constraints that would have put the Commissioner on training wheels in the privacy peloton of his international counterparts. Still, it was a big step beyond the current process of having to ask the Federal Court to redo his work and reach its own conclusions.

Bill C-27, however, died on the order paper with the last federal election. The current government is likely in the process of pep-talking itself into reintroducing a PIPEDA reform bill, but as yet there is no clear timeline for action. Until a new bill is passed, the Commissioner is going to have to make do with his current woefully inadequate enforcement tools.

The Dangers of PIPEDA Reform

Assuming that a PIPEDA reform bill will contain enforcement powers better adapted to a data-driven economy, one might be forgiven for thinking that PIPEDA reform will support the nascent RTBF in Canada (assuming that the Federal Court agrees with the Commissioner’s approach). The problem, however, is that PIPEDA reform could also hold some uncomfortable surprises. Indeed, this RTBF case offers a good illustration of how tinkering with PIPEDA may unsettle current interpretations of the law – and might do so at the expense of privacy rights.

As noted above, the Commissioner grounded the RTBF on the strong and simple principle at the core of PIPEDA and expressed in s. 5(3), which I repeat here for convenience:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

The Federal Court of Appeal has told us that this is a normative standard – in other words, the fact that millions of otherwise reasonable people may have consented to certain terms of service does not on its own make those terms something that a reasonable person would consider appropriate in the circumstances. The terms might be unduly exploitative but leave individuals with little or no choice. The reasonableness inquiry sets a standard for the level of privacy protection an individual should be entitled to in a given set of circumstances.

Notably, Bill C-27 sought to disrupt the simplicity of s. 5(3), replacing it with the following:

12 (1) An organization may collect, use or disclose personal information only in a manner and for purposes that a reasonable person would consider appropriate in the circumstances, whether or not consent is required under this Act.

(2) The following factors must be taken into account in determining whether the manner and purposes referred to in subsection (1) are appropriate:

(a) the sensitivity of the personal information;

(b) whether the purposes represent legitimate business needs of the organization;

(c) the effectiveness of the collection, use or disclosure in meeting the organization’s legitimate business needs;

(d) whether there are less intrusive means of achieving those purposes at a comparable cost and with comparable benefits; and

(e) whether the individual’s loss of privacy is proportionate to the benefits in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual.

Although s. 12(1) is not so different from s. 5(3), the government saw fit to add a set of criteria in s. 12(2) that would shape any analysis in a way that leans the decision-maker towards accommodating the business needs of the organization over the privacy rights of the individual. Paragraphs 12(2)(b) and (c) explicitly require the decision-maker to think about the legitimate business needs of the organization and the effectiveness of the particular collection, use or disclosure in meeting those needs. In an RTBF case, this might mean thinking about how indexing the web and returning search results meets the legitimate business needs of a search engine company and does so effectively. Paragraph 12(2)(d) then asks whether there are “less intrusive means of achieving those purposes at a comparable cost and with comparable benefits”. This too focuses on the organization. Not only is this criterion heavily weighted in favour of business in terms of its substance – less intrusive means should be of comparable cost – but the issues it raises are ones about which an individual challenging the practice would have great difficulty producing evidence. While the Commissioner has greater resources, these are still limited. The fifth criterion returns us to the issue of privacy, but it asks whether “the individual’s loss of privacy is proportionate to the benefits [to the organization] in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual”. The criteria in s. 12(2) fall over themselves to nudge a decision-maker towards finding privacy-invasive practices to be “for purposes that a reasonable person would consider appropriate in the circumstances” – not because a reasonable person would find them appropriate in light of the human right to privacy, but because an organization has a commercial need for the data and has fiddled about a bit to attempt to mitigate the worst of the impacts. Privacy essentially becomes what the business model will allow – the reasonable person is now an accountant.

It is also worth noting that by the time a reform bill is reintroduced (and, if we dare to imagine it, actually passed), the Federal Court may have weighed in on the RTBF under PIPEDA, putting us another step closer to clarifying whether there is a RTBF in Canada’s private sector privacy law. Assuming that the Federal Court largely agrees with the Commissioner and his approach, if something like s. 12 of the CPPA becomes part of a new law, the criteria developed by the Commissioner for the reasonableness assessment in RTBF cases will be supplanted by the rather ugly list in s. 12(2). Not only will this cast doubt on the continuing existence of a RTBF, it may well doom it. And this is not the only established interpretation or approach that would be unsettled by such a change.

The Commissioner’s findings in the RTBF investigation demonstrate the flexibility and simplicity of s. 5(3). When a PIPEDA reform bill returns to Parliament, let us hope that s. 12(2) is no longer part of it.

 

Published in Privacy

Ontario is currently holding public hearings on a new bill which, among other things, introduces a provision regarding the use of AI in hiring. Submissions can be made until February 13, 2024. Below is a copy of my submission addressing this provision.

 

The following is my written submission on section 8.4 of Bill 149, titled the Working for Workers Four Act, introduced in the last quarter of 2023. I am a law professor at the University of Ottawa. I am making this submission in my individual capacity.

Artificial intelligence (AI) tools are increasingly common in the employment context. Such tools are used in recruitment and hiring, as well as in performance monitoring and assessment. Section 8.4 would amend the Employment Standards Act to include a requirement for employers to provide notice of the use of artificial intelligence in the screening, assessment, or selection of applicants for a publicly advertised job position. It does not address the use of AI in other employment contexts. This brief identifies several weaknesses in the proposal and makes recommendations to strengthen it. In essence, notice of the use of AI in the hiring process will not offer much to job applicants without a right to an explanation and ideally a right to bring any concerns to the attention of a designated person. Employees should also have similar rights when AI is used in performance assessment and evaluation.

1. Definitions and exclusions

If passed, Bill 149 would (among other things) enact the first provision in Ontario to directly address AI. The proposed section 8.4 states:

8.4 (1) Every employer who advertises a publicly advertised job posting and who uses artificial intelligence to screen, assess or select applicants for the position shall include in the posting a statement disclosing the use of the artificial intelligence.

(2) Subsection (1) does not apply to a publicly advertised job posting that meets such criteria as may be prescribed.

The term “artificial intelligence” is not defined in the bill. Rather, s. 8.1 of Bill 149 leaves the definition to be articulated in regulations. This likely reflects concerns that the definition of AI will need to keep pace with rapidly changing technology and is therefore best left to more adaptable regulations. The definition is not the only thing left to regulations. Section 8.4(2) contemplates regulations that will prescribe the criteria under which publicly advertised job postings are exempted from the disclosure requirement in s. 8.4(1). The true scope and impact of s. 8.4(1) will therefore not be clear until these criteria are prescribed in regulations. Further, s. 8.4 will not take effect until the regulations are in place.

2. The Notice Requirement

The details of the nature and content of the notice that an employer must provide are not set out in s. 8.4, nor are they left to regulations. Since there are no statutory or regulatory requirements, presumably notice can be as simple as “we use artificial intelligence in our screening and selection process”. It would be preferable if notice had to at least specify the stage of the process and the nature of the technique used.

Section 8.4 is reminiscent of the 2022 amendments to the Employment Standards Act, which required employers with 25 or more employees to provide their employees with notification of any electronic monitoring taking place in the workplace. As with s. 8.4(1), above, the main contribution of that provision was (at least in theory) enhanced transparency. However, the law did not provide for any oversight or complaints mechanism. Section 8.4(1) is similarly weak. If a job posting contains no notice of the use of AI in the hiring process, then either the employer is not using AI in recruitment and hiring, or it is failing to disclose that use. Who will know, and how? A company found non-compliant with the notice requirement, once it is part of the Employment Standards Act, could face a fine under s. 132. However, proceedings by way of an offence are a rather blunt regulatory tool.

3. A Right to an Explanation?

Section 8.4(1) does not provide job applicants with any specific recourse if they apply for a job for which AI is used in the selection process and they have concerns about the fairness or appropriateness of the tool used. One such recourse could be a right to demand an explanation.

The Consumer Privacy Protection Act (CPPA), which is part of the federal government’s Bill C-27, currently before Parliament, provides a right to an explanation to those about whom an automated decision, prediction or recommendation is made. Subsections 63(3) and (4) provide:

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual that could have a significant impact on them, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision.

(4) The explanation must indicate the type of personal information that was used to make the prediction, recommendation or decision, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.

Subsections 63(3) and (4) are fairly basic. For example, they do not include a right of review of the decision by a human. But something like this would still be a starting point for a person seeking information about the process by which their employment application was screened or evaluated. The right to an explanation in the CPPA will extend to decisions, recommendations and predictions made with respect to employees of federal works, undertakings, and businesses. However, it will not apply to the use of AI systems in provincially regulated employment sectors. Without a private sector data protection law of its own – or without a right to an explanation to accompany the proposed s. 8.4 – provincially regulated employees in Ontario will be out of luck.

In contrast, Quebec’s recent amendments to its private sector data protection law provide for a more extensive right to an explanation in the case of automated decision-making – and one that applies to the employment and hiring context. Section 12.1 provides:

12.1. Any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must inform the person concerned accordingly not later than at the time it informs the person of the decision.

He must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

The person concerned must be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Section 12.1 thus combines a notice requirement with, at the request of the individual, a right to an explanation. In addition, the affected individual can “submit observations” to an appropriate person within the organization who “is in a position to review the decision”. This right to an explanation is triggered only by decisions that are based exclusively on automated processing of personal information – and the scope of the right to an explanation is relatively narrow. However, it still goes well beyond Ontario’s Bill 149, which creates a transparency requirement with nothing further.

4. Scope

Bill 149 applies to the use of “artificial intelligence to screen, assess or select applicants”. Bill C-27 and Quebec’s law, both referenced above, are focused on “automated decision-making”. Although automated decision-making is generally considered a form of AI (it is defined in C-27 as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”), it is possible that, in an era of generative AI technologies, the wording chosen for Bill 149 is more inclusive. In other words, there may be uses of AI that are not decision-making, predicting or recommending, but that can still be used in screening, assessing or hiring processes. However, it should be noted that Ontario’s Bill 149 is also less inclusive than Bill C-27 or Quebec’s law because it focuses only on screening, assessing or selecting applicants for a position. It does not apply to the use of AI tools to monitor, evaluate or assess the performance of existing employees or to make decisions regarding promotion, compensation, retention, or other employment issues – something which would be covered by Quebec’s law (and by Bill C-27 for employees in federally regulated employment). Although arguably the requirements regarding electronic workplace monitoring added to the Employment Standards Act in 2022 might provide transparency about the existence of electronic forms of surveillance (which could include those used to feed data to AI systems), these transparency obligations apply only in workplaces with 25 or more employees, and there are no employee rights linked to the use of these data in automated or AI-enabled decision-making systems.

5. Discriminatory Bias

A very significant concern with the use of AI systems for decision-making about humans is the potential for discriminatory bias in the output of these systems. This is largely because such systems are trained on existing and historical data. Where those data are affected by past discriminatory practices (for example, a tendency to hire men rather than women, or white, able-bodied, heterosexual people over those from equity-deserving communities), there is a risk that automated processes will replicate and exacerbate these biases. Transparency about the use of an AI tool is not much help in such a context – particularly if there is no accompanying right to an explanation. Of course, human rights legislation applies to the employment context, and it will still be open to an employee who believes they have been discriminated against to bring a complaint to the Ontario Human Rights Commission. However, without a right to an explanation, and in the face of proprietary and closed systems, proving discrimination may be challenging and may require considerable resources and expertise. Addressing algorithmic discrimination may also require changes to human rights legislation. Without such changes in place, and without adequate resourcing to support the OHRC’s work to address algorithmic bias, recourse under human rights legislation may be extremely challenging.

 

6. Conclusion and Recommendations

This exploration of Bill 149’s transparency requirements regarding the use of AI in the hiring process in Ontario reveals the limited scope of the proposal. Its dependence on regulations in order to take effect has the potential to considerably delay its implementation. It provides for notice but not for a right to an explanation or for human review of AI decisions. There is also a need to make better use of existing regulators (particularly privacy and human rights commissions). The use of AI in recruitment (or in the workplace more generally in Ontario) may require more than just tweaks to the Employment Standards Act; it may also demand amendments to Ontario’s Human Rights Code and perhaps even privacy legislation aimed, at the very least, at the employment sector in Ontario.

Recommendations:

1. Redraft the provision so that the core obligations take effect without the need for regulations, or ensure that the necessary regulations to give effect to this provision are put in place promptly.

2. Amend s. 8.4(1) to either include the elements that are required in any notice of the use of an AI system or provide for the inclusion of such criteria in regulations (so long as doing so does not further delay the coming into effect of the provision).

3. Provide for a right to an explanation to accompany s. 8.4(1). An alternative to this would be a broader right to an explanation in provincial private sector legislation or in privacy legislation for employees in provincially regulated sectors in Ontario, but this would be much slower than the inclusion of a basic right to an explanation in s. 8.4. The right to an explanation could also include a right to submit observations to a person in a position to review any decision or outcome.

4. Extend the notice requirement to other uses of AI to assess, evaluate and monitor the performance of employees in provincially regulated workplaces in Ontario. Ideally, a right to an explanation should also be provided in this context.

5. Ensure that individuals who are concerned that they have been discriminated against by the use of AI systems in recruitment (as well as employees who have similar concerns regarding the use of AI in performance evaluation and assessment) have adequate and appropriate recourse under Ontario’s Human Rights Code, and that the Ontario Human Rights Commission is adequately resourced to address these concerns.

Published in Privacy
