Teresa Scassa - Blog


Artificial intelligence (AI) is already being used to assist government decision-making, although we have little case law that explores issues of procedural fairness when it comes to automated decision systems. This is why a recent decision of the Federal Court is interesting. In Barre v. Canada (Citizenship and Immigration) two women sought judicial review of a decision of the Refugee Protection Division (RPD) which had stripped them of their refugee status. They raised procedural fairness issues regarding the possible reliance upon an AI tool – in this case facial recognition technology (FRT). The case allows us to consider some procedural fairness guideposts that may be useful where evidence derived from AI-enabled tools is advanced.

The Decision of the Refugee Protection Division

The applicants, Ms Barre and Ms Hosh, had been granted refugee status after advancing claims related to their fear of sectarian and gender-based violence in their native Somalia. The Minister of Public Safety and Emergency Preparedness (the Minister) later applied under s. 109 of the Immigration and Refugee Protection Act to have that decision vacated on the basis that it was “obtained as a result of directly or indirectly misrepresenting or withholding material facts relating to a relevant matter”.

The Minister had provided the RPD with photos that compared Ms Barre and Ms Hosh (the applicants) with two Kenyan women who had been admitted to Canada on student visas shortly before Ms Barre and Ms Hosh filed their refugee claims (the claims were accepted in 2017). The applicants argued that the photo comparisons relied upon by the Minister had been made using Clearview AI’s facial recognition service built upon scraped images from social media and other public websites. The Minister objected to arguments and evidence about Clearview AI, maintaining that there was no proof that this service had been used. Clearview AI had ceased providing services in Canada on 6 July 2020, and the RPD accepted the Minister’s argument that it had not been used, finding that “[a]n App that is banned to operate in Canada would certainly not be used by a law enforcement agency such as the CBSA” (at para 7). The Minister had also argued that it did not have to disclose how it arrived at the photo comparisons because of s. 22 of the Privacy Act, and the RPD accepted this assertion.

The photo comparisons were given significant weight in the RPD’s decision to overturn the applicants’ refugee status. The RPD found that there were “great similarities” between the photos of the Kenyan students and the applicants, and concluded that they were the same persons. The RPD also considered notes in the Global Case Management System to the effect that the Kenyan students did not attend classes at the school where they were enrolled. In addition, the CBSA submitted affidavits indicating that there was no evidence that the applicants had entered Canada under their own names. The RPD concluded that the applicants were Kenyan citizens who had misrepresented their identity in the refugee proceedings. It found that these factual misrepresentations called into question the credibility of their allegations of persecution. It also found that, since they were Kenyan, they had not advanced claims against their country of nationality in the refugee proceedings, as required by law. The applicants sought judicial review of the decision to revoke their refugee status, arguing that it was unreasonable and breached their rights to procedural fairness.

Judicial Review

Justice Go of the Federal Court ruled that the decision was unreasonable for a number of reasons. A first error was allowing the introduction of the photo comparisons into evidence “without requiring the Minister to disclose the methodology used in procuring the evidence” (at para 31). The Minister had invoked s. 22 of the Privacy Act, but Justice Go noted that there were many flaws with the Minister’s reliance on s. 22. Section 22 is an exception to an individual’s right of access to their personal information. Justice Go noted that the applicants were not seeking access to their personal information; rather, they were making a procedural fairness argument about the photo comparisons relied upon by the Minister and sought information about how the comparisons had been made. Section 22(2), which was specifically relied upon by the Minister, allows a request for disclosure of personal information to be refused on the basis that it was “obtained or prepared by the Royal Canadian Mounted Police while performing policing services for a province or municipality…”, and this circumstance simply was not relevant.

Section 22(1)(b), which was not specifically argued by the Minister, allows for a refusal to disclose personal information where to do so “could reasonably be expected to be injurious to the enforcement of any law of Canada or a province or the conduct of lawful investigations…” Justice Go noted that case law establishes that a court will not support such a refusal on the basis that, because there is an investigation, harm from disclosure can be presumed. Instead, the head of an institution must demonstrate a “nexus between the requested disclosure and a reasonable expectation of probable harm” (at para 35, citing Canadian Association of Elizabeth Fry Societies v. Canada). Exceptions to access rights must be given a narrow interpretation, and the burden of demonstrating that a refusal to disclose is justifiable lies with the head of the government institution. Justice Go also noted that “the Privacy Act does not operate ‘so as to limit access to information to which an individual might be entitled as a result of other legal rules or principles’” (at para 42) such as, in this case, the principles of procedural fairness.

Justice Go found that the RPD erred by not clarifying what ‘personal information’ the Minister sought to protect, and by not assessing the basis for the Minister’s s. 22 arguments. She also noted that the RPD had accepted the Minister’s bald assertions that the CBSA did not rely on Clearview AI. Even if the company had ceased offering its services in Canada by July 6, 2020, there was no evidence regarding the date on which the photo comparisons had been made. Justice Go noted that the RPD failed to consider submissions by the applicants regarding findings about Clearview AI and its activities by the privacy commissioners of Canada, BC, Alberta and Quebec, as well as on the “danger of relying on facial recognition software” (at para 46).

The Minister argued that even if its s. 22 arguments were misguided, it could still rely upon evidentiary privileges to protect the details of its investigation. Justice Go noted that this was irrelevant in assessing the reasonableness of the RPD’s decision, since such arguments had not been made before or considered by the RPD. She also observed that when parties seek to exempt information from disclosure in a hearing, they are often required at least to provide it to the decision-maker to assess. In this case the RPD did not ask for or assess information on how the investigation had been conducted before deciding that information about it should not be disclosed. She noted that: “The RPD’s swift acceptance of the Minister’s exemption request, in the absence of a cogent explanation for why the information is protected from disclosure, appears to be a departure from its general practice” (at para 55).

Justice Go also observed that information about how the photo comparisons were made could well have been relevant to the issues to be determined by the RPD. If the comparisons were generated through the use of FRT – whether using Clearview AI or the services of another company – “it may call into question the reliability of the Kenyan students’ photos as representing the Applicants, two women of colour who are more likely to be misidentified by facial recognition software than their white cohorts as noted by the studies submitted by the Applicants” (at para 56). No matter how the comparisons were made – whether by a person or by FRT – some evidence should have been provided to explain the technique. Justice Go found it unreasonable for the RPD to conclude that the evidence was reliable simply based upon the Minister’s assertions.
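
It is worth pausing on why the method matters. The sketch below is a minimal, purely hypothetical illustration of how most automated face matching works: two photos are reduced to numeric “embeddings”, and a “match” is simply a similarity score above a threshold chosen by the operator. Nothing about the Minister’s actual method was disclosed, and the function names, vectors and thresholds here are my own assumptions; the point is only that whether two photos “match” depends on technical choices and error rates – which the studies cited by the applicants suggest are worse for women of colour – that were never put before the RPD.

```python
# Hypothetical illustration only: the Minister's actual comparison method was
# never disclosed. Most automated face matching reduces to a similarity score
# between numeric "embeddings" of two photos, tested against a threshold.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (numeric vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def declare_match(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float) -> bool:
    """A 'match' is just a score above an operator-chosen threshold.
    Lower thresholds produce more false matches, and false-match rates are
    not uniform across demographic groups."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Made-up vectors standing in for embeddings of two photographs:
photo_1 = np.array([0.12, 0.80, 0.35, 0.44])
photo_2 = np.array([0.10, 0.75, 0.40, 0.47])

print(cosine_similarity(photo_1, photo_2))       # ~0.997 for these vectors
print(declare_match(photo_1, photo_2, 0.95))     # True  - 'same person' at this threshold
print(declare_match(photo_1, photo_2, 0.999))    # False - not a match at a stricter one
```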

Justice Go also found that the RPD’s conclusion that the applicants were, in fact, the two Kenyan women, was unreasonable. Among other things, she found that the decision “failed to provide adequate reasons for the RPD’s conclusion that the two Applicants and the two Kenyan students were the same persons based on the photo comparisons” (at para 69). She noted that although the RPD referenced ‘great similarities’ between the women in the two sets of photographs, there were also some marked dissimilarities which were not addressed. There simply was no adequate explanation as to how the conclusion was reached that the applicants were the Kenyan students.

The decision of the RPD was quashed and remitted to be reconsidered by a differently constituted panel of the RPD.

Ultimately, Justice Go sends a clear message that the Minister cannot simply advance photo comparison evidence without providing an explanation for how that evidence was derived. At the very least, then, there is an obligation to indicate whether an AI technology was used in the decision-making process. Even if there is some legal basis for shielding the details of the Minister’s methods of investigation, there may still need to be some disclosure to the decision-maker regarding the methods used. Justice Go’s decision is also a rebuke of the RPD, which accepted the Minister’s evidence on faith and asked no questions about its methodology or probative value. In her decision, Justice Go takes serious note of concerns about accuracy and bias in the use of FRT, particularly with racialized individuals, and it is clear that these concerns heighten the need for transparency. The decision is important for setting some basic standards to meet when it comes to reviewing evidence that may have been derived using AI. It is also a sobering reminder that those checks and balances failed at first instance – and in a high stakes context.

Published in Privacy

This is the third in my series of posts on the Artificial Intelligence and Data Act (AIDA) found in Bill C-27, which is part of a longer series on Bill C-27 generally. Earlier posts on the AIDA have considered its purpose and application, and regulated activities. This post looks at the harms that the AIDA is designed to address.

The proposed Artificial Intelligence and Data Act (AIDA), which is the third part of Bill C-27, sets out to regulate ‘high-impact’ AI systems. The concept of ‘harm’ is clearly important to this framework. Section 4(b) of the AIDA states that a purpose of the legislation is “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”.

Under the AIDA, persons responsible for high-impact AI systems have an obligation to identify, assess, and mitigate risks of harm or biased output (s. 8). Those persons must also notify the Minister “as soon as feasible” if a system for which they are responsible “results or is likely to result in material harm”. There are also a number of oversight and enforcement functions that are triggered by harm or a risk of harm. For example, if the Minister has reasonable grounds to believe that a system may result in harm or biased output, he can demand the production of certain records (s. 14). If there is a serious risk of imminent harm, the Minister may order a person responsible to cease using a high impact system (s. 17). The Minister is also empowered to make public certain information about a system where he believes that there is a serious risk of imminent harm and the publication of the information is essential to preventing it (s. 28). Elevated levels of harm are also a trigger for the offence in s. 39, which involves “knowing or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property”.

‘Harm’ is defined in s. 5(1) to mean:

(a) physical or psychological harm to an individual;

(b) damage to an individual’s property; or

(c) economic loss to an individual.

The repeated use of the term “individual” in this definition places an important limit on the scope of the AIDA. First, it is unlikely that the term ‘individual’ includes a corporation. Typically, the word ‘person’ is considered to include corporations, and the word ‘person’ is used in this sense in the AIDA. This suggests that “individual” is meant to have a different meaning. The federal Interpretation Act is silent on the issue. It is a fair interpretation of the definition of ‘harm’ that “individual” is not the same as “person”, and refers to a natural (human) person. The French version uses the term “individu”, and not “personne”. The harms contemplated by this legislation are therefore harms to individuals and not to corporations.

Defining harm in terms of individuals has other ramifications. The AIDA frames the harms of high-impact AI systems in terms of their impacts on individuals. Importantly, this excludes groups and communities. It also very significantly focuses on what are typically considered quantifiable harms, and uses language that suggests quantifiability (economic loss, damage to property, physical or psychological harm). Some important harms may be difficult to establish or to quantify. For example, class action lawsuits relating to significant data breaches have begun to wash up on the beach of lost causes due to the impossibility of proving material loss, either because, although thousands may have been impacted, the individual losses are impossible to quantify, or because it is impossible to prove a causal link between very real identity theft and that particular data breach. Consider an AI system that manipulates public opinion through an algorithm that drives content to individuals based on its shock value rather than its truth. Say this happens during a pandemic and it convinces people that they should not get vaccinated or take other recommended public health measures. Say some people die because they were misled in this way. Say other people die because they were exposed to infected people who were misled in this way. How does one prove the causal link between the physical harm of injury or death of an individual and the algorithm? What if there is an algorithm that manipulates voter sentiment in a way that changes the outcome of an election? What is the quantifiable economic loss or psychological harm to any individual? How could causation be demonstrated? The harm, once again, is collective.

The EU AI Act has also been criticized for focusing on individual harm, but the wording of that law is still broader than that in the AIDA. The EU AI Act refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights of persons”. This at least introduces a more collective dimension, and it avoids the emphasis on quantifiability.

The federal government’s own Directive on Automated Decision-Making (DADM), which is meant to guide the development of AI used in public sector automated decision systems (ADS), also takes a broader approach to impact. In assessing the potential impact of an ADS, the DADM takes into account: “the rights of individuals or communities”, “the health or well-being of individuals or communities”, “the economic interests of individuals, entities, or communities”, and “the ongoing sustainability of an ecosystem”.

With its excessive focus on individuals, the AIDA is simply tone deaf to the growing global understanding of collective harm caused by the use of human-derived data in AI systems.

One response of the government might be to point out that the AIDA is also meant to apply to “biased output”. Biased output is defined in the AIDA as:

content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (s. 5(1))

The argument here will be that the AIDA will also capture discriminatory biases in AI. However, the phrase “in relation to an individual” once again returns the focus of this definition to individuals, rather than groups. It can be very hard for an individual to demonstrate that a particular decision discriminated against them (especially if the algorithm is obscure). In any event, biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination.

It is also important to note that the definition of “harm” does not include “biased output”, and while the terms are used in conjunction in some cases (for example, in s. 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute, but not others, a judge interpreting the statute might presume that when only one of the terms is used, then it is only that term that is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high-impact system to cease using it or making it available if there is a “serious risk of imminent harm”. Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm”. In both cases, the defined term ‘harm’ is used, but not ‘biased output’.

The goals of the AIDA to protect against harmful AI are both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.

Published in Privacy

As part of my series on Bill C-27, I will be writing about both the proposed amendments to Canada’s private sector data protection law and the part of the Bill that will create a new Artificial Intelligence and Data Act (AIDA). So far, I have been writing about privacy, and my posts on consent, de-identification, data-for-good, and the right of erasure are already available. Posts on the AIDA will follow, although I still have a bit more territory on privacy to cover first. In the meantime, as a teaser, perhaps you might be interested in playing a bit of statutory MadLibs…

Have you ever played MadLibs? It’s a paper-and-pencil game where someone asks the people in the room to supply a verb, noun, adverb, adjective, or body part, and the provided words are used to fill in the blanks in a story. The results are often absurd and sometimes hilarious.

The federal government’s proposal in Bill C-27 for an Artificial Intelligence and Data Act really lends itself to a game of statutory MadLibs. This is because some of the most important parts of the bill are effectively left blank – either the Minister or the Governor-in-Council is tasked in the Bill with filling out the details in regulations. Do you want to play? Grab a pencil, and here goes:

Company X is developing an AI system that will (insert definition of ‘high impact system’). It knows that this system is high impact because (insert how a company should assess impact). Company X has established measures to mitigate potential harms by (insert measures the company took to comply with the regulations) and has also recorded (insert records it kept), and published (insert information to be published).

Company X also had its system audited by an auditor who is (insert qualifications). Company X is being careful, because if it doesn’t comply with (insert a section of the Act for which non-compliance will count as a violation), it could be found to have committed a (insert degree of severity) violation. This could lead to (insert type of proceeding).

Company X, though, will be able to rely on (insert possible defence). However, if (insert possible defence) is unsuccessful, Company X may be liable to pay an Administrative Monetary Penalty if they are a (insert category of ‘person’) and if they have (insert factors to take into account). Ultimately, if they are unhappy with the outcome, they can launch a (insert a type of appeal proceeding).

Because of this regulatory scheme, Canadians can feel (insert emotion) at how their rights and interests are protected.

Published in Privacy

 

Note: The following is my response to the call for submissions on the recommendations following the third review of Canada’s Directive on Automated Decision-Making. Comments are due by June 30, 2022. If you are interested in commenting, please consult the Review Report and the Summary of Key Issues and Proposed Amendments. Comments can be sent to the email address provided in the call for submissions.

 

The federal Directive on Automated Decision-Making (DADM) and its accompanying Algorithmic Impact Assessment tool (AIA) are designed to provide governance for the adoption and deployment of automated decision systems (ADS) by Canada’s federal government. Governments are increasingly looking to ADS in order to speed up routine decision-making processes and to achieve greater consistency in decision-making. At the same time, there are reasons to be cautious. Automated decision systems carry risks of incorporating and replicating discriminatory bias. They may also lack the transparency required of government decision-making, particularly where important rights or interests are at stake. The DADM, which has been in effect since April 2019 (with compliance mandatory no later than April 2020), sets out a series of obligations related to the design and deployment of automated decision-making systems. The extent of the obligations depends upon a risk assessment, and the AIA is the tool by which the level of risk of the system is assessed.

Given that this is a rapidly evolving area, the DADM provides that it will be reviewed every six months. It is now in its third review. The first two reviews led to the clarification of certain obligations in the DADM and to the development of guidelines to aid in its interpretation. This third review proposes a number of more substantive changes. This note comments on some of these changes and proposes an issue for future consideration.

Clarify and Broaden the Scope

A key recommendation in this third round of review relates to the scope of the DADM. Currently, the DADM applies only to ‘external’ services of government – in other words services offered to individuals or organizations by government. It does not apply internally. This is a significant gap when one considers the expanding use of ADS in the employment context. AI-enabled decision systems have been used in hiring processes, and they can be used to conduct performance reviews, and to make or assist in decision-making about promotions and internal workforce mobility. The use of AI tools in the employment context can have significant impacts on the lives and careers of employees. It seems a glaring oversight to not include such systems in the governance regime for ADM. The review team has recommended expanding the scope of the DADM to include internal as well as external services. They note that this move would also extend the DADM to any ADS used for “grants and contributions, awards and recognition, and security screening” (Report at 11). This is an important recommendation and one which should be implemented.

The review team also recommends a clarification of the language regarding the application of the DADM. Currently it puts within its scope “any system, tool, or statistical models used to recommend or make an administrative decision about a client”. Noting that “recommend” could be construed as including only those systems that recommend a specific outcome, as opposed to systems that process information on behalf of a decision-maker, the team proposes replacing “recommend” with “support”. This too is an important recommendation which should be implemented.

Periodic Reviews

Currently the DADM provides for its review every six months. This was always an ambitious review schedule. No doubt it was motivated by the fact that the DADM was a novel tool designed to address a rapidly emerging and evolving technology with potentially significant implications. The idea was to ensure that it was working properly and to promptly address any issues or problems. In this third review, however, the team recommends changing the review period from six months to two years. The rationale is that the six-month timetable makes it challenging for the team overseeing the DADM (which is constantly in a review cycle), and makes it difficult to properly engage stakeholders. They also cite the need for the DADM to “display a degree of stability and reliability, enabling federal institutions and the clients they serve to plan and act with a reasonable degree of confidence.” (Report at 12).

This too is a reasonable recommendation. While more frequent reviews were important in the early days of the DADM and the AIA, reviews every six months seem unduly burdensome once initial hiccups are resolved. A six-month review cycle engages the team responsible for the DADM in a constant cycle of review, which may not be the best use of resources. The proposed two-year review cycle would allow more experience to be garnered with the DADM and AIA, enabling a more substantive assessment of issues arising. Further, a two-year window is much more realistic if stakeholders are to be engaged in a meaningful way. Being asked to comment on reports and proposed changes every six months seems burdensome for anyone – including an already stretched civil society sector. The review document suggests that Canada’s Chief Information Officer could request completion of an off-cycle review if the need arose, leaving room for the possibility that a more urgent issue could be addressed outside of the two-year review cycle.

Data Model and Governance

The third review also proposes amendments to provide for what it describes as a more ‘holistic’ approach to data governance. Currently, the DADM focuses on data inputs – in other words, on assessing the quality, relevance and timeliness of the data used in the model. The review report recommends the addition of an obligation to establish “measures to ensure that data used and generated by the Automated Decision System are traceable, protected, and appropriately retained and disposed of in accordance with the Directive on Service and Digital, Directive on Privacy Practices, and Directive on Security Management”. It also recommends amendments to extend testing and assessment beyond data to the underlying models, in order to assess both data and algorithms for bias or other problems. These are positive amendments which should be implemented.

Explanation

The review report notes that while the DADM requires “meaningful explanations” of how automated decisions were reached, and while guidelines provide some detail as to what is meant by explainability, there is still uncertainty about what explainability entails. The Report recommends adding language in Appendix C, in relation to impact assessment, that will set out the information necessary for ‘explainability’. This includes:

  • The role of the system in the decision-making process;
  • The training and client data, their source and method of collection, if applicable;
  • The criteria used to evaluate client data and the operations applied to process it; and
  • The output produced by the system and any relevant information needed to interpret it in the context of the administrative decision.

Again, this recommendation should be implemented.
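
By way of illustration only, the four elements listed above could be captured in a simple structured ‘explanation record’ that a department publishes alongside a decision. The DADM and its guidelines do not prescribe any such format; the class, field names and example values below are entirely my own assumptions, sketched to show how the recommended information might be organized in practice.

```python
# Illustrative sketch only: the DADM does not prescribe a format for
# explanations. This maps the four recommended elements onto a structured
# record that could accompany an administrative decision.

from dataclasses import dataclass
from typing import List

@dataclass
class ExplanationRecord:            # hypothetical structure, not an official schema
    system_role: str                # role of the system in the decision-making process
    data_sources: List[str]         # training and client data, their source and method of collection
    evaluation_criteria: List[str]  # criteria used to evaluate client data and operations applied to process it
    output_description: str         # output produced and information needed to interpret it

example = ExplanationRecord(
    system_role="Flags applications for manual review; does not render the final decision",
    data_sources=[
        "Application form fields supplied by the client",
        "Historical case outcomes used to train the model",
    ],
    evaluation_criteria=["Completeness of the application", "Similarity to previously decided cases"],
    output_description="A low/medium/high flag shown to the officer, with the main contributing factors",
)
print(example.system_role)
```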

Reasons for Automation

The review would also require those developing ADM systems for government to specifically identify why it was considered necessary or appropriate to automate the existing decision-making process. The Report refers to a “clear and demonstrable need”. This is an important additional criterion as it requires transparency as to the reasons for automation – and that these reasons go beyond the fact that vendor-demonstrated technologies look really cool. As the authors of the review note, requiring justification also helps to assess the parameters of the system adopted – particularly if the necessity and proportionality approach favoured by the Office of the Privacy Commissioner of Canada is adopted.

Transparency

The report addresses several issues that are relevant to the transparency dimensions of the DADM and the accompanying AIA. Transparency is an important element of the DADM, and it is key both to the legitimacy of the adoption of ADS by government and to their ongoing use. Without transparency in government decision-making that impacts individuals, organizations and communities, there can be no legitimacy. There are a number of transparency elements built into the DADM. For example, there are requirements to provide notice of automated decision systems, a right to an explanation of decisions that is tailored to the impact of the decision, and a requirement not just to conduct an AIA, but to publish the results. The review report includes a number of recommendations to improve transparency. These include a recommendation to clarify when an AIA must be completed and released, greater transparency around peer review results, more explicit criteria for explainability, and adding additional questions to the AIA. These are all welcome recommendations.

At least one of these recommendations may go some way to allaying my concerns with the system as it currently stands. The documents accompanying the report (slide 3 of summary document) indicate that there are over 300 AI projects across 80% of federal institutions. However, at the time of writing, only four AIAs were published on the open government portal. There is clearly a substantial lag between development of these systems and release of the AIAs. The recommendation that an AIA be not just completed but also released prior to the production of the system is therefore of great importance to ensuring transparency.

It may be that some of the discrepancy in the numbers is attributable to the fact that compliance with the DADM only became mandatory in 2020, and that it did not apply to projects already underway. For transparency’s sake, I would also recommend that a public register of ADS be created that contains basic information about all government ADS. This could include their existence and function, as well as some transparency regarding explainability, the reasons for adoption, and the measures taken to review, assess and ensure the reliability of these systems. Although it is too late, in the case of these pre-existing systems, to perform a proactive AIA, there should be some form of reporting tool that can be used to provide important information, for transparency purposes, to the public.

Consideration for the Future

The next review of the DADM and the AIA should also involve a qualitative assessment of the AIAs that have been published to date. If the AIA is to be a primary tool not just for assessing ADS but for providing transparency about them, then the completed assessments need to be good. Currently there is a requirement to conduct an AIA for a system within the scope of the DADM – but there is no explicit requirement for it to be of a certain quality. A quick review of the four AIAs currently available online shows some discrepancy between them in terms of the quality of the assessment. For example, the project description for one such system is an unhelpful 9-word sentence that does not make clear how AI is actually part of the project. This is in contrast to another that describes the project in a 14-line paragraph. The two are clearly highly divergent in terms of the level of clarity and detail provided.

The first of these two AIAs also seems to contain contradictory answers to the AIA questionnaire. For example, the answer to the question “Will the system only be used to assist a decision-maker” is ‘yes’. Yet the answer to the question “Will the system be replacing a decision that would otherwise be made by a human” is also ‘yes’. Either one of these answers is incorrect, or the answers do not capture how the respondent interpreted these questions. These are just a few examples. It is easy to see how use of the AIA tool can range from engaged to pro forma.

The obligations imposed on departments with respect to ADS vary depending upon the risk assessment score. This score is evaluated through the questionnaire, and one of the questions asks “Are clients in this line of business particularly vulnerable?” In the AIA for an access to information (ATIP) tool, the answer given to this question is “no”. Of course, the description of the tool is so brief that it is hard to get a sense of how it functions. However, I would think that the clientele for an ATIP portal would be quite diverse. Some users will be relatively sophisticated (e.g., journalists or corporate users). Others will be inexperienced. For some of these, information sought may be highly important to them as they may be seeking access to government information to right a perceived wrong, to find out more about a situation that adversely impacts them, and so on. In my view, this assessment of the vulnerability of the clients is not necessarily accurate. Yet the answer provided contributes to a lower overall score and thus a lower level of accountability. My recommendation for the next round of reviews is to assess the overall effectiveness of the AIA tool in terms of the information and answers provided and in terms of their overall accuracy.
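
To make the scoring point concrete, here is a deliberately simplified sketch of how a single questionnaire answer can move a system across an impact-level boundary. The weights, thresholds and question names below are invented for illustration; the actual AIA questionnaire and its scoring are maintained by the Treasury Board Secretariat and are not reproduced here.

```python
# Invented scoring scheme, for illustration only. It shows how one answer
# (e.g., whether clients are 'particularly vulnerable') can change the impact
# level, and with it the level of obligation that attaches to the system.

ILLUSTRATIVE_WEIGHTS = {
    "clients_particularly_vulnerable": 3,   # invented weight
    "decision_affects_rights": 4,           # invented weight
    "system_replaces_human_decision": 3,    # invented weight
}

def impact_level(answers: dict) -> int:
    """Map a raw score onto an impact level using invented thresholds."""
    score = sum(w for q, w in ILLUSTRATIVE_WEIGHTS.items() if answers.get(q))
    if score <= 3:
        return 1
    if score <= 7:
        return 2
    if score <= 10:
        return 3
    return 4

atip_portal = {
    "clients_particularly_vulnerable": False,   # the answer given in the published AIA
    "decision_affects_rights": True,
    "system_replaces_human_decision": True,
}
print(impact_level(atip_portal))        # 2 under these invented numbers

atip_portal["clients_particularly_vulnerable"] = True   # answering 'yes' instead
print(impact_level(atip_portal))        # 3 - a higher level, and higher obligations
```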

I note that the review report recommends adding questions to the AIA in order to improve the tool. Quite a number of these are free text answers, which require responses to be drafted by the party completing the AIA. Proposed questions include ones relating to the user needs to be addressed, how the system will meet those needs, and the effectiveness of the system in meeting those needs, along with reasons for this assessment. Proposed questions will also ask whether non-AI-enabled solutions were also considered, and if so, why AI was chosen as the preferred method. A further question asks what the consequences would be of not deploying the system. This additional information is important both to assessing the tool and to providing transparency. However, as noted above, the answers will need to be clear and sufficiently detailed in order to be of any use.

The AIA is crucial to assessing the level of obligation and to ensuring transparency. If AIAs are pro forma or excessively laconic, then the DADM can be as finely tuned as can be, but it will still not achieve desired results. The review committee’s recommendation that plain language summaries of peer review assessments also be published will provide a means of assessing the quality of the AIAs, and thus it is an important recommendation to strengthen both transparency and compliance.

A final issue that I would like to address is that, to achieve transparency, people will need to be able to easily find and access the information about the systems. Currently, AIAs are published on the Open Government website. There, they are listed alphabetically by title. This is not a huge problem right now, since there are only four of them. As more are published, it would be helpful to have a means of organizing them by department or agency, or by other criteria (including risk/impact score) to improve their findability and usability. Further, it will be important that any peer review summaries are linked to the appropriate AIAs. In addition to publication on the open government portal, links to these documents should be made available from department, agency or program websites. It would also be important to have an index or registry of AI in the federal sector – including not just those projects covered by the DADM, but also those in production prior to the DADM’s coming into force.

[Note: I have written about the DADM and the AIA from an administrative law perspective. My paper, which looks at the extent to which the DADM addresses administrative law concerns regarding procedural fairness, can be found here.]

Published in Privacy

 

Ontario has just released its Beta principles for the ethical use of AI and data enhanced technologies in Ontario. These replace the earlier Alpha principles, and are revised based upon commentary and feedback on the Alpha version. Note that these principles are designed for use in relation to AI technologies adopted for the Ontario public sector.

Below you will find a comparison I created to provide a quick glance at what has changed since the previous version. For each principle, the Alpha version is set out first, followed by the Beta version. I have focused on changes to the principles themselves, rather than to the “Why it Matters” section of each principle.

One important change to note is that the Beta version now refers not just to technologies used to make decisions, but also technologies used to assist in decision-making.

 

 

Principles for Ethical Use [Alpha]

The alpha Principles for Ethical Use set out six points to align the use of data-driven technologies within government processes, programs and services with ethical considerations and values. Our team has undertaken extensive jurisdictional scans of ethical principles across the world, in particular the US, the European Union and major research consortiums. The Ontario “alpha” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

Principles for Ethical Use [Beta]

These Principles for Ethical Use set out six points to align the use of data enhanced technologies within government processes, programs and services with ethical considerations and values.

 

The Trustworthy AI team within Ontario’s Digital Service has undertaken extensive jurisdictional scans of ethical principles across the world, in particular New Zealand, the United States, the European Union and major research consortiums.

 

The Ontario “beta” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

We’re in the early days of bringing these principles to life. We encourage you to adopt as much of the principles as possible, and to share your feedback with us. You can email the Trustworthy AI team for more details.

 

You can also check out the Transparency Guidelines (GitHub).

1. Transparent and Explainable [Alpha]

 

There must be transparent and responsible disclosure around data-driven technology like Artificial Intelligence (AI), automated decisions and machine learning (ML) systems to ensure that people understand outcomes and can discuss, challenge and improve them.

 

 

Where automated decision making has been used to make individualized and automated decisions about humans, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject should be available.

 

Why it Matters

 

There is no way to hold data-driven technologies accountable, particularly as they impact various historically disadvantaged groups if the public is unaware of the algorithms and automated decisions the government is making. Transparency of use must be accompanied with plain language explanations for the public to have access to and not just the technical or research community. For more on this, please consult the Transparency Guidelines.

 

1. Transparent and explainable [Beta]

 

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

 

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

 

Why it matters

 

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

 

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

 

For more on this, please consult the Transparency Guidelines.

 

2. Good and Fair [Alpha]

 

Data-driven technologies should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards to ensure a fair and just society.

 

Designers, policy makers and developers should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the lifecycle of use. The definitions of good and fair are intentionally vague to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

2. Good and fair [Beta]

 

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

3. Safe [Alpha]

 

Data-driven technologies like AI and ML systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers and developers should implement mechanisms and safeguards, such as capacity for human determination and complete halt of the system operations, that are appropriate to the context and predetermined at initial deployment.

 


Why it matters

Creating safe data-driven technologies means embedding safeguards throughout the life cycle of the deployment of the algorithmic system. Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. Despite our best efforts there will be unexpected outcomes and impacts. Systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are no longer agreeable that a human can adapt, correct or improve the system.

3. Safe [Beta]

 

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle.

 

Why it matters

Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.

 

Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

 

4. Accountable and Responsible [Alpha]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the above principles. Algorithmic systems should be periodically peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Where AI is used to make decisions about individuals there needs to be a process for redress to better understand how a given decision was made.

 

Why it matters

 

In order for there to be accountability for decisions that are made by an AI or ML system a person, group of people or organization needs to be identified prior to deployment. This ensures that if redress is needed there is a preidentified entity that is responsible and can be held accountable for the outcomes of the algorithmic systems.

 

4. Accountable and responsible [Beta]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted.

 

Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Why it matters

 

Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.

 

While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.

 

Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system.

 

5. Human Centric [Alpha]

 

The processes and outcomes behind an algorithm should always be developed with human users as the main consideration. Human centered AI should reflect the information, goals, and constraints that a human decision-maker weighs when arriving at a decision.

 

Keeping human users at the center entails evaluating any outcomes (both direct and indirect) that might affect them due to the use of the algorithm. Contingencies for unintended outcomes need to be in place as well, including removing the algorithms entirely or ending their application.

 

Why it matters

 

Placing the focus on human user ensures that the outcomes do not cause adverse effects to users in the process of creating additional efficiencies.

 

In addition, Human-centered design is needed to ensure that you are able to keep a human in the loop when ensuring the safe operation of an algorithmic system. Developing algorithmic systems with the user in mind ensures better societal and economic outcomes from the data-driven technologies.

 

5. Human centric [Beta]

 

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

 

Why it matters

 

Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.

 

Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.

 

Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

 

6. Sensible and Appropriate [Alpha]

 

Data-driven technologies like AI or ML shall be developed with consideration of how it may apply to specific sectors or to individual cases and should align with the Canadian Charter of Human Rights and Freedoms and with Federal and Provincial AI Ethical Use.

 

Other biproducts of deploying data-driven technologies such as environmental, sustainability, societal impacts should be considered as they apply to specific sectors and use cases and applicable frameworks, best practices or laws.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector and user. As a result, while the above principles are a good starting point for developing ethical data-driven technologies it is important that additional considerations be given to the specific sectors and environments to which the algorithm is applied.

 

Experts in both technology and ethics should be consulted in development of data-driven technologies such as AI to guard against any adverse effects (including societal, environmental and other long-term effects).

6. Sensible and appropriate [Beta]

 

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

 

Encouraging sector specific guidance also helps promote a culture of shared ethical responsibilities and a dialogue around the important issues raised by data enhanced systems.

 

Published in Privacy

 

On December 7, 2021, the privacy commissioners of Quebec, British Columbia and Alberta issued orders against the US-based company Clearview AI, following its refusal to voluntarily comply with the findings in the joint investigation report they issued along with the federal privacy commissioner on February 3, 2021.

Clearview AI gained worldwide attention in early 2020 when a New York Times article revealed that its services had been offered to law enforcement agencies for use in a largely non-transparent manner in many countries around the world. Clearview AI’s technology also has the potential for many different applications including in the private sector. It built its massive database of over 10 billion images by scraping photographs from publicly accessible websites across the Internet, and deriving biometric identifiers from the images. Users of its services upload a photograph of a person. The service then analyzes that image and compares it with the stored biometric identifiers. Where there is a match, the user is provided with all matching images and their metadata, including links to the sources of each image.
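
The paragraph above effectively describes the architecture of a scraped-image face search service: collect images and their source URLs, derive a biometric identifier (an embedding) from each, and then match uploaded query photos against the stored identifiers. The sketch below captures only that general pattern; it is not based on any knowledge of Clearview AI’s actual implementation, and the class, similarity measure and threshold are illustrative assumptions.

```python
# Schematic of the general pipeline described above: scraped images are reduced
# to numeric "biometric identifiers" (embeddings) stored with their source
# metadata, and a query photo is matched against them. Purely illustrative;
# not Clearview AI's actual design.

import numpy as np

class FaceSearchIndex:
    def __init__(self):
        self.embeddings = []   # one vector per scraped image
        self.metadata = []     # e.g., the source URL of each image

    def add_scraped_image(self, embedding: np.ndarray, source_url: str) -> None:
        """Store the identifier derived from a scraped photo, with its source."""
        self.embeddings.append(embedding / np.linalg.norm(embedding))
        self.metadata.append({"source_url": source_url})

    def search(self, query_embedding: np.ndarray, threshold: float = 0.9) -> list:
        """Return source metadata for every stored image that matches the query."""
        q = query_embedding / np.linalg.norm(query_embedding)
        results = []
        for emb, meta in zip(self.embeddings, self.metadata):
            score = float(np.dot(q, emb))   # cosine similarity of normalized vectors
            if score >= threshold:
                results.append({**meta, "score": round(score, 3)})
        return results

# Usage with made-up vectors standing in for the output of a face-embedding model:
index = FaceSearchIndex()
index.add_scraped_image(np.array([0.9, 0.1, 0.4]), "https://example.com/photo1.jpg")
index.add_scraped_image(np.array([0.1, 0.9, 0.2]), "https://example.com/photo2.jpg")
print(index.search(np.array([0.88, 0.12, 0.41])))   # returns photo1's metadata with a high score
```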

Clearview AI has been the target of investigation by data protection authorities around the world. France’s Commission Nationale de l'Informatique et des Libertés has found that Clearview AI breached the General Data Protection Regulation (GDPR). Australia and the UK conducted a joint investigation which similarly found the company to be in violation of their respective data protection laws. The UK commissioner has since issued a provisional view, stating its intent to levy a substantial fine. Legal proceedings are currently underway in Illinois, a state which has adopted biometric privacy legislation. Canada’s joint investigation report issued by the federal, Quebec, B.C. and Alberta commissioners found that Clearview AI had breached the federal Personal Information Protection and Electronic Documents Act, as well as the private sector data protection laws of each of the named provinces.

The Canadian joint investigation set out a series of recommendations for Clearview AI. Specifically, it recommended that Clearview AI cease offering its facial recognition services in Canada, “cease the collection, use and disclosure of images and biometric facial arrays collected from individuals in Canada”, and delete any such data in its possession. Clearview AI responded by saying that it had temporarily ceased providing its services in Canada, and that it was willing to continue to do so for a further 18 months. It also indicated that if it offered services in Canada again, it would require its clients to adopt a policy regarding facial recognition technology, and it would offer an audit trail of searches.

On the second and third recommendations, Clearview AI responded that it was simply not possible to determine which photos in its database were of individuals in Canada. It also reiterated its view that images found on the Internet are publicly available and free for use in this manner. It concluded that it had “already gone beyond its obligations”, and that while it was “willing to make some accommodations and met some of the requests of the Privacy Commissioners, it cannot commit itself to anything that is impossible and or [sic] required by law.” (Letter reproduced at para 3 of Order P21-08).

In this post I consider three main issues that flow from the orders issued by the provincial commissioners. The first relates to the cross-border reach of Canadian law. The second relates to enforcement (or lack thereof) in the Canadian context, particularly as compared with what is available in other jurisdictions such as the UK and the EU. The third issue relates to the interest shown by the commissioners in a compromise volunteered by Clearview AI in the ongoing Illinois litigation – and what this might mean for Canadians’ privacy.

 

1. Jurisdiction

Clearview AI maintains that Canadian laws do not apply to it. It argues that it is a US-based company with no physical presence in Canada. Although it initially provided its services to Canadian law enforcement agencies (see this CBC article for details of the use of Clearview by Toronto Police Services), it has since ceased to do so, and thus no longer has clients in Canada. It scraped its data from platform companies such as Facebook and Instagram, and while many Canadians have accounts with such companies, Clearview’s scraping activities involved access to data hosted on platforms outside of Canada. It therefore argued that not only did it not operate in Canada, it also had no ‘real and substantial’ connection to Canada.

The BC Commissioner does not directly address this issue. In his Order, he finds a hook for jurisdiction by referring to the personal data as having been “collected from individuals in British Columbia without their consent”, although it is clear there was no direct collection. He also notes Clearview’s active contemplation of resuming its services in Canada. Alberta’s Commissioner makes a brief reference to jurisdiction, simply stating that “Provincial privacy legislation applies to any private sector organization that collects, uses and discloses information of individuals within that province” (at para 12). The Quebec Commissioner, by contrast, gives a thorough discussion of the jurisdictional issues. In the first place, she notes that some of the images came from public Quebec sources (e.g., newspaper websites). She also observes that nothing indicates that images scraped from Quebec sources have been removed from the database; they therefore continue to be used and disclosed by the company.

Commissioner Poitras cited the Federal Court decision in Lawson for the principle that PIPEDA could apply to a US-based company that collected personal information from Canadian sources – so long as there is a real and substantial connection to Canada. She found a connection to Quebec in the free accounts offered to, and used by, Quebec law enforcement officials. She noted that the RCMP, which operates in Quebec, had also been a paying client of Clearview’s. When Clearview AI was used by clients in Quebec, those clients uploaded photographs to the service in the search for a match. This also constituted a collection of personal information by Clearview AI in Quebec.

Commissioner Poitras found that the location of Clearview’s business and its servers is not a determinative jurisdictional factor for a company that offers its services online around the world, and that collects personal data from the Internet globally. She found that Clearview AI’s database was at the core of its services, and a part of that database was comprised of data from Quebec and about Quebeckers. Clearview had offered its service in Quebec, and its activities had a real impact on the privacy of Quebeckers. Commissioner Poitras noted that millions of images of Quebeckers were appropriated by Clearview without the consent of the individuals in the images; these images were used to build a global biometric facial recognition database. She found that it was particularly important not to create a situation where individuals are denied recourse under quasi-constitutional laws such as data protection laws. These elements in combination, in her view, would suffice to create a real and substantial connection.

Commissioner Poitras did not accept that Clearview’s suspension of Canadian activities changed the situation. She noted that information that had been collected in Quebec remained in the database, which continued to be used by the company. She stated that a company could not appropriate the personal information of a substantial number of Quebeckers, commercialise this information, and then avoid the application of the law by saying they no longer offered services in Quebec.

The jurisdictional questions are both important and thorny. This case is different from cases such as Lawson and Globe24hrs, where the connections with Canada were more straightforward. In Lawson, there was clear evidence that the company offered its services to clients in Canada. It also directly obtained some of its data about Canadians from Canadian sources. In Globe24hrs, there was likewise evidence that Canadians were being charged by the Romanian company to have their personal data removed from the database. In addition, the data came from Canadian court decisions that were scraped from websites located in Canada. In Clearview AI, while some of the scraped data may have been hosted on servers located in Canada, most of it was scraped from offshore social media platform servers. If Clearview AI stopped offering its services in Canada and stopped scraping data from servers located in Canada, what recourse would Canadians have? The Quebec Commissioner attempts to address this question, but her reasons are based on factual connections that might not be present in the future, or in cases involving other data-scraping respondents. What is needed is a theory of real and substantial connection that specifically addresses the scraping of data from third-party websites, contrary to those websites’ terms of use and to the legal expectations of their users, and that can anchor the jurisdiction of Canadian law even when the scraper has no other connection to Canada.

Canada is not alone in facing these jurisdictional issues – Australia’s orders to Clearview AI are currently under appeal, and the jurisdiction of the Australian Commissioner to make such orders will be one of the issues on appeal. A jurisdictional case – one that is convincing not just to privacy commissioners but to the foreign courts that may one day have to determine whether to enforce Canadian decisions – needs to be made.

 

2. Enforcement

At the time the facts of the Clearview AI investigation arose, all four commissioners had limited enforcement powers. The three provincial commissioners could issue orders requiring an organization to change its practices. The federal commissioner had no order-making powers, but could apply to the Federal Court to ask that court to issue orders. The relative impotence of the commissioners is illustrated by Clearview’s hubristic response, cited above, which indicates that it had already “gone beyond its obligations”. Clearly, it did not consider anything the commissioners had to say on the matter to amount to an obligation.

The Canadian situation can be contrasted with that in the EU, where commissioners’ orders requiring organizations to change their non-compliant practices are now reinforced by the power to levy significant administrative monetary penalties (AMPs). The same situation exists in the UK. There, the data commissioner has just issued a preliminary enforcement notice and a proposed fine of £17M against Clearview AI. As noted earlier, the enforcement situation is beginning to change in Canada – Quebec’s newly amended legislation permits the levying of substantial AMPs. When some version of Bill C-11 is reintroduced in Parliament in 2022, it will likely also contain the power to levy AMPs. BC and Alberta may eventually follow suit. When this happens, the challenge will be first, to harmonize enforcement approaches across those jurisdictions; and second, to ensure that these penalties can meaningfully be enforced against offshore companies such as Clearview AI.

On the enforcement issue, it is perhaps also worth noting that the orders issued by the three Commissioners in this case are all slightly different. The Quebec Commissioner orders Clearview AI to cease collecting images of Quebeckers without consent, and to cease using these images to create biometric identifiers. She also orders the destruction, within 90 days of receipt of the order, of all of the images collected without the consent of Quebeckers, as well as the destruction of the biometric identifiers. Alberta’s Commissioner orders that Clearview cease offering its services to clients in Alberta, cease the collection and use of images and biometrics collected from individuals in Alberta, and delete the same from its databases. BC’s order prohibits Clearview AI from offering to clients in British Columbia services that use data collected from British Columbians without their consent. He also orders that Clearview AI use “best efforts” to cease its collection, use and disclosure of images and biometric identifiers of British Columbians collected without their consent, as well as to use the same “best efforts” to delete images and biometric identifiers collected without consent.

It is to these “best efforts” that I next turn.

 

3. The Illinois Compromise

All three Commissioners make reference to a compromise offered by Clearview AI in the course of ongoing litigation in Illinois under Illinois’ Biometric Information Privacy Act. By referring to “best efforts” in his Order, the BC Commissioner seems to be suggesting that something along these lines would be an acceptable compromise in his jurisdiction.

In its response to the Canadian commissioners, Clearview AI raised the issue that it cannot easily know which photographs in its database are of residents of particular provinces, particularly since these are scraped from the Internet as a whole – and often from social media platforms hosted outside Canada.

Yet Clearview AI has indicated that it has changed some of its business practices to avoid infringing Illinois law. This includes “cancelling all accounts belonging to any entity based in Illinois” (para 12, BC Order). It also includes blocking from any searches all images in the Clearview database that are geolocated in Illinois. In the future, it also offers to create a “geofence” around Illinois. This means that it “will not collect facial vectors from any scraped images that contain metadata associating them with Illinois” (para 12 BC Order). It will also “not collect facial vectors from images stored on servers that are displaying Illinois IP addresses or websites with URLs containing keywords such as “Chicago” or “Illinois”.” Clearview apparently offers to create an “opt-out” mechanism whereby people can ask to have their photos excluded from the database. Finally, it will require its clients to not upload photos of Illinois residents. If such a photo is uploaded, and it contains Illinois-related metadata, no search will be performed.
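The sketch below illustrates, in simplified form, the kind of metadata-based exclusion rules described above. It is not Clearview AI's code; the field names, keyword list, and region codes are assumptions chosen for illustration. Its point is that every rule turns on location signals (EXIF metadata, server IP geolocation, URL keywords) actually being present.

```python
# Hypothetical geofence-style rules, for illustration only.
BLOCKED_URL_KEYWORDS = {"chicago", "illinois"}   # assumed keyword list
OPTED_OUT_HASHES: set[str] = set()               # hashes of opted-out photos

def exclude_from_scraping(image_url: str, exif: dict, server_region: str) -> bool:
    """Would this scraped image be excluded from facial-vector extraction?"""
    url = image_url.lower()
    if any(keyword in url for keyword in BLOCKED_URL_KEYWORDS):
        return True
    if server_region == "IL":              # server IP geolocated to Illinois
        return True
    return exif.get("gps_region") == "IL"  # only works if GPS metadata exists

def refuse_search(probe_hash: str, probe_metadata: dict) -> bool:
    """Would a client's uploaded probe photo be refused?"""
    if probe_hash in OPTED_OUT_HASHES:
        return True
    return probe_metadata.get("gps_region") == "IL"
```

A photo of an Illinois resident posted without location metadata on a platform hosted elsewhere would pass every one of these checks, which is the gap discussed next.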

The central problem with accepting the ‘Illinois compromise’ is that it allows a service built on illegally scraped data to continue operating with only a reduced privacy impact. It also, ironically, requires individuals who wish to benefit from the compromise to provide more personal data in their online postings. Many people deliberately suppress geolocation information from their photographs to protect their privacy, yet the ‘Illinois compromise’ can only exclude photos that contain geolocation data. Even with geolocation turned on, it would not exclude, for example, the vacation pics of BC residents taken outside of BC. Further, limiting the scraping of images from Illinois-based sites will not prevent the photos of Illinois-based individuals from being included within the database (a) if they are already in there, and (b) if the images are posted on social media platforms hosted elsewhere.

Clearview AI is a business built upon data collection practices that are illegal in a large number of countries outside the US. The BC Commissioner is clearly of the opinion that a compromise solution is the best that can be hoped for, and he may be right in the circumstances. Yet it is a bitter pill to think that such flouting of privacy laws will ultimately be rewarded, as Clearview gets to keep and commercialize its facial recognition database. Accepting such a compromise could limit the harms of the improper exploitation of personal data, but it does not stop the exploitation of that data in all circumstances. And even this unhappy compromise may be out of reach for Canadians given the rather toothless nature of our current laws – and the jurisdictional challenges discussed earlier.

If anything, this situation cries out for global and harmonized solutions. Notably it requires the US to do much more to bring its wild-west approach to personal data exploitation in line with the approaches of its allies and trading partners. It also will require better cooperation on enforcement across borders. It may also call for social media giants to take more responsibility when it comes to companies that flout their terms and conditions to scrape their sites for personal data. The Clearview AI situation highlights these issues – as well as the dramatic impacts data misuse may have on privacy as personal data continues to be exploited for use in powerful AI technologies.

Published in Privacy

 

It has been quite a while since I posted to my blog. The reason has simply been a crushing workload that has kept me from writing anything that did not have an urgent deadline! In the meantime, so much has been going on in terms of digital and data law and policy in Canada and around the world. I will try to get back on track!

Artificial intelligence (AI) has been garnering a great deal of attention globally – for its potential to drive innovation, its capacity to solve urgent challenges, and its myriad applications across a broad range of sectors. In an article that is forthcoming in the Canadian Journal of Law and Technology, Bradley Henderson, Colleen Flood and I examine issues of algorithmic and data bias leading to discrimination in the healthcare context. AI technologies have tremendous potential across the healthcare system – AI innovation can improve workflows, enhance diagnostics, accelerate research and refine treatment. Yet at the same time, AI technologies bring with them many concerns, among them, bias and discrimination.

Bias can take many forms. In our paper, we focus on those manifestations of bias that can lead to discrimination of the kind recognized in human rights legislation and the Charter. Discrimination can arise from flawed assumptions coded into algorithms, from adaptive AI that makes its own correlations, or from unrepresentative data (or from a combination of these).

There are some significant challenges when it comes to the data used to train AI algorithms. Available data may reflect existing disparities and discrimination within the healthcare system. For example, some communities may be underrepresented in the data because of a lack of adequate access to healthcare, or because a lack of trust in the healthcare system tends to keep them away until health issues become acute. Lack of prescription drug coverage or access to paid sick leave may also impact when and how people access health care services. Racial or gender bias in terms of how symptoms or concerns are recorded, or how illness is diagnosed, can also affect the quality and representativeness of existing stores of data. AI applications developed and trained on data from US-based hospitals may reflect the socio-economic biases that impact access to health care in the US, and the extent to which they are generalizable to the Canadian population or to sub-populations may be questionable. In some cases, data about race or ethnicity may be important markers for understanding diseases and how they manifest themselves, but these data may be lacking.

There are already efforts afoot to ensure better access to high quality health data for research and innovation in Canada, and our paper discusses some of these. Addressing data quality and data gaps is certainly one route to tackling bias and discrimination in AI. Our paper also looks at some of the legal and regulatory mechanisms available. On the legal front, we note that there are some recourses available where things go wrong, including human rights complaints, lawsuits for negligence, or even Charter challenges. However, litigating the harms caused by algorithms and data is likely to be complex, expensive, and fraught with difficulty. It is better by far to prevent harms than to push a system to improve itself after costly litigation. We consider the evolving regulatory landscape in Canada to see what approaches are emerging to avoid or mitigate harms. These include regulatory approaches for AI-enabled medical devices, and advanced therapeutic products. However, these systems focus on harms to human health, and would not apply to AI tools developed to improve access to healthcare, manage workflows, conduct risk assessments, and so on. There are regulatory gaps, and we discuss some of these. The paper also makes recommendations regarding improving access to better data for research and innovation, with the accompanying necessary enhancements to privacy laws and data governance regimes to ensure the protection of the public.

One of the proposals made in the paper is that bias and discrimination in healthcare-related AI applications should be treated as a safety issue, bringing a broader range of applications under Health Canada regulatory regimes. We also discuss lifecycle regulatory approaches (as opposed to one-off approvals), and providing warnings about data gaps and limitations. We also consider enhanced practitioner licensing and competency frameworks, requirements at the procurement stage, certification standards and audits. We call for law reform to human rights legislation which is currently not well-adapted to the AI context.

In many ways, this paper is just a preliminary piece. It lays out the landscape and identifies areas where there are legal and regulatory gaps and a need for both law reform and regulatory innovation. The paper is part of the newly launched Machine MD project at uOttawa, which is funded by the Canadian Institutes of Health Research and will run for the next four years.

The full pre-print text of the article can be found here.

Published in Privacy

 

The following is my submission to the Ontario government's Consultation on Developing Ontario's Artificial Intelligence (AI) Framework. The Consultation closed on June 4, 2021.


Thank you for the opportunity to provide input on the development of trustworthy AI in Ontario. Due to time pressures my comments will be relatively brief. Hopefully there will be other opportunities to engage with this process.

Developing a framework for the governance of AI in Ontario is important, and it is good to see that this work is underway in Ontario. I note that the current consultation focuses on AI for use in the public sector. Similar work needs to be done for the governance of AI that will be developed and deployed in the private sector context. I hope that this work is also being contemplated.

As I am sure you know, the federal government has already developed a Directive on Automated Decision-Making (DADM) which applies to a broad range of uses of AI in the federal public sector context. It comes with an algorithmic impact assessment tool. Although I appreciate the sensitivities around sovereignty within a province’s own spheres of competence, there is much to be said for more unified national approaches to many regulatory issues – particularly in the digital context. One option for Ontario is to use the DADM as a starting point for its approach to public sector AI governance, and to assess and adapt it for use in Ontario. This would allow Ontario to take advantage of an approach that is already well developed, and into which a considerable amount of thoughtful work has been invested. It is both unnecessary and counterproductive to reinvent the wheel. Serious consideration should be given – as a matter of public policy – to adopting, where possible, harmonized approaches to the governance of digital technologies.

At the same time, I note that the consultation document suggests that Ontario might go beyond a simple internal directive and actually provide an accountability framework that would give individuals direct recourse in cases where government does not meet whatever requirements are established. A public accountability framework is lacking in the federal DADM, and would be most welcome in Ontario.

The proposed public sector framework for Ontario is organized around three broad principles: No AI in secret; AI use Ontarians can trust; and AI that serves all Ontarians. These are good, if broad, principles. The real impact of this governance initiative will, of course, lie in its detail. However, it is encouraging to see a commitment to transparency, openness and public participation. It is also important that the government recognize the potential for AI to replicate or exacerbate existing inequities and to commit to addressing equity and inclusion.

My comments will address each of the principles in turn.

1. No AI in Secret

The consultation document states that “for people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists. As a result, the government needs to be transparent about how, when, and why these tools are used so that people have a right to address potential biases created by the AI algorithms.” I agree. A public register of AI tools in use by government, along with access to details about these tools would be most welcome.

I do question, however, what is meant by “government” in this statement. In other words, I would be very interested to know more about the scope of what is being proposed. It was only a short while ago that we learned, for example, that police services in Ontario had made use of Clearview AI’s controversial facial recognition database. In some cases, it seems that senior ranks of the police may not even have been aware of this use. Ontario’s Privacy Commissioner at the time expressed concerns over this practice.

This case raises important questions regarding the scope of the proposed commitment to transparency and AI. The first is whether police services will be included under government AI governance commitments – and if they are not, why not, and what measures will be put in place to govern AI used in the law enforcement context. It is also important to know what other agencies or departments will be excluded. A further question is whether AI-related commitments at the provincial level will be extended to municipalities, or whether they are intended only for use in the provincial public sector. Another question is whether the principles will only apply to AI developed within government or commissioned by government. In other words, will any law or guidance developed also apply to the myriad services that might otherwise be available to government? For example, will new rules apply to the decision by a department to use the services of a human resources firm that makes use of AI in its recruitment processes? Will they apply to workplace monitoring software and productivity analytics services that might be introduced in the public service? On this latter point, I note that it is unclear whether the commitment to AI governance relates only to AI that affects the general population, as opposed to AI used to manage government employees. These issues of application and scope of any proposed governance framework are important.

2. Use Ontarians can Trust

The second guiding principle is “Use Ontarians can Trust”. The commitment is framed in these terms: “People building, procuring, and using AI have a responsibility to the people of Ontario that AI never puts people at risk and that proper guardrails are in place before the technology is used by the government.”

One of the challenges here is that there are so many types of AI and so many contexts in which AI can be used. Risk is inevitable, and some of the risks may involve complex harms. In some cases, these harms may be difficult to foresee. The traffic-predicting algorithm used as an illustration in this part of the consultation document has fairly clear-cut risk considerations. The main issue will be whether such an algorithm reduces the risk of serious accidents, for example. The risks from an algorithm that determines who is or is not eligible to receive social assistance benefits, on the other hand, will be much more complex. One significant risk will be that people who need the benefit will not receive it. Other risks might include the exacerbation of existing inequalities, or even greater alienation in the face of a seemingly impersonal system. These risks are serious, but some are intangible – they might be ignored, dismissed or underestimated. Virginia Eubanks and others have observed that experimentation with the use of AI in government tends to take place in the context of programs and services for the least empowered members of society. This is troubling. The concept of risk must be robust and multifaceted. Decisions about where to deploy AI must be equitable and unbiased – not just the AI itself.

One of the initial recommendations in this section is to propose “ways to update Ontario’s rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine when which rules apply.” I agree that work needs to be done to update Ontario’s legal frameworks in order to better address the challenges of AI. Data protection and human rights are two obvious areas where legislative reform may be necessary. It will also be important for those reforms to be accompanied by the necessary resources to handle the complex cases likely to be generated by AI. If legal protections and processes are enhanced without additional resources, the changes will be meaningless. It may also be necessary to consider establishing a regulatory authority for AI that could provide the governance, oversight and accountability specifically required by AI systems, and that could develop the necessary expertise. Challenging algorithmic decision-making will not be easy for ordinary Ontarians. They will need expert assistance and guidance for any challenge that goes beyond asking for an explanation or a reconsideration of the decision. A properly-resourced oversight body can provide this assistance and can develop necessary expertise to assist those who develop and implement AI.

3. AI that Serves all Ontarians

The overall goal for this commitment is to ensure that “Government use of AI reflects and protects the rights and values of Ontarians.” The values that are identified are equity and inclusion, as well as accountability.

As noted above, there is a tendency to deploy AI systems in ways that impact the most disadvantaged. AI systems are in use in the carceral context, in the administration of social benefits programs, and so on. The very choices as to where to start experimenting with AI are ones that have significant impact. In these contexts, the risks of harm may be quite significant, but the populations impacted may feel most disempowered when it comes to challenging decisions or seeking recourse. This part of the consultation document suggests as a potential action the need to “Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at an extremely high risk.” While there are likely contexts in which a risk-based approach would warrant an early ban on AI until the risks can be properly addressed, there should also be deliberation, beyond bans, about how to use AI in contexts in which individuals are vulnerable. This might mean not rushing to experiment with AI in these areas until we have built a more robust accountability and oversight framework. It may also mean going slowly in certain areas – using only AI-assisted decision making, for example, and carefully studying and evaluating particular use cases.

 

In closing I would like to note as well the very thoughtful and thorough work being done by the Law Commission of Ontario on AI and Governance, which has a particular focus on the public sector. I hope that any policy development being done in this area will make good use of the Law Commission’s work.

Published in Privacy

A lawsuit filed in Montreal this summer raises novel copyright arguments regarding AI-generated works. The plaintiffs are artist Amel Chamandy and Galerie NuEdge Fine Arts (which sells and exhibits her art). They are suing artist Adam Basanta for copyright and trademark infringement. (The trademark infringement arguments are not discussed in this post). Mr Basanta is a world-renowned new media artist who experiments with AI in his work. (See the Globe and Mail story by Chris Hannay on this lawsuit here).

According to a letter dated July 4, filed with the court, Mr. Basanta’s current project is “to explore connections between mass technologies, using those technologies themselves.” He explains his process in a video which can be found here. Essentially, he has created what he describes as an “art-factory” that randomly generates images without human input. The images created are then “analyzed by a series of deep-learning algorithms trained on a database of contemporary artworks in economic and institutional circulation” (see artist’s website). The images used in the database of artworks are found online. Where the analysis finds a match of more than 83% between one of the randomly generated images and an image in the database, the randomly generated image is presented online with the percentage match, the title of the painting it matches, and the artist’s name. This information is also tweeted out. The image of the painting that matches the AI image is not reproduced or displayed on the website or on Twitter.
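As a rough illustration of the reporting step just described (not Mr Basanta's actual pipeline), the sketch below compares a generated image against a database of artwork records and, where the score exceeds the 83% threshold, reports only the percentage, title, and artist name, without reproducing the matched artwork. The similarity function is a random placeholder standing in for the deep-learning comparison, and the records are hypothetical.

```python
import random

# Hypothetical artwork records: only descriptive metadata is kept for
# reporting; the matched artwork image itself is never displayed.
ARTWORK_DB = [
    {"title": "Example Painting", "artist": "Example Artist"},
]

def similarity(generated_image, artwork_record) -> float:
    """Placeholder for the deep-learning comparison; returns a score in [0, 1]."""
    return random.random()

def report_matches(generated_image, threshold: float = 0.83) -> list[str]:
    """Return report lines for every artwork whose score meets the threshold."""
    reports = []
    for record in ARTWORK_DB:
        score = similarity(generated_image, record)
        if score >= threshold:
            reports.append(
                f"{score:.2%} match: '{record['title']}' by {record['artist']}")
    return reports

print(report_matches(generated_image=None))
```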

One of Mr Basanta’s images was an 85.81% match with a painting by Ms Chamandy titled “Your World Without Paper”. This information was reported on Mr Basanta’s website and Twitter accounts along with the machine-generated image which resulted in the match.

The copyright infringement allegation is essentially that “the process used by the Defendant to compare his computer generated images to Amel Chamandy’s work necessarily required an unauthorized copy of such a work to be made.” (Statement of Claim, para 30). Ms Chamandy claims statutory damages of up to $20,000 for the commercial use of her work. Mr Basanta, for his part, argues that there is no display of Ms Chamandy’s work, and therefore no infringement.

AI has been generating much attention in the copyright world. AI algorithms need to be ‘trained’ and this training requires that they be fed a constant supply of text, data or images, depending upon the algorithm. Rights holders argue that the use of their works in this way without consent is infringement. The argument is that the process requires unauthorized copies to be fed into the system for algorithmic analysis. Debates have raged in the EU over a text-and-data mining exception to copyright infringement which would make this type of use of copyright protected works acceptable so long as it is for research purposes. Other uses would require clearance for a fee. There has already been considerable debate in Europe over whether research is a broad enough basis for the exception and what activities it would include. If a similar exception is to be adopted in Canada in the next round of copyright reform, we will face similar challenges in defining its boundaries.

Of course, the Chamandy case is not the conventional text and data mining situation. The copied image is not used to train algorithms. Rather, it is used in an analysis to assess similarities with another image. But such uses are not unknown in the AI world. Facial recognition technologies match live captured images with stored face prints. In this case, the third party artwork images are like the stored face prints. It is AI, just not the usual text and data mining paradigm. This should also raise questions about how to draft exceptions or to interpret existing exceptions to address AI-related creativity and innovation.

In the US, some argue that the ‘fair use’ exception to infringement is broad enough to support text and data mining uses of copyright protected works since the resulting AI output is transformative. Canada’s fair dealing provisions are less generous than U.S. fair use, but it is still possible to argue that text and data mining uses might be ‘fair’. Canadian law recognizes fair dealing for the purposes of research or private study, so if an activity qualifies as ‘research’ it might be fair dealing. The fairness of any dealing requires a contextual analysis. In this case the dealing might be considered fair since the end result only reports on similarities but does not reproduce any of the protected images for public view.

The problem, of course, with fair dealing defences is that each case turns on its own facts. The fact-dependent inquiry necessary for a fair dealing defense could be a major brake on innovation and creativity – either by dissuading uses out of fear of costly infringement claims or by driving up the cost of innovation by requiring rights clearance in order to avoid being sued.

The claim of statutory damages here is also interesting. Statutory damages were introduced in s. 38.1 of the Copyright Act to give plaintiffs an alternative to proving actual damage. For commercial infringements, statutory damages can range from $500 to $20,000 per work infringed; for non-commercial infringement the range is $100 to $5,000 for all infringements and all works involved. A judge’s actual award of damages within these ranges is guided by factors that include the need for deterrence, and the conduct of the parties. Ms Chamandy asserts that Mr Basanta’s infringement is commercial, even though the commercial dimension is difficult to see. It would be interesting to consider whether the enhancement of his reputation or profile as an artist or any increase in his ability to obtain grants would be considered “commercial”. Beyond the challenge of identifying what is commercial activity in this context, it opens a window into the potential impact of statutory damages in text and data mining activities. If such activities are considered to infringe copyright and are not clearly within an exception, then in Canada, a commercial text and data miner who consumes, say, 500,000 different images to train an algorithm might find themselves, even on the low end of the spectrum, liable for $250 million in statutory damages. Admittedly, the Act contains a clause that gives a judge the discretion to reduce an award of statutory damages if it is “grossly out of proportion to the infringement”. However, not knowing what a court might do or by how much the damages might be reduced creates uncertainty that can place a chill on innovation.
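For what it is worth, the arithmetic behind these figures can be set out in a few lines. The 500,000-image training set is the hypothetical used above, and the per-work amounts are the s. 38.1 range for commercial infringement.

```python
# Statutory damages per work infringed, commercial infringement (s. 38.1).
MIN_PER_WORK, MAX_PER_WORK = 500, 20_000
works_used = 500_000  # hypothetical number of images used to train an algorithm

print(f"Low end:  ${MIN_PER_WORK * works_used:,}")   # Low end:  $250,000,000
print(f"High end: ${MAX_PER_WORK * works_used:,}")   # High end: $10,000,000,000
```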

Although in this case, there may well be a good fair dealing defence, the realities of AI would seem to require either a clear set of exceptions to clarify infringement issues, or some other scheme to compensate creators which expressly excludes resort to statutory damages. The vast number of works that might be consumed to train an algorithm for commercial purposes makes statutory damages, even at the low end of the scale, potentially devastating and creates a chill.

 

Published in Copyright Law