Teresa Scassa - Blog


 

Note: The following is my response to the call for submissions on the recommendations following the third review of Canada’s Directive on Automated Decision-Making. Comments are due by June 30, 2022. If you are interested in commenting, please consult the Review Report and the Summary of Key Issues and Proposed Amendments. Comments can be sent by e-mail to the address provided in the call for submissions.

 

The federal Directive on Automated Decision-Making (DADM) and its accompanying Algorithmic Impact Assessment tool (AIA) are designed to provide governance for the adoption and deployment of automated decision systems (ADS) by Canada’s federal government. Governments are increasingly looking to ADS in order to speed up routine decision-making processes and to achieve greater consistency in decision-making. At the same time, there are reasons to be cautious. Automated decision systems carry risks of incorporating and replicating discriminatory bias. They may also lack the transparency required of government decision-making, particularly where important rights or interests are at stake. The DADM, which has been in effect since April 2019 (with compliance mandatory no later than April 2020), sets out a series of obligations related to the design and deployment of automated decision-making systems. The extent of the obligations depends upon a risk assessment, and the AIA is the tool by which the level of risk of the system is assessed.

Given that this is a rapidly evolving area, the DADM provides that it will be reviewed every six months. It is now in its third review. The first two reviews led to the clarification of certain obligations in the DADM and to the development of guidelines to aid in its interpretation. This third review proposes a number of more substantive changes. This note comments on some of these changes and proposes an issue for future consideration.

Clarify and Broaden the Scope

A key recommendation in this third round of review relates to the scope of the DADM. Currently, the DADM applies only to ‘external’ services of government – in other words services offered to individuals or organizations by government. It does not apply internally. This is a significant gap when one considers the expanding use of ADS in the employment context. AI-enabled decision systems have been used in hiring processes, and they can be used to conduct performance reviews, and to make or assist in decision-making about promotions and internal workforce mobility. The use of AI tools in the employment context can have significant impacts on the lives and careers of employees. It seems a glaring oversight to not include such systems in the governance regime for ADM. The review team has recommended expanding the scope of the DADM to include internal as well as external services. They note that this move would also extend the DADM to any ADS used for “grants and contributions, awards and recognition, and security screening” (Report at 11). This is an important recommendation and one which should be implemented.

The review team also recommends a clarification of the language regarding the application of the DADM. Currently it puts within its scope “any system, tool, or statistical models used to recommend or make an administrative decision about a client”. Noting that “recommend” could be construed as including only those systems that recommend a specific outcome, as opposed to systems that process information on behalf of a decision-maker, the team proposes replacing “recommend” with “support”. This too is an important recommendation which should be implemented.

Periodic Reviews

Currently the DADM provides for its review every six months. This was always an ambitious review schedule. No doubt it was motivated by the fact that the DADM was a novel tool designed to address a rapidly emerging and evolving technology with potentially significant implications. The idea was to ensure that it was working properly and to promptly address any issues or problems. In this third review, however, the team recommends changing the review period from six months to two years. The rationale is that the six-month timetable makes it challenging for the team overseeing the DADM (which is constantly in a review cycle), and makes it difficult to properly engage stakeholders. They also cite the need for the DADM to “display a degree of stability and reliability, enabling federal institutions and the clients they serve to plan and act with a reasonable degree of confidence.” (Report at 12).

This too is a reasonable recommendation. While more frequent reviews were important in the early days of the DADM and the AIA, reviews every six months seem unduly burdensome once initial hiccups are resolved. A six-month review cycle engages the team responsible for the DADM in a constant cycle of review, which may not be the best use of resources. The proposed two-year review cycle would allow more experience to be gained with the DADM and AIA, enabling a more substantive assessment of any issues that arise. Further, a two-year window is much more realistic if stakeholders are to be engaged in a meaningful way. Being asked to comment on reports and proposed changes every six months seems burdensome for anyone – including an already stretched civil society sector. The review document suggests that Canada’s Chief Information Officer could request completion of an off-cycle review if the need arose, leaving room for the possibility that a more urgent issue could be addressed outside of the two-year review cycle.

Data Model and Governance

The third review also proposes amendments to provide for what it describes as a more ‘holistic’ approach to data governance. Currently, the DADM focuses on data inputs – in other words, on assessing the quality, relevance and timeliness of the data used in the model. The review report recommends the addition of an obligation to establish “measures to ensure that data used and generated by the Automated Decision System are traceable, protected, and appropriately retained and disposed of in accordance with the Directive on Service and Digital, Directive on Privacy Practices, and Directive on Security Management”. It also recommends amendments to extend testing and assessment beyond data to the underlying models, so that both data and algorithms can be assessed for bias or other problems. These are positive amendments which should be implemented.

Explanation

The review report notes that while the DADM requires “meaningful explanations” of how automated decisions were reached, and while guidelines provide some detail as to what is meant by explainability, there is still uncertainty about what explainability entails. The Report recommends adding language in Appendix C, in relation to impact assessment, that will set out the information necessary for ‘explainability’. This includes:

  • The role of the system in the decision-making process;
  • The training and client data, their source and method of collection, if applicable;
  • The criteria used to evaluate client data and the operations applied to process it; and
  • The output produced by the system and any relevant information needed to interpret it in the context of the administrative decision.

Again, this recommendation should be implemented.

Reasons for Automation

The review would also require those developing ADM systems for government to specifically identify why it was considered necessary or appropriate to automate the existing decision-making process. The Report refers to a “clear and demonstrable need”. This is an important additional criterion as it requires transparency as to the reasons for automation – and that these reasons go beyond the fact that vendor-demonstrated technologies look really cool. As the authors of the review note, requiring justification also helps to assess the parameters of the system adopted – particularly if the necessity and proportionality approach favoured by the Office of the Privacy Commissioner of Canada is adopted.

Transparency

The report addresses several issues that are relevant to the transparency dimensions of the DADM and the accompanying AIA. Transparency is an important element of the DADM, and it is key both to the legitimacy of government adoption of ADS and to their ongoing use. Without transparency in government decision-making that impacts individuals, organizations and communities, there can be no legitimacy. There are a number of transparency elements built into the DADM. For example, there are requirements to provide notice of automated decision systems, a right to an explanation of decisions that is tailored to the impact of the decision, and a requirement not just to conduct an AIA, but to publish the results. The review report includes a number of recommendations to improve transparency. These include a recommendation to clarify when an AIA must be completed and released, greater transparency around peer review results, more explicit criteria for explainability, and adding additional questions to the AIA. These are all welcome recommendations.

At least one of these recommendations may go some way to allaying my concerns with the system as it currently stands. The documents accompanying the report (slide 3 of the summary document) indicate that there are over 300 AI projects across 80% of federal institutions. However, at the time of writing, only four AIAs had been published on the open government portal. There is clearly a substantial lag between the development of these systems and the release of the AIAs. The recommendation that an AIA be not just completed but also released before a system goes into production is therefore of great importance to ensuring transparency.

It may be that some of the discrepancy in the numbers is attributable to the fact that the DADM only became mandatory in April 2020 and did not apply to projects already underway. For transparency’s sake, I would also recommend that a public register of ADS be created that contains basic information about all government ADS. This could include their existence and function, as well as some transparency regarding explainability, the reasons for adoption, and the measures taken to review, assess and ensure the reliability of these systems. Although it is too late to perform a proactive AIA for these pre-existing systems, there should be some form of reporting tool that can be used to provide important information to the public for transparency purposes.

Consideration for the Future

The next review of the DADM and the AIA should also involve a qualitative assessment of the AIAs that have been published to date. If the AIA is to be a primary tool not just for assessing ADS but for providing transparency about them, then the AIAs themselves need to be of high quality. Currently there is a requirement to conduct an AIA for a system within the scope of the DADM – but there is no explicit requirement for it to be of a certain quality. A quick review of the four AIAs currently available online shows considerable variation in the quality of the assessments. For example, the project description for one such system is an unhelpful 9-word sentence that does not make clear how AI is actually part of the project. This is in contrast to another that describes the project in a 14-line paragraph. These are clearly highly divergent in terms of the level of clarity and detail provided.

The first of these two AIAs also seems to contain contradictory answers to the AIA questionnaire. For example, the answer to the question “Will the system only be used to assist a decision-maker” is ‘yes’. Yet the answer to the question “Will the system be replacing a decision that would otherwise be made by a human” is also ‘yes’. Either one of these answers is incorrect, or the answers do not capture how the respondent interpreted these questions. These are just a few examples. It is easy to see how use of the AIA tool can range from engaged to pro forma.

The obligations imposed on departments with respect to ADS vary depending upon the risk assessment score. This score is evaluated through the questionnaire, and one of the questions asks “Are clients in this line of business particularly vulnerable?” In the AIA for an access to information (ATIP) tool, the answer given to this question is “no”. Of course, the description of the tool is so brief that it is hard to get a sense of how it functions. However, I would think that the clientele for an ATIP portal would be quite diverse. Some users will be relatively sophisticated (e.g., journalists or corporate users). Others will be inexperienced. For some, the information sought may be highly important: they may be seeking access to government information to right a perceived wrong, or to find out more about a situation that adversely affects them. In my view, this assessment of the vulnerability of the clients is not necessarily accurate. Yet the answer provided contributes to a lower overall score and thus a lower level of accountability. My recommendation for the next round of reviews is to assess the overall effectiveness of the AIA tool in terms of the quality and accuracy of the information and answers provided.
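To make this scoring dynamic concrete, here is a minimal, purely hypothetical sketch of how questionnaire answers might be weighted, summed into a raw score, and mapped to an impact level that determines the applicable obligations. The question names, weights and thresholds below are invented for illustration and do not reproduce the actual AIA methodology; the point is simply that a single answer, such as “no” to the vulnerability question, can shift a system into a lower impact tier.

```python
# Hypothetical illustration only: the question names, weights, and impact-level
# thresholds are invented for explanatory purposes and do not reproduce the
# actual Algorithmic Impact Assessment scoring methodology.

ANSWER_WEIGHTS = {
    ("clients_particularly_vulnerable", "yes"): 3,
    ("clients_particularly_vulnerable", "no"): 0,
    ("system_replaces_human_decision", "yes"): 4,
    ("system_replaces_human_decision", "no"): 1,
    ("decision_reversible", "no"): 3,
    ("decision_reversible", "yes"): 0,
}

# Illustrative impact tiers: higher scores trigger more onerous obligations
# (e.g. peer review, human intervention, more detailed notice and explanation).
IMPACT_LEVELS = [(0, "Level I"), (4, "Level II"), (8, "Level III"), (12, "Level IV")]


def impact_level(answers: dict[str, str]) -> tuple[int, str]:
    """Sum the weights of the answers given and map the total to an impact level."""
    score = sum(ANSWER_WEIGHTS.get((q, a), 0) for q, a in answers.items())
    level = "Level I"
    for threshold, label in IMPACT_LEVELS:
        if score >= threshold:
            level = label
    return score, level


# Answering "no" to the vulnerability question yields a score of 7 (Level II in
# this sketch); answering "yes" would yield 10 (Level III) and heavier obligations.
print(impact_level({
    "clients_particularly_vulnerable": "no",
    "system_replaces_human_decision": "yes",
    "decision_reversible": "no",
}))
```

In a structure like this, the accuracy of each answer matters: a single understated response lowers the overall score and, with it, the level of accountability attached to the system.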

I note that the review report recommends adding questions to the AIA in order to improve the tool. Quite a number of these call for free-text answers, which must be drafted by the party completing the AIA. Proposed questions include ones relating to the user needs to be addressed, how the system will meet those needs, and the effectiveness of the system in meeting those needs, along with reasons for this assessment. Proposed questions would also ask whether non-AI-enabled solutions were considered and, if so, why AI was chosen as the preferred method. A further question asks what the consequences would be of not deploying the system. This additional information is important both to assessing the tool and to providing transparency. However, as noted above, the answers will need to be clear and sufficiently detailed in order to be of any use.

The AIA is crucial to assessing the level of obligation and to ensuring transparency. If AIAs are pro forma or excessively laconic, then no matter how finely tuned the DADM is, it will not achieve the desired results. The review committee’s recommendation that plain language summaries of peer review assessments also be published will provide a means of assessing the quality of the AIAs, and thus it is an important recommendation to strengthen both transparency and compliance.

A final issue that I would like to address is that, to achieve transparency, people will need to be able to easily find and access the information about the systems. Currently, AIAs are published on the Open Government website. There, they are listed alphabetically by title. This is not a huge problem right now, since there are only four of them. As more are published, it would be helpful to have a means of organizing them by department or agency, or by other criteria (including risk/impact score) to improve their findability and usability. Further, it will be important that any peer review summaries are linked to the appropriate AIAs. In addition to publication on the open government portal, links to these documents should be made available from department, agency or program websites. It would also be important to have an index or registry of AI in the federal sector – including not just those projects covered by the DADM, but also those in production prior to the DADM’s coming into force.

[Note: I have written about the DADM and the AIA from an administrative law perspective. My paper, which looks at the extent to which the DADM addresses administrative law concerns regarding procedural fairness, can be found here.]

Published in Privacy

 

It has been quite a while since I posted to my blog. The reason has simply been a crushing workload that has kept me from writing anything that did not have an urgent deadline! In the meantime, so much has been going on in terms of digital and data law and policy in Canada and around the world. I will try to get back on track!

Artificial intelligence (AI) has been garnering a great deal of attention globally – for its potential to drive innovation, its capacity to solve urgent challenges, and its myriad applications across a broad range of sectors. In an article that is forthcoming in the Canadian Journal of Law and Technology, Bradley Henderson, Colleen Flood and I examine issues of algorithmic and data bias leading to discrimination in the healthcare context. AI technologies have tremendous potential across the healthcare system – AI innovation can improve workflows, enhance diagnostics, accelerate research and refine treatment. Yet at the same time, AI technologies bring with them many concerns, among them bias and discrimination.

Bias can take many forms. In our paper, we focus on those manifestations of bias that can lead to discrimination of the kind recognized in human rights legislation and the Charter. Discrimination can arise from flawed assumptions coded into algorithms, from adaptive AI that makes its own correlations, or from unrepresentative data (or from a combination of these).

There are some significant challenges when it comes to the data used to train AI algorithms. Available data may reflect existing disparities and discrimination within the healthcare system. For example, some communities may be underrepresented in the data because of a lack of adequate access to healthcare, or because of a lack of trust in the healthcare system that tends to keep them away until health issues become acute. Lack of prescription drug coverage or access to paid sick leave may also affect when and how people access health care services. Racial or gender bias in how symptoms or concerns are recorded, or in how illness is diagnosed, can also affect the quality and representativeness of existing stores of data. AI applications developed and trained on data from US-based hospitals may reflect the socio-economic biases that impact access to health care in the US, and the extent to which they are generalizable to the Canadian population or to sub-populations within it is questionable. In some cases, race or ethnicity may be important markers for understanding diseases and how they manifest themselves, but these data may be lacking.

There are already efforts afoot to ensure better access to high-quality health data for research and innovation in Canada, and our paper discusses some of these. Addressing data quality and data gaps is certainly one route to tackling bias and discrimination in AI. Our paper also looks at some of the legal and regulatory mechanisms available. On the legal front, we note that there are some recourses available where things go wrong, including human rights complaints, lawsuits for negligence, or even Charter challenges. However, litigating the harms caused by algorithms and data is likely to be complex, expensive, and fraught with difficulty. It is better by far to prevent harms than to push a system to improve itself after costly litigation. We consider the evolving regulatory landscape in Canada to see what approaches are emerging to avoid or mitigate harms. These include regulatory approaches for AI-enabled medical devices and for advanced therapeutic products. However, these regimes focus on harms to human health and would not apply to AI tools developed to improve access to healthcare, manage workflows, conduct risk assessments, and so on. There are regulatory gaps, and we discuss some of these. The paper also makes recommendations regarding improving access to better data for research and innovation, with the necessary accompanying enhancements to privacy laws and data governance regimes to ensure the protection of the public.

One of the proposals made in the paper is that bias and discrimination in healthcare-related AI applications should be treated as a safety issue, bringing a broader range of applications under Health Canada regulatory regimes. We also discuss lifecycle regulatory approaches (as opposed to one-off approvals) and the provision of warnings about data gaps and limitations. We also consider enhanced practitioner licensing and competency frameworks, requirements at the procurement stage, and certification standards and audits. We call for reform of human rights legislation, which is currently not well adapted to the AI context.

In many ways, this paper is just a preliminary piece. It lays out the landscape and identifies areas where there are legal and regulatory gaps and a need for both law reform and regulatory innovation. The paper is part of the newly launched Machine MD project at uOttawa, which is funded by the Canadian Institutes of Health Research and will run for the next four years.

The full pre-print text of the article can be found here.

Published in Privacy

On June 13, 2018, the Supreme Court of Canada handed down a decision that may have implications for how issues of bias in algorithmic decision-making in Canada will be dealt with. Ewert v. Canada is the result of an eighteen-year struggle by Mr. Ewert, a federal inmate and Métis man, to challenge the use of certain actuarial risk-assessment tools to make decisions about his carceral needs and about his risk of recidivism. His concerns, raised in his initial grievance in 2000, have been that these tools were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12) After his grievances went nowhere, he eventually sought a declaration in Federal Court that the tests breached his rights to equality and to due process under the Canadian Charter of Rights and Freedoms, and that they were also a breach of the Corrections and Conditional Release Act (CCRA), which requires the Correctional Service of Canada (CSC) to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” (s. 24(1)). Although the Charter arguments were unsuccessful, the majority of the Supreme Court of Canada agreed with the trial judge that the CSC had breached its obligations under the CCRA. Two justices in dissent agreed with the Federal Court of Appeal that neither the Charter nor the CCRA had been breached.

Although this is not explicitly a decision about ‘algorithmic decision-making’ as the term is used in the big data and artificial intelligence (AI) contexts, the basic elements are present. An assessment tool developed and tested using a significant volume of data is used to generate predictive data to aid in decision-making in individual cases. The case also highlights a common concern in the algorithmic decision-making context: that either the data used to develop and train the algorithm, or the assumptions coded into the algorithm, create biases that can lead to inaccurate predictions about individuals who fall outside the dominant group that has influenced the data and the assumptions.

As such, my analysis is not about the particular circumstances of Mr. Ewert, nor is it about the impact of the judgement within the correctional system in Canada. Instead, I parse the decision to see what it reveals about how courts might approach issues of bias in algorithmic decision-making, and what impact the decision may have in this emerging context.

1. ‘Information’ and ‘accuracy’

A central feature of the decision of the majority in Ewert is its interpretation of s. 24(1) of the CCRA. To repeat the wording of this section, it provides that “The Service shall take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” [My emphasis] In order to conclude that this provision was breached, it was necessary for the majority to find that Mr. Ewert’s test results were “information” within the meaning of this section, and that the CSC had not taken all reasonable steps to ensure its accuracy.

The dissenting justices took the view that when s. 24(1) referred to “information” and to the requirement to ensure its accuracy, the statute included only the kind of personal information collected from inmates, information about the offence committed, and a range of other information specified in s. 23 of the Act. The dissenting justices preferred the view of the CSC that “information” meant ““primary facts” and not “inferences or assessments drawn by the Service”” (at para 107). The majority disagreed. It found that when Parliament intended to refer to specific information in the CCRA it did so. When it used the term “information” in an unqualified way, as it did in s. 24(1), it had a much broader meaning. Thus, according to the majority, “the knowledge the CSC might derive from the impugned tools – for example, that an offender has a personality disorder or that there is a high risk that an offender will violently reoffend – is “information” about that offender” (at para 33). This interpretation of “information” is an important one. According to the majority, profiles and predictions applied to a person are “information” about that individual.

In this case, the Crown had argued that s. 24(1) should not apply to the predictive results of the assessment tools because it imposed an obligation to ensure that “information” is “as accurate” as possible. It argued that the term “accurate” was not appropriate to the predictive data generated by the tools. Rather, the tools “may have “different levels of predictive validity, in the sense that they predict poorly, moderately well or strongly””. (at para 43) The dissenting justices were clearly influenced by this argument, finding that: “a psychological test can be more or less valid or reliable, but it cannot properly be described as being “accurate” or “inaccurate”.” (at para 115) According to the dissent, all that was required was that accurate records of an inmate’s test scores must be maintained – not that the tests themselves must be accurate. The majority disagreed. In its view, the concept of accuracy could be adapted to different types of information. When applied to psychological assessment tools, “the CSC must take steps to ensure that it relies on test scores that predict risks strongly rather than those that do so poorly.” (at para 43)

It is worth noting that the Crown also argued that the assessment tools were important in decision-making because “the information derived from them is objective and thus mitigates against bias in subjective clinical assessments” (at para 41). While the underlying point is that the tools might produce more objective assessments than individual psychologists who might bring their own biases to an assessment process, the use of the term “objective” to describe the output is troubling. If the tools incorporate biases, or are not appropriately sensitive to cultural differences, then the output is ‘objective’ in only a very narrow sense of the word, and the use of the word masks underlying issues of bias. Interestingly, the majority took the view that if the tools are considered useful “because the information derived from them can be scientifically validated. . . this is all the more reason to conclude that s. 24(1) imposes an obligation on the CSC to take reasonable steps to ensure that the information is accurate.” (at para 41)

It should be noted that while this discussion all revolves around the particular wording of the CCRA, Principle 4.6 of Schedule I of the Personal Information Protection and Electronic Documents Act (PIPEDA) contains the obligation that: “Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.” Further, s. 6(2) of the Privacy Act provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” A similar interpretation of “information” and “accuracy” in these statutes could be very helpful in addressing issues of bias in algorithmic decision-making more broadly.

2. Reasonable steps to ensure accuracy

According to the majority, “[t]he question is not whether the CSC relied on inaccurate information, but whether it took all reasonable steps to ensure that it did not.” (at para 47). This distinction is important – it means that Mr. Ewert did not have to show that his actual test scores were inaccurate, something that would be quite burdensome for him to do. According to the majority, “[s]howing that the CSC failed to take all reasonable steps in this respect may, as a practical matter, require showing that there was some reason for the CSC to doubt the accuracy of information in its possession about an offender.” (at para 47, my emphasis)

The majority noted that the trial judge had found that “the CSC had long been aware of concerns regarding the possibility of psychological and actuarial tools exhibiting cultural bias.” (at para 49) The concerns had led to research being carried out in other jurisdictions about the validity of the tools when used to assess certain other cultural minority groups. The majority also noted that the CSC had carried out research “into the validity of certain actuarial tools other than the impugned tools when applied to Indigenous offenders” (at para 49) and that this research had led to those tools no longer being used. However, in this case, in spite of concerns, the CSC had taken no steps to assess the validity of the tools, and it continued to apply them to Indigenous offenders.

The majority noted that the CCRA, which set out guiding principles in s. 4, specifically required correctional policies and practices to respect cultural, linguistic and other differences and to take into account “the special needs of women, aboriginal peoples, persons requiring mental health care and other groups” (s. 4(g)). The majority found that this principle “represents an acknowledgement of the systemic discrimination faced by Indigenous persons in the Canadian correctional system.” (at para 53) As a result, it found it incumbent on the CSC to give “meaningful effect” to this principle “in performing all of its functions”. In particular, the majority found that “this provision requires the CSC to ensure that its practices, however neutral they may appear to be, do not discriminate against Indigenous persons.” (at para 54) The majority observed that although it has been 25 years since this principle was added to the legislation, “there is nothing to suggest that the situation has improved in the realm of corrections” (at para 60). It expressed dismay that “the gap between Indigenous and non-Indigenous offenders has continued to widen on nearly every indicator of correctional performance”. (at para 60) It noted that “Although many factors contributing to the broader issue of Indigenous over-incarceration and alienation from the criminal justice system are beyond the CSC’s control, there are many matters within its control that could mitigate these pressing societal problems. . . Taking reasonable steps to ensure that the CSC uses assessment tools that are free of cultural bias would be one.” (at para 61) [my emphasis]

According to the majority of the Court, therefore, what is required by s. 24(1) of the CCRA is for the CSC to carry out research into whether and to what extent the assessment tools it uses “are subject to cross-cultural variance when applied to Indigenous offenders.” (at para 67) Any further action would depend on the results of the research.

What is interesting here is that the onus is placed on the CSC (influenced by the guiding principles in the CCRA) to take positive steps to verify the validity of the assessment tools on which it relies. The Court does not specify who is meant to carry out the research in question, what standards it should meet, or how extensive it should be. These are important issues. It should be noted that discussions of algorithmic bias often consider solutions involving independent third-party assessment of the algorithms or the data used to develop them.

3. The Charter arguments

Two Charter arguments were raised by counsel for Mr. Ewert. The first was a s. 7 due process argument. Counsel for Mr. Ewert argued that reliance on the assessment tools violated his right to liberty and security of the person in a manner that was not in accordance with the principles of fundamental justice. The tools were argued to fall short of the principles of fundamental justice because of their arbitrariness (lacking any rational connection to the government objective) and overbreadth. The court was unanimous in finding that reliance on the tools was not arbitrary, stating that “The finding that there is uncertainty about the extent to which the tests are accurate when applied to Indigenous offenders is not sufficient to establish that there is no rational connection between reliance on the tests and the relevant government objective.” (at para 73) Without further research, the extent and impact of any cultural bias could not be known.

Mr. Ewert also argued that the results of the use of the tools infringed his right to equality under s. 15 of the Charter. The Court gave little time or attention to this argument, finding that there was not enough evidence to show that the tools had a disproportionate impact on Indigenous inmates when compared to non-Indigenous inmates.

The Charter is part of the Constitution and applies only to government action. There are many instances in which governments may come to rely upon algorithmic decision-making. While concerns might be raised about bias and discriminatory impacts from these processes, this case demonstrates the challenge faced by those who would raise such arguments. The decision in Ewert suggests that in order to establish discrimination, it will be necessary either to demonstrate discriminatory impacts or effects, or to show how the algorithm itself and/or the data used to develop it incorporate biases or discriminatory assumptions. Establishing any of these things will impose a significant evidentiary burden on the party raising the issue of discrimination. Even where the Charter does not apply and individuals must rely upon human rights legislation, establishing discrimination with complex (and likely inaccessible or non-transparent) algorithms and data will be highly burdensome.

Concluding thoughts

This case raises important and interesting issues that are relevant in algorithmic decision-making of all kinds. The result obtained in this case favoured Mr. Ewert, but it should be noted that it took him 18 years to achieve this result, and he required the assistance of a dedicated team of lawyers. There is clearly much work to do to ensure that fairness and transparency in algorithmic decision-making is accessible and realizable.

Mr. Ewert’s success was ultimately based, not upon human rights legislation or the Charter, but upon federal legislation which required the keeping of accurate information. As noted above, PIPEDA and the Privacy Act impose a similar requirement on organizations that collect, use or disclose personal information to ensure the accuracy of that information. Using the interpretive approach of the Supreme Court of Canada in Ewert v. Canada, this statutory language may provide a basis for supporting a broader right to fair and unbiased algorithmic decision-making. Yet, as this case also demonstrates, it may be challenging for those who feel they are adversely impacted to make their case, absent evidence of long-standing and widespread concerns about particular tests in specific contexts.

 

Published in Privacy
