
Artificial intelligence technologies have significant potential to impact human rights. Because of this, emerging AI laws make explicit reference to human rights. Already-deployed AI systems are raising human rights concerns – including bias and discrimination in hiring, healthcare, and other contexts; disruptions of democracy; enhanced surveillance; and hateful deepfake attacks. Well-documented human rights impacts also flow from the use of AI technologies by law enforcement and the state, and from the use of AI in armed conflicts.

Governments are aware that human rights issues with AI technologies must be addressed. Internationally, this is evident in declarations by the G7, UNESCO, and the OECD. It is also clear in emerging national and supranational regulatory approaches. For example, human rights are tackled in the EU AI Act, which not only establishes certain human-rights-based no-go zones for AI technologies, but also addresses discriminatory bias. The US’s NIST AI Risk Management Framework (a standard, not a law – but influential nonetheless) also addresses the identification and mitigation of discriminatory bias.

Canada’s Artificial Intelligence and Data Act (AIDA), proposed by the Minister of Innovation, Science and Economic Development (ISED), is currently at the committee stage as part of Bill C-27. The Bill’s preamble states that “Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. In its substantive provisions, AIDA addresses “biased output”, which it defines in terms of the prohibited grounds of discrimination in the Canadian Human Rights Act. AIDA imposes obligations on certain actors to assess and mitigate the risks of biased output in AI systems. The inclusion of these human rights elements in AIDA is positive, but they are also worth a closer look.

Risk Regulation and Human Rights

Requiring developers to take human rights into account in the design and development of AI systems is important, and certainly many private sector organizations already take seriously the problem of bias and the need to identify and mitigate it. After all, biased AI systems will be unable to perform properly, and may expose their developers to reputational harm and possibly legal action. However, such attention has been neither universal nor uniformly committed. Legislated requirements are thus necessary, and AIDA will provide them. AIDA creates obligations to identify and mitigate potential harms at the design and development stage, along with documentation and some transparency requirements. Enforcement of AIDA obligations can come through audits conducted or ordered by the new AI and Data Commissioner, as well as through administrative monetary penalties for non-compliance, although what this scheme will look like will depend very much on regulations that have yet to be developed. AIDA, however, has some important limitations when it comes to human rights.

Selective Approach to Human Rights

Although AIDA creates obligations around biased output, it does not address human rights beyond the right to be free from discrimination. Unlike the EU AI Act, for example, it contains no prohibited practices related to the use of AI in certain forms of surveillance. A revised Article 5 of the EU AI Act will prohibit real-time biometric surveillance by law enforcement agencies in publicly accessible spaces, subject to carefully limited exceptions. The untargeted scraping of facial images to build or expand facial recognition databases (as occurred with Clearview AI) is also prohibited. Emotion recognition technologies are banned in some contexts, as are some forms of predictive policing. Some applications that are not outright prohibited are categorized as high risk and have limits imposed on the scope of their use. These “no-go zones” reflect concerns over a much broader range of human rights and civil liberties than what we see reflected in Canada’s AIDA. It is small comfort to say that the Canadian Charter of Rights and Freedoms remains a backstop against government excess in the use of AI tools for surveillance or policing: ex ante AI regulation is meant to head off problems before they become manifest, and constitutional litigation is expensive, time-consuming and uncertain in outcome (just look at the 5-4 split in the recent R. v. Bykovets decision of the Supreme Court of Canada). No-go zones reflect limits on what society is prepared to tolerate; AIDA sets no such limits. Further, the military and intelligence services are expressly excluded from AIDA’s scope (as is the federal public service).

Privacy is an important human right, yet privacy rights are not part of the scope of AIDA. The obvious response is that such rights are dealt with under privacy legislation for the public and private sectors at the federal, provincial and territorial levels. There are two problems with this response. First, such privacy statutes deal principally with data protection (in other words, they govern the collection, use and disclosure of personal information); AIDA could have addressed surveillance more directly. After all, the EU has best-in-class data protection laws, yet it still places limits on the use of AI systems for certain types of surveillance activities. Second, privacy laws in Canada (and there are many of them) are, apart from Quebec’s, largely in a state of neglect and disrepair. Privacy commissioners at the federal, provincial and territorial levels have been issuing guidance on how they see their laws applying in the AI context, and findings and rulings in privacy complaints involving AI systems are starting to emerge. The commissioners are thoughtfully adapting existing laws to new circumstances, but there is no question that legislative reform is needed. In issuing its recent guidance on Facial Recognition and Mugshot Databases, the Office of the Information and Privacy Commissioner of Ontario specifically identified the need to issue the guidance in the face of legislative gaps and inaction that, “if left unaddressed, risk serious harms to individuals’ right to privacy and other fundamental human rights.”

Along with AIDA, Bill C-27 contains the Consumer Privacy Protection Act (CPPA), which will reform Canada’s private sector data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA). However, the CPPA contains only one AI-specific amendment – a somewhat tepid right to an explanation of automated decision-making. It does not, for example, address the data scraping issue at the heart of the Clearview AI investigation (the core findings of which the investigated company continues to dispute) – the very issue that prompted the articulation of a no-go zone for data scraping for certain purposes in the EU AI Act.

High Impact AI and Human Rights

AIDA will apply only to “high impact” AI systems. Among other things, such systems can adversely impact human rights. The original version of AIDA in Bill C-27 left the definition of “high impact” entirely to regulations (generating considerable and deserved criticism); the Minister of ISED has since proposed amendments to C-27 that set out a list of categories of “high impact” AI systems. While this list at least provides some insight into what the government is thinking, it creates new problems as well. It identifies several areas in which AI systems could have significant impacts on individuals, including in healthcare and in some court or tribunal proceedings. Also included on the list are the use of AI at all stages of the employment context and the use of AI in making decisions about who is eligible for services and at what price. Left off the list, however, is the use of AI systems – already occurring – to determine who is selected as a tenant for rental accommodation. Such tools have extremely high impact. Yet, since residential tenancies are interests in land, and not services, they are simply not captured by the current “high impact” categories. This is surely an oversight – yet it is one that highlights the rather slapdash construction of the AIDA and its proposed amendments. As a further example, a high-impact category addressing the use of biometrics to assess an individual’s behaviour or state of mind could be interpreted to capture affect recognition systems or the analysis of social media communications, but this is less clear than it should be. It also raises the question of whether the best approach, from a human rights perspective, is to regulate such systems as high impact, or whether limits need to be placed on their use and deployment.

Of course, a key problem is that this bill is housed within ISED. It is not a centrally developed bill that takes a broader approach to the federal government and its powers. Under AIDA, medical devices are excluded from the category of “high impact” uses of AI in the healthcare context because it is Health Canada that will regulate AI-enabled medical devices, and ISED must avoid treading on its toes. Perhaps ISED also seeks to avoid encroaching on the mandates of the Minister of Justice or the Minister of Public Safety. This may help explain some of the crabbed and clunky framing of AIDA compared to the EU AI Act. It does, however, raise the question of why Canada chose this route – adopting a purportedly comprehensive risk-management framework housed under the constrained authority of the Minister of ISED.

Such an approach is inherently flawed. As discussed above, AIDA is limited in the human rights it is prepared to address, and it raises concerns about how human rights will be interpreted and framed. On the interpretation side, the incorporation of the Canadian Human Rights Act’s definition of discrimination into AIDA, combined with ISED’s power to interpret and apply the proposed law, will give ISED interpretive authority over the definition of discrimination without the accompanying expertise of the Canadian Human Rights Commission. Further, it is not clear that ISED is a place for expansive interpretations of human rights; human rights are not a core part of its mandate – although fostering innovation is.

All of this should leave Canadians with some legitimate concerns. AIDA may well be passed into law – and it may prove useful in the better governance of AI. But when it comes to human rights, it has very real limitations. AIDA cannot be allowed to end the conversation around human rights and AI at the federal level – nor at the provincial level. Much work remains to be done.


This is the third in my series of posts on the Artificial Intelligence and Data Act (AIDA) found in Bill C-27, which is part of a longer series on Bill C-27 generally. Earlier posts on the AIDA have considered its purpose and application, and regulated activities. This post looks at the harms that the AIDA is designed to address.

The proposed Artificial Intelligence and Data Act (AIDA), which is the third part of Bill C-27, sets out to regulate ‘high-impact’ AI systems. The concept of ‘harm’ is clearly important to this framework. Section 4(b) of the AIDA states that a purpose of the legislation is “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”.

Under the AIDA, persons responsible for high-impact AI systems have an obligation to identify, assess, and mitigate risks of harm or biased output (s. 8). Those persons must also notify the Minister “as soon as feasible” if a system for which they are responsible “results or is likely to result in material harm”. There are also a number of oversight and enforcement functions that are triggered by harm or a risk of harm. For example, if the Minister has reasonable grounds to believe that a system may result in harm or biased output, he can demand the production of certain records (s. 14). If there is a serious risk of imminent harm, the Minister may order a person responsible to cease using a high impact system (s. 17). The Minister is also empowered to make public certain information about a system where he believes that there is a serious risk of imminent harm and the publication of the information is essential to preventing it (s. 28). Elevated levels of harm are also a trigger for the offence in s. 39, which involves “knowing or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property”.

‘Harm’ is defined in s. 5(1) to mean:

(a) physical or psychological harm to an individual;

(b) damage to an individual’s property; or

(c) economic loss to an individual.

The term “individual” in this definition places an important limit on the scope of the AIDA. First, it is unlikely that the term “individual” includes a corporation. Typically, the word “person” is considered to include corporations, and the word “person” is used in this sense in the AIDA. This suggests that “individual” is meant to have a different meaning. The federal Interpretation Act is silent on the issue. It is a fair interpretation of the definition of “harm” that “individual” is not the same as “person”, and means a natural (human) person. The French version uses the term “individu”, and not “personne”. The harms contemplated by this legislation are therefore harms to individuals and not to corporations.

Defining harm in terms of individuals has other ramifications. The AIDA defines high-impact AI systems in terms of their impacts on individuals, which, importantly, excludes groups and communities. It also focuses very significantly on harms that are typically considered quantifiable, and uses language that suggests quantifiability (economic loss, damage to property, physical or psychological harm). Some important harms may be difficult to establish or to quantify. For example, class action lawsuits relating to significant data breaches have begun to wash up on the beach of lost causes due to the impossibility of proving material loss – either because, although thousands may have been impacted, the individual losses are impossible to quantify, or because it is impossible to prove a causal link between very real identity theft and that particular data breach. Consider an AI system that manipulates public opinion through an algorithm that drives content to individuals based on its shock value rather than its truth. Say this happens during a pandemic and it convinces people that they should not get vaccinated or take other recommended public health measures. Say some people die because they were misled in this way. Say other people die because they were exposed to infected people who were misled in this way. How does one prove the causal link between the physical harm of injury or death of an individual and the algorithm? What if there is an algorithm that manipulates voter sentiment in a way that changes the outcome of an election? What is the quantifiable economic loss or psychological harm to any individual? How could causation be demonstrated? The harm, once again, is collective.

The EU AI Act has also been criticized for focusing on individual harm, but the wording of that law is still broader than that in the AIDA. The EU AI Act refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights of persons”. This at least introduces a more collective dimension, and it avoids the emphasis on quantifiability.

The federal government’s own Directive on Automated Decision-Making (DADM), which is meant to guide the development of AI used in public sector automated decision systems (ADS), also takes a broader approach to impact. In assessing the potential impact of an ADS, the DADM takes into account: “the rights of individuals or communities”, “the health or well-being of individuals or communities”, “the economic interests of individuals, entities, or communities”, and “the ongoing sustainability of an ecosystem”.

With its excessive focus on individuals, the AIDA is simply tone-deaf to the growing global understanding of the collective harm caused by the use of human-derived data in AI systems.

One response of the government might be to point out that the AIDA is also meant to apply to “biased output”. Biased output is defined in the AIDA as:

content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (s. 5(1))

The argument here will be that the AIDA will also capture discriminatory biases in AI. However, the definition’s requirement that the adverse differentiation be “in relation to an individual” once again returns the focus to individuals, rather than groups. It can be very hard for an individual to demonstrate that a particular decision discriminated against them (especially if the algorithm is obscure). In any event, biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination.

It is also important to note that the definition of “harm” does not include “biased output”. While the terms are used in conjunction in some cases (for example, in s. 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute but not in others, a judge interpreting the statute might presume that where only one of the terms is used, only that term is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high-impact system to cease using it or making it available if there is a “serious risk of imminent harm”. Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm”. In both cases, the defined term “harm” is used, but not “biased output”.

The AIDA’s goal of protecting against harmful AI is both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.

