Teresa Scassa - Blog

Tuesday, 19 March 2024 09:41

AI, Human Rights, and Canada's Proposed AI and Data Act

Written by Teresa Scassa

Artificial intelligence technologies have significant potential to impact human rights. Because of this, emerging AI laws make explicit reference to human rights. Already-deployed AI systems are raising human rights concerns – including bias and discrimination in hiring, healthcare, and other contexts; disruptions of democracy; enhanced surveillance; and hateful deepfake attacks. Well-documented human rights impacts also flow from the use of AI technologies by law enforcement and the state, and from the use of AI in armed conflicts.

Governments are aware that human rights issues with AI technologies must be addressed. Internationally, this is evident in declarations by the G7, UNESCO, and the OECD. It is also clear in emerging national and supranational regulatory approaches. For example, human rights are tackled in the EU AI Act, which not only establishes certain human-rights-based no-go zones for AI technologies, but also addresses discriminatory bias. The US’s NIST AI Risk Management Framework (a standard, not a law – but influential nonetheless) also addresses the identification and mitigation of discriminatory bias.

Canada’s Artificial Intelligence and Data Act (AIDA), proposed by the Minister of Industry, Science and Economic Development (ISED), is currently at the committee stage as part of Bill C-27. The Bill’s preamble states that “Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. In its substantive provisions, AIDA addresses “biased output”, which it defines in terms of the prohibited grounds of discrimination in the Canadian Human Rights Act. AIDA imposes obligations on certain actors to assess and mitigate the risks of biased output in AI systems. The inclusion of these human rights elements in AIDA is positive, but they are also worth a closer look.

Risk Regulation and Human Rights

Requiring developers to take human rights into account in the design and development of AI systems is important, and certainly many private sector organizations already take seriously the problems of bias and the need to identify and mitigate it. After all, biased AI systems will be unable to perform properly, and may expose their developers to reputational harm and possibly legal action. However, such attention has not been universal, and has been addressed with different degrees of commitment. Legislated requirements are thus necessary, and AIDA will provide these. AIDA creates obligations to identify and mitigate potential harms at the design and development stage, and there are additional documentation and some transparency requirements. The enforcement of AIDA obligations can come through audits conducted or ordered by the new AI and Data Commissioner, and there is also the potential to use administrative monetary penalties to punish non-compliance, although what this scheme will look like will depend very much on as-yet-to-be-developed regulations. AIDA, however, has some important limitations when it comes to human rights.

Selective Approach to Human Rights

Although AIDA creates obligations around biased output, it does not address human rights beyond the right to be free from discrimination. Unlike the EU AI Act, for example, there are no prohibited practices related to the use of AI in certain forms of surveillance. A revised Article 5 of the EU AI Act will prohibit real-time biometric surveillance by law enforcement agencies in publicly accessible spaces, subject to carefully limited exceptions. The untargeted scraping of facial images to build or expand facial recognition databases (as occurred with Clearview AI) is also prohibited. Emotion recognition technologies are banned in some contexts, as are some forms of predictive policing. Some applications that are not outright prohibited are categorized as high risk and have limits imposed on the scope of their use. These “no-go zones” reflect concerns over a much broader range of human rights and civil liberties than what is reflected in Canada’s AIDA. It is small comfort to say that the Canadian Charter of Rights and Freedoms remains as a backstop against government excess in the use of AI tools for surveillance or policing; ex ante AI regulation is meant to head off problems before they become manifest. No-go zones reflect limits on what society is prepared to tolerate; AIDA sets no such limits. Constitutional litigation is also expensive, time-consuming and uncertain in outcome (just look at the 5-4 split in the recent R. v. Bykovets decision of the Supreme Court of Canada). Further, the military and intelligence services are expressly excluded from AIDA’s scope, as is the federal public service.

Privacy is an important human right, yet privacy rights fall outside the scope of AIDA. An initial response might be that such rights are already dealt with under privacy legislation for the public and private sectors at the federal, provincial and territorial levels. However, such privacy statutes deal principally with data protection (in other words, they govern the collection, use and disclosure of personal information). First, AIDA could have addressed surveillance more directly; after all, the EU has top-of-class data protection laws but still places limits on the use of AI systems for certain types of surveillance activities. Second, privacy laws in Canada (and there are many of them) are, apart from Quebec’s, largely in a state of neglect and disrepair. Privacy commissioners at the federal, provincial and territorial levels have been issuing guidance as to how they see their laws applying in the AI context, and findings and rulings in privacy complaints involving AI systems are starting to emerge. The commissioners are thoughtfully adapting existing laws to new circumstances, but there is no question that legislative reform is needed. In issuing its recent guidance on Facial Recognition and Mugshot Databases, the Office of the Information and Privacy Commissioner of Ontario specifically identified the need to issue the guidance in the face of legislative gaps and inaction that “if left unaddressed, risk serious harms to individuals’ right to privacy and other fundamental human rights.”

Along with AIDA, Bill C-27 contains the Consumer Privacy Protection Act (CPPA) which will reform Canada’s private sector data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA). However, the CPPA has only one AI-specific amendment – a somewhat tepid right to an explanation of automated decision-making. It does not address the data scraping issue at the heart of the Clearview AI investigation, for example (where the core findings of the Commissioner remain disputed by the investigated company) and which prompted the articulation of a no-go zone for data-scraping for certain purposes in the EU AI Act.

High Impact AI and Human Rights

AIDA will apply only to “high impact” AI systems. Among other things, such systems can adversely impact human rights. While the original version of AIDA in Bill C-27 left the definition of “high impact” entirely to regulations (generating considerable and deserved criticism), the Minister of ISED has since proposed amendments to C-27 that set out a list of categories of “high impact” AI systems. While this list at least provides some insight into what the government is thinking, it creates new problems as well. It identifies several areas in which AI systems could have significant impacts on individuals, including in healthcare and in some court or tribunal proceedings. Also included on the list are the use of AI at all stages of the employment context, and the use of AI in making decisions about who is eligible for services and at what price. Left off the list, however, is the use of AI systems (already occurring in practice) to determine who is selected as a tenant for rental accommodation. Such tools have extremely high impact. Yet, since residential tenancies are interests in land, and not services, they are simply not captured by the current “high impact” categories. This is surely an oversight, yet it is one that highlights the rather slapdash construction of AIDA and its proposed amendments. As a further example, a high-impact category addressing the use of biometrics to assess an individual’s behaviour or state of mind could be interpreted to capture affect recognition systems or the analysis of social media communications, but this is less clear than it should be. It also raises the question of whether the best approach, from a human rights perspective, is to regulate such systems as high impact, or whether limits need to be placed on their use and deployment.

Of course, a key problem is that this bill is housed within ISED. It is not a centrally developed bill that takes a broader approach across the federal government and its powers. Under AIDA, medical devices are excluded from the category of “high impact” uses of AI in the healthcare context because it is Health Canada that will regulate AI-enabled medical devices, and ISED must avoid treading on its toes. Perhaps ISED also seeks to avoid encroaching on the mandates of the Minister of Justice or the Minister of Public Safety. This may help explain some of the crabbed and clunky framing of AIDA compared to the EU AI Act. It does, however, raise the question of why Canada chose this route – adopting a purportedly comprehensive risk-management framework housed under the constrained authority of the Minister of ISED.

Such an approach is inherently flawed. As discussed above, AIDA is limited in the human rights it is prepared to address, and it raises concerns about how human rights will be both interpreted and framed. On the interpretation side of things, the incorporation of the Canadian Human Rights Act’s definition of discrimination in AIDA combined with ISED’s power to interpret and apply the proposed law will give ISED interpretive authority over the definition of discrimination without the accompanying expertise of the Canadian Human Rights Commission. Further, it is not clear that ISED is a place for expansive interpretations of human rights; human rights are not a core part of its mandate – although fostering innovation is.

All of this should leave Canadians with some legitimate concerns. AIDA may well be passed into law – and it may prove to be useful in the better governance of AI. But when it comes to human rights, it has very real limitations. AIDA cannot be allowed to end the conversation around human rights and AI at the federal level – nor at the provincial level either. Much work remains to be done.

