Tuesday, 11 October 2022 03:43

Regulating AI in Canada - The Federal Government and the AIDA

Written by Teresa Scassa

This post is the fifth in a series on Canada’s proposed Artificial Intelligence and Data Act in Bill C-27. It considers the federal government’s constitutional authority to enact this law, along with other roles it might have played in regulating AI in Canada. Earlier posts include ones on the purpose and application of the AIDA; regulated activities; the narrow scope of the concepts of harm and bias in the AIDA; and oversight and protection.

AI is a transformative technology that has the power to do amazing things, but which also has the potential to cause considerable harm. There is a global clamour to regulate AI in order to mitigate potential negative effects. At the same time, AI is seen as a driver of innovation and economies. Canada’s federal government wants to support and nurture Canada’s thriving AI sector while at the same time ensuring that there is public trust in AI. Facing similar issues, the EU introduced a draft AI Act, which is currently undergoing public debate and discussion (and which itself was the product of considerable consultation). The US government has just proposed its Blueprint for an AI Bill of Rights, and has been developing policy frameworks for AI, including the National Institute of Standards and Technology (NIST) Risk Management Framework. The EU and the US approaches are markedly different. Interestingly, in the US (which, like Canada, is a federal state) there has been considerable activity at the state level on AI regulation. Serious questions for Canada include what to do about AI, how best to do it – and who should do it.

In June 2022, the federal government introduced the proposed Artificial Intelligence and Data Act (AIDA) in Bill C-27. The AIDA takes the form of risk regulation; in other words, it is meant to anticipate and mitigate AI harms to the public. This is an ex ante approach; it is intended to address issues before they become problems. The AIDA does not provide personal remedies or recourses if anyone is harmed by AI – this is left for ex post regimes (ones that apply after harm has occurred). These will include existing recourses such as tort law (extracontractual civil liability in Quebec), and complaints to privacy, human rights or competition commissioners.

I have addressed some of the many problems I see with the AIDA in earlier posts. Here, I try to unpack issues around the federal government’s constitutional authority to enact this bill. It is not so much that they lack jurisdiction (although they might); rather, how they understand their jurisdiction can shape the nature and substance of the bill they are proposing. Further, the federal government has acted without any consultation on the AIDA prior to its surprising insertion in Bill C-27. Although it promises consultation on the regulations that will follow, this does not make up for the lack of discussion around how we should identify and address the risks posed by AI. This rushed bill is also shaped by constitutional constraints – it is AI regulation with structural limitations that have not been explored or made explicit.

Canada is a federal state, which means that the powers typically exercised by a nation state are divided between the federal government and regional governments. In theory, federalism allows for regional differences to thrive within an overarching framework. However, some digital technology issues (including data protection and AI) fit uneasily within Canada’s constitutional framework. In proposing the Consumer Privacy Protection Act part of Bill C-27, for example, the federal government appears to believe that it does not have the jurisdiction to address data protection as a matter of human rights – this belief has impacted the substance of the bill.

In Canada, the federal government has jurisdiction over criminal law, trade and commerce, banking, navigation and shipping, as well as other areas where it makes more sense to have one set of rules than to have ten. The cross-cutting nature of AI, the international competition to define the rules of the game, and the federal government’s desire to take a consistent national approach to its regulation are all factors that motivated the inclusion of the AIDA in Bill C-27. The Bill’s preamble states that “the design, development and deployment of artificial intelligence systems across provincial and international borders should be consistent with national and international standards to protect individuals from potential harm”. Since we do not yet have national or international standards, the law will also enable the creation (and imposition) of standards through regulation.

The preamble’s reference to the crossing of borders signals both that the federal government is keenly aware of its constitutional limitations in this area and that it intends to base its jurisdiction on the interprovincial and international dimensions of AI. The other elements of Bill C-27 rely on the federal general trade and commerce power – this follows the approach taken in the Personal Information Protection and Electronic Documents Act (PIPEDA), which is reformed by the first two parts of C-27. There are indications that trade and commerce is also relevant to the AIDA. Section 4 of the AIDA refers to the goal of regulating “international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements applicable across Canada, for the design, development and use of those systems.” Yet the general trade and commerce power is an uneasy fit for the AIDA. The Supreme Court of Canada has laid down rules for the exercise of this power, and one of these is that it should not be used to regulate a single industry; a legislative scheme should regulate trade as a whole.

The Minister of Industry, in discussing Canada’s AI strategy, has stated:

Artificial intelligence is a key part of our government’s plan to make our economy stronger than ever. The second phase of the Pan-Canadian Artificial Intelligence Strategy will help harness the full potential of AI to benefit Canadians and accelerate trustworthy technology development, while fostering diversity and cooperation across the AI domain. This collaborative effort will bring together the knowledge and expertise necessary to solidify Canada as a global leader in artificial intelligence and machine learning.

Clearly, the Minister is casting the role of AI as an overall economic transformer rather than a discrete industry. Nevertheless, although it might be argued that AI is a technology that cuts across all sectors of the economy, the AIDA applies predominantly to its design and development stages, which makes it look as if it targets a particular industry. Further, although PIPEDA (and the CPPA in the first Part of Bill C-27) are linked to trade and commerce through the transactional exchange of personal data – typically when it is collected from individuals in the course of commercial activity – the AIDA is different. Its regulatory requirements are meant to apply before any commercial activity takes place – at the design and development stage. This is worth pausing over because design and development stages may be non-commercial (in university-based research, for example) or may be purely intra-provincial. As a result, the need to comply with a law at the design and development stage, when that law is premised on interprovincial or international commercial activity, may only be discovered well after commercialization becomes a reality.

Arguably, AI might also be considered a matter of ‘national concern’ under the federal government’s residual peace, order and good government power. Matters of national concern that would fall under this power would be ones that did not exist at the time of confederation. The problem with addressing AI in this way is that it is simply not obvious that provinces could not enact legislation to govern AI – as many states have begun to do in the US.

Another possible constitutional basis is the federal criminal law power. This is used, for example, in the regulation of certain matters relating to health such as tobacco, food and drugs, medical devices and controlled substances. The Supreme Court of Canada has ruled that this power “is broad, and is circumscribed only by the requirements that the legislation must contain a prohibition accompanied by a penal sanction and must be directed at a legitimate public health evil”. The AIDA contains some prohibitions and provides for both administrative monetary penalties (AMPs) and offences. Because the AIDA focuses on “high impact” AI systems, there is an argument that it is meant to target and address those systems that have the potential to cause the most harm to health or safety. (Of course, the bill does not define “high impact” systems, so this is only conjecture.) Yet, although AMPs are available in cases of egregious non-compliance with the AIDA’s requirements, AMPs are not criminal sanctions; they are “a civil (rather than quasi-criminal) mechanism for enforcing compliance with regulatory requirements”, as noted in a report from the Ontario Attorney-General. That leaves a smattering of offences such as obstructing the work of the Minister or of auditors; knowingly designing, developing or using an AI system where the data were obtained as a result of an offence under another Act; being reckless as to whether the use of an AI system made available by the accused is likely to cause harm to an individual; and using AI intentionally to defraud the public and cause substantial economic loss to an individual. Certainly, such offences are criminal in nature and could be supported by the federal criminal law power. Yet they are easily severable from the rest of the statute. For the most part, the AIDA focuses on “establishing common requirements applicable across Canada, for the design, development and use of [AI] systems” (AIDA, s. 4).

The provinces have not been falling over themselves to regulate AI, although neither have they been entirely inactive. Ontario, for example, has been developing a framework for the public sector use of AI, and Quebec has enacted some provisions relating to automated decision-making systems in its new data protection law. Nevertheless, these steps are clearly not enough to satisfy a federal government anxious to show leadership in this area. It is thus unsurprising that Canada’s federal government has introduced legislation to regulate AI. What is surprising is that they have done so without consultation – either regarding the form of the intervention or the substance. We have yet to have an informed national conversation about AI. Further, legislation of this kind was only one option. The government could have consulted and convened experts to develop something along the lines of the US’s NIST Framework that could be adopted as a common standard/approach across jurisdictions in Canada. A Canadian framework could have been supported by the considerable work on standards already ongoing. Such an approach could have involved the creation of an agency under the authority of a properly-empowered Data Commissioner to foster co-operation in the development of national standards. This could have supported the provinces in the harmonized regulation of AI. Instead, the government has chosen to regulate AI itself through a clumsy bill that staggers uneasily between constitutional heads of power, and that leaves its normative core to be crafted in a raft of regulations that may take years to develop. It also leaves it open to the first company to be hit with an AMP to challenge the constitutionality of the framework as a whole.
