Teresa Scassa - Blog

Monday, 15 August 2022 08:27

Regulated Activities and Data Under Bill C-27's AI and Data Act

Written by  Teresa Scassa

This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance.

Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1):

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI, and to the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies each actor and the corresponding obligations.

 

Actor: A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity (see definition of “regulated activity” in s. 5(1))

Obligations:

- s. 6 (data anonymization, use and management)

- s. 10 (record keeping regarding measures taken under s. 6)

Actor: A person who is responsible for an artificial intelligence system (see definition of “person responsible” in s. 5(2))

Obligations:

- s. 7 (assess whether a system is high-impact)

- s. 10 (record keeping regarding reasons supporting their assessment of whether the system is high-impact under s. 7)

Actor: A person who is responsible for a high-impact system (see definition of “person responsible” in s. 5(2); definition of “high-impact system”, s. 5(1))

Obligations:

- s. 8 (measures to identify, assess and mitigate risk of harm or biased output)

- s. 9 (measures to monitor compliance with the mitigation measures established under s. 8 and the effectiveness of those measures)

- s. 10 (record keeping regarding measures taken under ss. 8 and 9)

- s. 12 (obligation to notify the Minister as soon as feasible if the use of the system results or is likely to result in material harm)

Actor: A person who makes available for use a high-impact system

Obligation:

- s. 11(1) (publish a plain-language description of the system and other required information)

Actor: A person who manages the operation of a high-impact system

Obligation:

- s. 11(2) (publish a plain-language description of how the system is used and other required information)

 

For most of these provisions, the details of what is actually required by the identified actor will depend upon regulations that have yet to be drafted.

A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms:

5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.

Thus, the obligations in ss. 7, 8, 9, 10 and 11 apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that, with the exception of sections 6 and 7, the obligations in the AIDA apply only to “high-impact” systems. The definition of a high-impact system has been left to regulations and is as yet unknown.

Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purposes of developing an AI system. All of this must also be in the course of international or interprovincial trade and commerce.

Note that the first of these two purposes involves data “related to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization.

Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA applies only to a “regulated activity”, which is one carried out in the course of international or interprovincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain. Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the differing treatment of anonymized data between the CPPA and the AIDA creates a significant governance gap in terms of scope, substance and process.

On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

The default rule for ‘de-identified’ personal information is that it is still personal information. However, the CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of the AIDA nor particularly relevant to it. Since the legislature is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s. 38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data (from which individuals can be re-identified) are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.

 

