Tags
access to information
AI
AIDA
AI governance
AI regulation
Ambush Marketing
artificial intelligence
big data
bill c11
Bill c27
copyright
data governance
data protection
data strategy
freedom of expression
Geospatial
geospatial data
intellectual property
Internet
internet law
IP
open courts
open data
open government
personal information
pipeda
Privacy
smart cities
trademarks
transparency
Friday, 07 June 2024 12:58
Submission to Consultation on Ontario's Bill 194: Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

On May 13, 2024, the Ontario government introduced Bill 194. The bill addresses a catalogue of digital issues for the public sector, including cybersecurity, artificial intelligence governance, the protection of the digital information of children and youth, and data breach notification requirements. Consultation on the Bill closes on June 11, 2024. Below is my submission to the consultation. The legislature has now risen for the summer, so debate on the bill will not resume until the fall.
Submission to the Ministry of Public and Business Service Delivery on the Consultation on proposed legislation: Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024 Teresa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa June 4, 2024 I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I research and write about legal issues relating to artificial intelligence and privacy. My comments on Bill 194 are made on my own behalf. The Enhancing Digital Security and Trust Act, 2024 has two schedules. Schedule 1 has three parts. The first relates to cybersecurity, the second to the use of AI in the broader public service, and the third to the use of digital technology affecting individuals under 18 years of age in the context of Children’s Aid Societies and School Boards. Schedule 2 contains a series of amendments to the Freedom of Information and Protection of Privacy Act (FIPPA). My comments are addressed to each of the Schedules. Please note that all examples provided as illustrations are my own. Summary Overall, I consider this to be a timely Bill that addresses important digital technology issues facing Ontario’s public sector. My main concerns relate to the sections on artificial intelligence (AI) systems and on digital technologies affecting children and youth. I recommend the addition of key principles to the AI portion of the Bill in both a reworked preamble and a purpose section. In the portion dealing with digital technologies and children and youth, I note the overlap created with existing privacy laws, and recommend reworking certain provisions so that they enhance the powers and oversight of the Privacy Commissioner rather than creating a parallel and potentially conflicting regime. 
I also recommend shifting the authority to prohibit or limit the use of certain technologies in schools to the Minister of Education and to consider the role of public engagement in such decision-making. A summary of recommendations is found at the end of this document.

Schedule 1 - Cybersecurity

The first section of the Enhancing Digital Security and Trust Act (EDSTA) creates a framework for cybersecurity obligations that is largely left to be filled by regulations. Those regulations may also provide for the adoption of standards. The Minister will be empowered to issue mandatory Directives to one or more public sector entities. There is little detail provided as to what any specific obligations might be, although section 2(1)(a) refers to a requirement to develop and implement “programs for ensuring cybersecurity” and s. 2(1)(c) anticipates requirements on public sector entities to submit reports to the Minister regarding cybersecurity incidents. Beyond this, details are left to regulations. These details may relate to roles and responsibilities, reporting requirements, education and awareness measures, response and recovery measures, and oversight. The broad definition of a “public sector entity” to which these obligations apply includes hospitals, school boards, government ministries, and a wide range of agencies, boards and commissions at the provincial and municipal level. This scope is important, given the significance of cybersecurity concerns. Although there is scant detail in Bill 194 regarding actual cybersecurity requirements, this manner of proceeding seems reasonable given the very dynamic cybersecurity landscape. A combination of regulations and standards will likely provide greater flexibility in a changeable context. Cybersecurity is clearly in the public interest and requires setting rules and requirements with appropriate training and oversight. This portion of Bill 194 would create a framework for doing this.
This seems like a reasonable way to address public sector cybersecurity, although, of course, its effectiveness will depend upon the timeliness and the content of any regulations.

Schedule 1 – Use of Artificial Intelligence Systems

Schedule 1 of Bill 194 also contains a series of provisions that address the use of AI systems in the public sector. These will apply to AI systems that meet a definition that maps onto the Organization for Economic Co-operation and Development (OECD) definition. Since this definition is one to which many others are being harmonized (including a proposed amendment to the federal AI and Data Act, and the EU AI Act), this seems appropriate. The Bill goes on to indicate that the use of an AI system in the public sector includes the use of a system that is publicly available, that is developed or procured by the public sector, or that is developed by a third party on behalf of the public sector. This is an important clarification. It means, for example, that the obligations under the Act could apply to the use of general-purpose AI that is embedded within workplace software, as well as to purpose-built systems. Although the AI provisions in Bill 194 will apply to “public sector entities” – defined broadly in the Bill to include hospitals and school boards as well as both provincial and municipal boards, agencies and commissions – they will only apply to a public sector entity that is “prescribed for the purposes of this section if they use or intend to use an artificial intelligence system in prescribed circumstances” (s. 5(1)). The regulations also might apply to some systems (e.g., general-purpose AI) only when they are being used for a particular purpose (e.g., summarizing or preparing materials used to support decision-making). Thus, while potentially quite broad in scope, the actual impact will depend on which public sector entities – and which circumstances – are prescribed in the regulations.
Section 5(2) of Bill 194 will require a public sector entity to which the legislation applies to provide information to the public about the use of an AI system, but the details of that information are left to regulations. Similarly, there is a requirement in s. 5(3) to develop and implement an accountability framework, but the necessary elements of the framework are left to regulations. Under s. 5(4), a public sector entity to which the Act applies will have to take steps to manage risks in accordance with regulations. It may be that the regulations will be tailored to different types of systems posing different levels of risk, so some of this detail would be overwhelming and inflexible if included in the law itself. However, it is important to underline just how much of the normative weight of this law depends on regulations. Bill 194 will also make it possible for the government, through regulations, to prohibit certain uses of AI systems (s. 5(6) and s. 7(f) and (g)). Interestingly, what is contemplated is not a ban on particular AI systems (e.g., facial recognition technologies (FRT)); rather, it is a potential ban on particular uses of those technologies (e.g., FRT in public spaces). Since the same technology can have uses that are beneficial in some contexts but rights-infringing in others, this flexibility is important. Further, the ability to ban certain uses of FRT on a province-wide basis, including at the municipal level, allows for consistency across the province when it comes to issues of fundamental rights. Section 6 of the bill provides for human oversight of AI systems. Such a requirement would exist only when a public sector entity uses an AI system in circumstances set out in the regulations. The obligation will require oversight in accordance with the regulations and may include additional transparency obligations. Essentially, the regulations will be used to customize obligations relating to specific systems or uses of AI for particular purposes.
Like the cybersecurity measures, the AI provisions in Bill 194 leave almost all details to regulations. Although I have indicated that this is an appropriate way to address cybersecurity concerns, it may be less appropriate for AI systems. Cybersecurity is a highly technical area where measures must adapt to a rapidly evolving security landscape. In the cybersecurity context, the public interest is in the protection of personal information and government digital and data infrastructures. Risks are either internal (having to do with properly training and managing personnel) or adversarial (where the need is for good security measures to be in place). The goal is to put in place measures that will ensure that the government’s digital systems are robust and secure. This can be done via regulations and standards. By contrast, the risks with AI systems will flow from decisions to deploy them, their choice and design, the data used to train the systems, and their ongoing assessment and monitoring. Flaws at any of these stages can lead to errors or poor functioning that can adversely impact a broad range of individuals and organizations who may interact with government via these systems. For example, an AI chatbot that provides information to the public about benefits or services, or an automated decision-making system for applications by individuals or businesses for benefits or services, interacts with and impacts the public in a very direct way. Some flaws may lead to discriminatory outcomes that violate human rights legislation or the Charter. Others may adversely impact privacy. Errors in output can lead to improperly denied (or allocated) benefits or services, or to confusion and frustration. There is therefore a much more direct impact on the public, with effects on both groups and individuals. There are also important issues of transparency and trust. 
This web of considerations makes it less appropriate to leave the governance of AI systems entirely to regulations. The legislation should, at the very least, set out the principles that will guide and shape those regulations. The Ministry of Public and Business Service Delivery has already put considerable work into developing a Trustworthy AI Framework and a set of (beta) principles. This work could be used to inform guiding principles in the statute. Currently, the guiding principles for the whole of Bill 194 are found in the preamble. Only one of these directly relates to the AI portion of the bill, and it states that “artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy”. Interestingly, this statement only partly aligns with the province’s own beta Principles for Ethical Use of AI. Perhaps most importantly, the second of these principles, “good and fair”, refers to the need to develop systems that respect the “rule of law, human rights, civil liberties, and democratic values”. Currently, Bill 194 is entirely silent with respect to issues of bias and discrimination (which are widely recognized as profoundly important concerns with AI systems, and which have been identified by Ontario’s privacy and human rights commissioners as a concern). At the very least, the preamble to Bill 194 should address these specific concerns. Privacy is clearly not the only human rights consideration at play when it comes to AI systems. The preamble to the federal government’s Bill C-27, which contains the proposed Artificial Intelligence and Data Act, states: “that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. 
The preamble to Bill 194 should similarly address the importance of human rights values in the development and deployment of AI systems for the broader public sector. In addition, the bill would benefit from a new provision setting out the purpose of the part dealing with public sector AI. Such a clause would shape the interpretation of the scope of delegated regulation-making power and would provide additional support for a principled approach. This is particularly important where legislation provides only the barest outline of a governance framework. In this regard, this bill is similar to the original version of the federal AI and Data Act, which was roundly criticized for leaving the bulk of its normative content to the regulation-making process. The provincial government’s justification is likely to be similar to that of the federal government – it is necessary to remain “agile” and not to bake too much detail into the law regarding such a rapidly evolving technology. Nevertheless, it is still possible to establish principle-based parameters for regulation-making. To do so, this bill should more clearly articulate the principles that guide the adoption and use of AI in the broader public service. A purpose provision could read: The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians. Unlike AIDA, the proposed federal statute that will apply to the private sector, Bill 194 is meant to apply to the operations of the broader public service. The flexibility in the framework is a recognition of both the diversity of AI systems and the diversity of services and activities carried out in this context. It should be noted, however, that this bill does not contemplate any bespoke oversight for public sector AI.
There is no provision for a reporting or complaints mechanism for members of the public who have concerns with an AI system. Presumably they will have to complain to the department or agency that operates the AI system. Even then, there is no obvious requirement for the public sector entity to record complaints or to report them for oversight purposes. All of this may be provided for in s. 5(3)’s requirement for an accountability framework, but the details have been left to regulation. It is therefore entirely unclear from the text of Bill 194 what recourse – if any – the public will have when they have problematic encounters with AI systems in the broader public service. Section 5(3) could be amended to read: 5(3) A public sector entity to which this section applies, shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include: a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system; b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto. Again, although a flexible framework for public sector AI governance may be an important goal, key elements of that framework should be articulated in the legislation.

Schedule 1 – Digital Technology Affecting Individuals Under Age 18

The third part of Schedule 1 addresses digital technology affecting individuals under age 18. This part of Bill 194 applies to children’s aid societies and school boards. Section 9 enables the Lieutenant Governor in Council to make regulations regarding “prescribed digital information relating to individuals under age 18 that is collected, used, retained or disclosed in a prescribed manner”. Significantly, “digital information” is not defined in the Bill.
The references to digital information are puzzling, as it seems to be nothing more than a subset of personal information – which is already governed under both the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) and FIPPA. Personal information is defined in both these statutes as “recorded information about an identifiable individual”. It is hard to see how “digital information relating to individuals under age 18” is not also personal information (which has received an expansive interpretation). If it is meant to be broader, it is not clear how. Further, the activities to which this part of Bill 194 will apply are the “collection, use, retention or disclosure” of such information. These are activities already governed by MFIPPA and FIPPA – which apply to school boards and children’s aid societies respectively. What Bill 194 seems to add is a requirement (in s. 9(b)) to submit reports to the Minister regarding the collection, use, retention and disclosure of such information, as well as the enablement of regulations in s. 9(c) to prohibit collection, use, retention or disclosure of prescribed digital information in prescribed circumstances, for prescribed purposes, or subject to certain conditions. Nonetheless, the overlap with FIPPA and MFIPPA is potentially substantial – so much so, that s. 14 provides that in case of conflict between this Act and any other, the other Act would prevail. What this seems to mean is that FIPPA and MFIPPA will trump the provisions of Bill 194 in case of conflict. Where there is no conflict, the bill seems to create an unnecessary parallel system for governing the personal information of children. The need for more to be done to protect the personal information of children and youth in the public school system is clear. 
In fact, this is a strategic priority of the current Information and Privacy Commissioner (IPC), whose office has recently released a Digital Charter for public schools setting out voluntary commitments that would improve children’s privacy. The IPC is already engaged in this area. Not only does the IPC have the necessary expertise in the area of privacy law, the IPC is also able to provide guidance, accountability and independent oversight. In any event, since the IPC will still have oversight over the privacy practices of children’s aid societies and school boards notwithstanding Bill 194, the new system will mean that these entities will have to comply with regulations set by the Minister on the one hand, and the provisions of FIPPA and MFIPPA on the other. The fact that conflicts between the two regimes will be resolved in favour of privacy legislation means that it is even conceivable that the regulations could set requirements or standards that are lower than what is required under FIPPA or MFIPPA – creating an unnecessarily confusing and misleading system. Another odd feature of the scheme is that Bill 194 will require “reports to be submitted to the Minister or a specified individual in respect of the collection, use, retention and disclosure” of digital information relating to children or youth (s. 9(b)). It is possible that the regulations will specify that it is the Privacy Commissioner to whom the reports should be submitted. If it is, then it is once again difficult to see why a parallel regime is being created. If it is not, then the Commissioner will be continuing her oversight of privacy in schools and children’s aid societies without access to all the relevant data that might be available. It seems as if Bill 194 contemplates two separate sets of measures. One addresses the proper governance of the digital personal information of children and youth in schools and children’s aid societies. 
This is a matter for the Privacy Commissioner, who should be given any additional powers she requires to fulfil the government’s objectives. Sections 9 and 10 of Bill 194 could be incorporated into FIPPA and MFIPPA, with modifications to require reporting to the Privacy Commissioner. This would automatically bring oversight and review under the authority of the Privacy Commissioner. The second objective of the bill seems to be to provide the government with the opportunity to issue directives regarding the use of certain technologies in the classroom or by school boards. This is not unreasonable, but it is something that should be under the authority of the Minister of Education (not the Minister of Public and Business Service Delivery). It is also something that might benefit from a more open and consultative process. I would recommend that the framework be reworked accordingly.

Schedule 2: FIPPA Amendments

Schedule 2 consists of amendments to the Freedom of Information and Protection of Privacy Act. These are important amendments that will introduce data breach notification and reporting requirements for public sector entities in Ontario that are governed by FIPPA (although, interestingly, not those covered by MFIPPA). For example, a new s. 34(2)(c.1) will require the head of an institution to include in their annual report to the Commissioner “the number of thefts, losses or unauthorized uses or disclosures of personal information recorded under subsection 40.1”. The new subsection 40.1(8) will require the head of an institution to keep a record of any such data breach. Where a data breach reaches the threshold of creating a “real risk that a significant harm to an individual would result” (or where any other circumstances prescribed in regulations exist), a separate report shall be made to the Commissioner under s. 40.1(1). This report must be made “as soon as feasible” after it has been determined that the breach has taken place (s. 40.1(2)).
New regulations will specify the form and contents of the report. There is a separate requirement for the head of the institution to notify individuals affected by any breach that reaches the threshold of a real risk of significant harm (s. 40.1(3)). The notification to the individual will have to contain, along with any prescribed information, a statement that the individual is entitled to file a complaint with the Commissioner with respect to the breach, and the individual will have one year to do so (ss. 40.1(4) and (5)). The amendments also identify the factors relevant in determining if there is a real risk of significant harm (s. 40.1(7)). The proposed amendments also provide for a review by the Commissioner of the information practices of an institution where a complaint has been filed under s. 40.1(4), or where the Commissioner “has other reason to believe that the requirements of this Part are not being complied with” (s. 49.0.1). The Commissioner can decide not to review an institution’s practices in circumstances set out in s. 49.0.1(3). Where the Commissioner determines that there has been a contravention of the statutory obligations, she has order-making powers (s. 49.0.1(7)). Overall, this is a solid and comprehensive scheme for addressing data breaches in the public sector (although it does not extend to those institutions covered by MFIPPA). In addition to the data breach reporting requirements, the proposed amendments will provide for whistleblower protections. They will also specifically enable the Privacy Commissioner to consult with other privacy commissioners (new s. 59(2)), and to coordinate activities, enter into agreements, and provide for the handling “of any complaint in which they are mutually interested” (s. 59(3)).
These are important amendments given that data breaches may cross provincial lines, and Canada’s privacy commissioners have developed strong collaborative relationships to facilitate cooperation and coordination on joint investigations. These provisions make clear that such cooperation is legally sanctioned, which may avoid costly and time-consuming court challenges to the commissioners’ authority to engage in this way. The amendments also broaden s. 61(1)(a) of FIPPA, which currently makes it an offence to wilfully disclose personal information in contravention of the Act. If passed, it will be an offence to wilfully collect, use or disclose information in the same circumstances. Collectively, the proposed FIPPA amendments are timely and important.

Summary of Recommendations:

On artificial intelligence in the broader public sector: 1. Amend the Preamble to Bill 194 to address the importance of human rights values in the development and deployment of AI systems for the broader public sector.
2. Add a purpose section to the AI portion of Bill 194 that reads: The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians. 3. Amend s. 5(3) to read: 5(3) A public sector entity to which this section applies, shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include: a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system; b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.
On Digital Technology Affecting Individuals Under Age 18: 1. Incorporate the contents of ss. 9 and 10 into FIPPA and MFIPPA, with the necessary modification to require reporting to the Privacy Commissioner. 2. Give the authority to issue directives regarding the use of certain technologies in the classroom or by school boards to the Minister of Education and ensure that an open and consultative public engagement process is included.
Published in
Privacy
Monday, 11 December 2023 06:58
Data Governance for AI under Canada's Proposed AI and Data Act (AIDA Amendments Part IV)

The federal government’s proposed Artificial Intelligence and Data Act (AIDA) (Part III of Bill C-27) contained some data governance requirements for anonymized data used in AI in its original version. These were meant to dovetail with changes to PIPEDA reflected in the Consumer Privacy Protection Act (CPPA) (Part I of Bill C-27). The CPPA provides in s. 6(5) that “this Act does not apply in respect of personal information that has been anonymized.” Although no such provision is found in PIPEDA, this is, to all practical effects, the state of the law under PIPEDA. PIPEDA applies to “personal information”, which is defined as “information about an identifiable individual”. If someone is not identifiable, then the information is not personal information, and the law does not apply. This was the conclusion reached, for example, in the 2020 Cadillac Fairview joint finding of the federal Privacy Commissioner and his counterparts from BC and Alberta. PIPEDA does apply to pseudonymized information because such information ultimately permits reidentification. The standard for identifiability under PIPEDA had been set by the courts as a “‘serious possibility’ that an individual could be identified through the use of that information, alone or in combination with other available information” (Cadillac Fairview at para 143). It is not an absolute standard (although the proposed definition of anonymized data in C-27 currently seems closer to absolute). In any event, the original version of AIDA was meant to offer comfort to those concerned with the flat-out exclusion of anonymized data from the scope of the CPPA. Section 6 of AIDA provided that: 6.
A person who carries out any regulated activity and who processes or makes available anonymized data in the course of that activity must, in accordance with the regulations, establish measures with respect to (a) the manner in which data is anonymized; and (b) the use or management of anonymized data. Problematically, however, AIDA only provided for data governance with respect to this particular subset of data. It contained no governance requirements for personal, pseudonymized, or non-personal data. Artificial intelligence systems will be only as good as the data on which they are trained. Data governance is a fundamental element of proper AI regulation – and it must address more than anonymized personal data. This is an area where the amendments to AIDA proposed by the Minister of Industry demonstrate clear improvements over the original version. To begin with, the old s. 6 is removed from AIDA. Instead of specific governance obligations for anonymized data, we see some new obligations introduced regarding data more generally. For example, as part of the set of obligations relating to general-purpose AI systems, there is a requirement to ensure that “measures respecting the data used in developing the system have been established in accordance with the regulations” (s. 7(1)(a)). There is also an obligation to maintain records “relating to the data and processes used in developing the general-purpose system and in assessing the system’s capabilities and limitations” (s. 7(2)(b)). There are similar obligations in the case of machine learning models that are intended to be incorporated into high-impact systems (ss. 9(1)(a) and 9(2)(a)). Of course, whether this is an actual improvement will depend on the content of the regulations. But at least there is a clear signal that data governance obligations are expanded under the proposed amendments to AIDA. Broader data governance requirements in AIDA are a good thing.
They will apply to data generally, including personal and anonymized data. Personal data used in AI will also continue to be governed under privacy legislation, and privacy commissioners will still have a say about whether data have been properly anonymized. In the case of PIPEDA (or the CPPA, if and when it is eventually enacted), the set of principles for the development and use of generative AI issued by federal, provincial, and territorial privacy commissioners on December 8, 2023 makes it clear that the commissioners understand their enabling legislation to provide them with the authority to govern a considerable number of issues relating to the use of personal data in AI, whether in the public or private sector. This set of principles sends a strong signal to federal and provincial governments alike that privacy laws and privacy regulators have a clear role to play in relation to emerging and evolving AI technologies and that the commissioners are fully engaged. It is also an encouraging example of cooperation among federal, provincial and territorial regulators to provide a coherent common position on key issues in relation to AI governance.
Published in
Privacy
Friday, 08 December 2023 09:00
Oversight and Enforcement in the AIDA Amendments (Part III of a series)

This is Part III of a series of posts that look at the proposed amendments to Canada’s Artificial Intelligence and Data Act (which itself is still a Bill, currently before the INDU Committee for study). Part I provided a bit of context and a consideration of some of the new definitions in the Bill. Part II looked at the categories of ‘high-impact’ AI that the Bill now proposes to govern. This post looks at the changed role of the AI and Data Commissioner.
The original version of the Artificial Intelligence and Data Act (Part III of Bill C-27) received considerable criticism for its oversight mechanisms. Legal obligations for the ethical and transparent governance of AI, after all, depend upon appropriate oversight and enforcement for their effectiveness. Although AIDA proposed the creation of an AI and Data Commissioner (Commissioner), this was never meant to be an independent regulator. Ultimately, AIDA placed most of the oversight obligations in the hands of the Minister of Industry – the same Minister responsible for supporting the growth of Canada’s AI sector. Critics considered this to be a conflict of interest. A series of proposed amendments to AIDA are meant to address these concerns by reworking the role of the Commissioner. Section 33(1) of AIDA makes it clear that the AI and Data Commissioner will be a “senior official of the department over which the Minister presides”, and their appointment involves being designated by the Minister. This has not changed, although the amendments would delete from this provision language stating that the Commissioner’s role is “to assist the Minister in the administration and enforcement” of AIDA. The proposed amendments elevate the Commissioner somewhat, giving them a series of powers and duties, to which the Minister can add through delegation (s. 33(3)). So, for example, it will be the newly empowered Commissioner (Commissioner 2.0) who receives reports from those managing a general-purpose or high-impact system where there are reasonable grounds to suspect that the use of the system has caused serious harm (s. 8.2(1)(e), s. 11(1)(g)). Commissioner 2.0 can also order someone managing or making available a general-purpose system to provide them with the accountability framework they are required to create under s. 12 (s. 13(1)) and can provide guidance or recommend corrections to that framework (s. 13(2)).
Commissioner 2.0 can compel those making available or managing an AI system to provide the Commissioner with an assessment of whether the system is high-impact and, if so, which subclass of high-impact systems set out in the schedule it falls within. Commissioner 2.0 can agree or disagree with the assessment, although if they disagree, their authority seems limited to informing the entity in writing with their reasons for disagreement. More significant are Commissioner 2.0’s audit powers. Under the original version of AIDA, these were to be exercised by the Minister – the powers are now those of the Commissioner (s. 15(1)). Further, Commissioner 2.0 may order (previously this was framed as “require”) that the person either conduct an audit themselves or engage the services of an independent auditor. The proposed amendments also empower the Commissioner to conduct an audit to determine if there is a possible contravention of AIDA. This strengthens the audit powers by ensuring that there is at least one option that is not even partly under the control of the party being audited. The proposed amendments give Commissioner 2.0 additional powers necessary to conduct an audit and to carry out testing of an AI system (s. 15(2.1)). Where Commissioner 2.0 conducts an audit, they must provide the audited party with a copy of the report (s. 15(3.1)), and where the audit is conducted by the person responsible or someone retained by them, they must provide a copy to the Commissioner (s. 15(4)). The Minister still retains some role with respect to audits. He or she may request that the Commissioner conduct an audit. In an attempt to preserve some independence, the Commissioner, when receiving such a request, may either carry out the audit or decline to do so on the basis that there are no reasonable grounds for an audit, so long as they provide the Minister with their reasons (s. 15.1(1)(b)).
The Minister may also order a person to take actions to bring themselves into compliance with the law (s. 16) or to cease making available or terminate the operation of a system if the Minister considers compliance to be impossible (s. 16(b)) or has reasonable grounds to believe that the use of the system “gives rise to a risk of imminent and serious harm” (s. 17(1)). As noted above, Commissioner 2.0 (a mere employee in the Minister’s department) will have order-making powers under the amendments. This is something the Privacy Commissioner of Canada – an independent agent of Parliament, appointed by the Governor in Council – is hoping to get in Bill C-27; if the bill passes, it will be the first time the Privacy Commissioner has had such powers since the enactment of PIPEDA in 2000. Orders of Commissioner 2.0 or the Minister can become enforceable as orders of the Federal Court under s. 20. Commissioner 2.0 is also empowered to share information with a list of federal or provincial government regulators where they have “reasonable grounds to believe that the information may be relevant to the administration or enforcement by the recipient of another Act of Parliament or of a provincial legislature” (s. 26(1)). Reciprocally, under a new provision, federal regulators may also share information with the Commissioner (s. 26.1). Additionally, Commissioner 2.0 may “enter into arrangements” with different federal regulators and/or the Ministers of Health and Transport in order to assist those actors with the “exercise of their powers or the performance of their functions and duties” in relation to AI (s. 33.1). These new provisions support a more horizontal, multi-regulator approach to governing AI, which is an improvement in the Bill, although this might eventually need to be supplemented by corresponding legislative amendments – and additional funding – to better enable the other commissioners to address AI-related issues that fit within their areas of competence.
The amendments also impose upon Commissioner 2.0 a new duty to report on the administration and enforcement of AIDA – such a report is to be “published on a publicly available website” (s. 35.1). The annual reporting requirement is important as it will increase transparency regarding the oversight and enforcement of AIDA. For his or her part, the Minister is empowered to publish information, where it is in the public interest, regarding any contravention of AIDA or where the use of a system gives rise to a serious risk of imminent harm (ss. 27 and 28). Interestingly, AIDA, which provides for the potential imposition of administrative monetary penalties for contraventions of the Act, does not indicate who is responsible for setting and imposing these penalties. Section 29(1)(g) makes it clear that “the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the [AMP] scheme” is left to be articulated in regulations. The AIDA also makes it an offence under s. 30 for anyone to obstruct or provide false or misleading information to “the Minister, anyone acting on behalf of the Minister or an independent auditor in the exercise of their powers or performance of their duties or functions under this Part.” This remains unchanged from the original version of AIDA. Presumably, since Commissioner 2.0 would exercise a great many of the oversight functions, this is meant to apply to the obstruction or misleading of the Commissioner – but it will only do so if the Commissioner is characterized as someone “acting on behalf of the Minister”. This is not language of independence, but then there are other features of AIDA that also counter any view that even Commissioner 2.0 is truly independent (and I mean others besides the fact that they are an employee under the authority of the Minister and handpicked by the Minister).
Most notable of these is that should the Commissioner become incapacitated or absent, or should they simply never be designated by the Minister, it is the Minister who will exercise their powers and duties (s. 33(4)). In sum, then, the proposed amendments to AIDA attempt to give some separation between the Minister and Commissioner 2.0 in terms of oversight and enforcement. At the end of the day, however, Commissioner 2.0 is still the Minister’s hand-picked subordinate. Commissioner 2.0 does not serve for a specified term and has no security of tenure. In their absence, the Minister exercises their powers. It falls far short of independence.
Published in
Privacy
Wednesday, 06 December 2023 07:16
High-Impact AI Under AIDA's Proposed Amendments (Part II of a Series)
My previous post looked at some of the new definitions in the proposed amendments to the Artificial Intelligence and Data Act (AIDA) which is Part III of Bill C-27. These include a definition of “high impact” AI, and a schedule of classes of high-impact AI (the Schedule is reproduced at the end of this post). The addition of the schedule changes AIDA considerably, and that is the focus of this post. The first two classes in the Schedule capture contexts that can clearly affect individuals. Class 1 addresses AI used in most aspects of employment, and Class 2 relates to the provision of services. On the provision of services (which could include things like banking and insurance), the wording signals that it will apply to decision-making about the provision of services, their cost, or the prioritization of recipients. To be clear, AIDA does not prohibit systems with these functions. They are simply characterized as “high impact” so that they will be subject to governance obligations. A system to determine creditworthiness can still reject individuals; and companies can still prioritize preferred customers – as long as the systems are sufficiently transparent, free from bias and do not cause harm. There is, however, one area which seems to fall through the cracks of Classes 1 & 2: rental accommodation. A lease is an interest in land – it is not a service. Human rights legislation in Canada typically refers to accommodation separately from services for this reason. AI applications are already being used to screen and select tenants for rental accommodation. In the midst of a housing crisis, this is surely an area that is high-impact and where the risks of harm from flawed AI to individuals and families searching for a place to live are significant. This gap needs to be addressed – perhaps simply by adding “or accommodation” after each use of the term “service” in Class 2.
Class 3 rightly identifies biometric systems as high risk. It also includes systems that use biometrics in “the assessment of an individual’s behaviour or state of mind.” Key to the scope of this section will be the definition of “biometric”. Some consider biometric data to be exclusively physiological data (fingerprints, iris scans, measurements of facial features, etc.). Yet others include behavioural data in this class if it is used for the second identified purpose – the assessment of behaviour or state of mind. Behavioural data, though, is potentially a very broad category. It can include data about a person’s gait, or their speech or keystroke patterns. Cast even more broadly, it could include things such as “geo-location and IP addresses”, “purchasing habits”, “patterns of device use” or even “browser history and cookies”. If that is the intention behind Class 3, then conventional biometric AI should be Part One of this class; Part Two should be the use of an AI system to assess an individual’s behaviour or state of mind (without referring specifically to biometrics in order to avoid confusion). This would also, importantly, capture the highly controversial area of AI for affect recognition. It would be unfortunate if the framing of the class as ‘biometrics’ led to an unduly narrow interpretation of the kind of systems or data involved. The explanatory note in the Minister’s cover letter for this provision seems to suggest (although it is not clear) that it is purely physiological biometric data that is intended for inclusion and not a broader category. If this is so, then Class 3 seems unduly narrow. Class 4 is likely to be controversial. It addresses content moderation and the prioritization and presentation of content online and identifies these as high-impact algorithmic activities. Such systems are in widespread use in the online context.
The explanatory note from the Minister observes that such systems “have important potential impacts on Canadians’ ability to express themselves, as well as pervasive effects at societal scale” (at p. 4). This is certainly true although the impact is less direct and obvious than the impact of a hiring algorithm, for example. Further, although an algorithm that presents a viewer of online streaming services with suggestions for content could have the effect of channeling a viewer’s attention in certain directions, it is hard to see this as “high impact” in many contexts, especially since there are multiple sources of suggestions for online viewing (including word of mouth). That does not mean that feedback loops and filter bubbles (especially in social media) do not contribute to significant social harms – but it does make this high impact class feel large and unwieldy. The Minister’s cover letter indicates that each of the high-impact classes presents “distinct risk profiles and consequently will require distinct risk management strategies.” (at p. 2). Further, he notes that the obligations that will be imposed “are intended to scale in proportion to the risks they present. A low risk use within a class would require correspondingly minimal mitigation effort.” (at p. 2). Much will clearly depend on regulations. Class 5 relates to the use of AI in health care or emergency services, although it explicitly excludes medical devices because these are already addressed by Health Canada (which recently consulted on the regulation of AI-enabled medical devices). This category also demonstrates some of the complexity of regulating AI in Canada’s federal system. Many hospital-based AI technologies are being developed by researchers affiliated with the hospitals and who are not engaged in the interprovincial or international trade and commerce which is necessary for AIDA to apply. 
AIDA will only apply to those systems developed externally and in the context of international or interprovincial trade and commerce. While this will still capture many applications, it will not capture all – creating different levels of governance within the same health care context. It is also not clear what is meant, in Class 5, by “use of AI in matters relating to health care”. This could be interpreted to mean health care that is provided within what is understood as the health care system. Understood more broadly, it could extend to health-related apps – for example, one of the many available AI-enabled sleep trackers, or an AI-enabled weight loss tool (to give just two examples). I suspect that what is intended is the former, even though, with health care in crisis and more people turning to alternate means to address their health issues, health-related AI technologies might well deserve to be categorized as high-impact. Class 6 involves the use of an AI system by a court or administrative body “in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.” In the first place, this is clearly not meant to apply to automated decision-making generally – it seems to be limited to judicial or quasi-judicial contexts. Class 6 must also be reconciled with s. 3 of AIDA, which provides that AIDA does not apply “with respect to a government institution as defined in s. 3 of the Privacy Act.” This includes the Immigration and Refugee Board, for example, as well as the Canadian Human Rights Commission, the Parole Board, and the Veterans Review and Appeal Board. Making sense of this, then, it would be the tools used by courts or tribunals and developed or deployed in the course of interprovincial or international trade and commerce that would be considered high impact. 
The example given in the Minister’s letter seems to support this – it is of an AI system that provides an assessment of “risk of recidivism based on historical data” (at p. 5). However, Class 6 is confusing because it identifies the context rather than the tools as high impact. Note that the previous classes address the use of AI “in matters relating to” the subject matter of the class, whereas Class 6 identifies actors – the use of AI by a court or tribunal. There is a different focus. Yet the same tools used by courts and tribunals might also be used by administrative bodies or agencies that do not hold hearings or that are otherwise excluded from the application of AIDA. For example, in Ewert v. Canada, the Supreme Court of Canada considered an appeal by a Métis man who challenged the use of recidivism-risk assessment tools by Correctional Services of Canada (to which AIDA would not apply according to s. 3). If this type of tool is high-risk, it is so whether it is used by Correctional Services or a court. This suggests that the framing of Class 6 needs some work. It should perhaps be reworded to identify tools or systems as high impact if they are used to determine the rights, entitlements or status of individuals. Class 7 addresses the use of an AI system to assist a peace officer “in the exercise and performance of their law enforcement powers, duties and functions”. Although “peace officer” receives the very broad interpretation found in the Criminal Code, that definition is modified in the AIDA by language that refers to the exercise of specific law enforcement powers. This should still capture the use of a broad range of AI-enabled tools and technologies. It is an interesting question whether AIDA might apply more fully to this class of AI systems (not just those developed in the course of interprovincial or international trade) as it might be considered to be rooted in the federal criminal law power.
These, then, are the different classes that are proposed initially to populate the Schedule if AIDA and its amendments are passed. The list is likely to spark debate, and there is certainly some wording that could be improved. And, while it provides much greater clarity as to what is proposed to be regulated, it is also evident that the extent to which obligations will apply will likely be further tailored in regulations to create sliding scales of obligation depending on the degree of risk posed by any given system.
AIDA Schedule: High-Impact Systems — Uses

1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.

2. The use of an artificial intelligence system in matters relating to
(a) the determination of whether to provide services to an individual;
(b) the determination of the type or cost of services to be provided to an individual; or
(c) the prioritization of the services to be provided to individuals.

3. The use of an artificial intelligence system to process biometric information in matters relating to
(a) the identification of an individual, other than in cases in which the biometric information is processed with the individual’s consent to authenticate their identity; or
(b) the assessment of an individual’s behaviour or state of mind.

4. The use of an artificial intelligence system in matters relating to
(a) the moderation of content that is found on an online communications platform, including a search engine or social media service; or
(b) the prioritization of the presentation of such content.

5. The use of an artificial intelligence system in matters relating to health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition “device” in section 2 of the Food and Drugs Act that is in relation to humans.

6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.

7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.
Tuesday, 11 April 2023 07:30
Comparing the UK's proposal for AI governance to Canada's AI bill
The government of the United Kingdom has published a consultation paper seeking input into its proposal for AI regulation. The paper is aptly titled A pro-innovation approach to AI regulation, since it restates that point insistently throughout the document. The UK proposal provides an interesting contrast to Canada’s AI governance bill currently before Parliament. Both Canada and the UK set out to regulate AI systems with the twin goals of supporting innovation on the one hand, and building trust in AI on the other. (Note here that the second goal is to build trust in AI, not to protect the public. Although the protection of the public is acknowledged as one way to build trust, there is a subtle distinction here). However, beyond these shared goals, the proposals are quite different. Canada’s approach in Part 3 of Bill C-27 (the Artificial Intelligence and Data Act (AIDA)) is to create a framework to regulate as yet undefined “high impact” AI. The definition of “high impact” as well as many other essential elements of the bill are left to be articulated in regulations. According to a recently published companion document to the AIDA, leaving so much of the detail to regulations is how the government proposes to keep the law ‘agile’ – i.e. capable of responding to a rapidly evolving technological context. The proposal would also provide some governance for anonymized data by imposing general requirements to document the use of anonymized personal information in AI innovation. The Minister of Innovation is made generally responsible for oversight and enforcement. For example, the AIDA gives the Minister of Innovation the authority (eventually) to impose stiff administrative monetary penalties on bad actors. The Canadian approach is similar to that in the EU AI Act in that it aims for a broad regulation of AI technologies, and it chooses legislation as the vehicle to do so. 
It is different in that the EU AI Act is far more detailed and prescriptive; the AIDA leaves the bulk of its actual legal requirements to be developed in regulations. The UK proposal is notably different from either of these approaches. Rather than create a new piece of legislation and/or a new regulatory authority, the UK proposes to set out five principles for responsible AI development and use. Existing regulators will be encouraged and, if necessary, specifically empowered to regulate AI according to these principles within their spheres of regulatory authority. Examples of regulators who will be engaged in this framework include the Information Commissioner’s Office, regulators for human rights, consumer protection, health care products and medical devices, and competition law. The UK scheme also accepts that there may need to be an entity within government that can perform some centralized support functions. These may include monitoring and evaluation, education and awareness, international interoperability, horizon scanning and gap analysis, and supporting testbeds and sandboxes. Because of the risk that some AI technologies or issues may fall through the cracks between existing regulatory schemes, the government anticipates that regulators will assist government in identifying gaps and proposing appropriate actions. These could include adapting the mandates of existing regulators or providing new legislative measures if necessary. Although Canada’s federal government has labelled its approach to AI regulation as ‘agile’, it is clear that the UK approach is much closer to the concept of agile regulation. Encouraging existing regulators to adapt the stated AI principles to their remit and to provide guidance on how they will actualize these principles will allow them to move quickly, so long as there are no obvious gaps in legal authority.
By contrast, even once passed, it will take at least two years for Canada’s AIDA to have its normative blanks filled in by regulations. And, even if regulations might be somewhat easier to update than statutes, guidance is even more responsive, giving regulators greater room to manoeuvre in a changing technological landscape. Embracing the precepts of agile regulation, the UK scheme emphasizes the need to gather data about the successes and failures of regulation itself in order to adapt as required. On the other hand, while empowering (and resourcing) existing regulators will have clear benefits in terms of agility, the regulatory gaps could well be important ones – with the governance of large language models such as ChatGPT as one example. While privacy regulators are beginning to flex their regulatory muscles in the direction of ChatGPT, data protection law will only address a subset of the issues raised by this rapidly evolving technology. In Canada, AIDA’s governance requirements will be specific to risk-based regulation of AI, and will apply to all those who design, develop or make AI systems available for use (unless of course they are explicitly excluded under one of the many actual and potential exceptions). Of course, the scheme in the AIDA may end up as more of a hybrid between the EU and the UK approaches in that the definition of “high impact” AI (to which the AIDA will apply) may be shaped not just by the degree of impact of the AI system at issue but also by the existence of other suitable regulatory frameworks. In other words, the companion document suggests that some existing regulators (health, consumer protection, human rights, financial institutions) have already taken steps to extend their remit to address the use of AI technologies within their spheres of competence. 
In this regard, the companion document speaks of “regulatory gaps that must be filled” by a statute such as AIDA as well as the need for the AIDA to integrate “seamlessly with existing Canadian legal frameworks”. Although it is still unclear whether the AIDA will serve only to fill regulatory gaps, or will provide two distinct layers of regulation in some cases, one of the criteria for identifying what constitutes a “high impact” system is “[t]he degree to which the risks are adequately regulated under another law”. The lack of clarity in the Canadian approach is one of its flaws. There is a certain attractiveness in the idea of a regulatory approach like that proposed by the UK – one that begins with existing regulators being both specifically directed and further enabled to address AI regulation within their areas of responsibility. As noted earlier, it seems far more agile than Canada’s rather clunky bill. Yet such an approach is much easier to adopt in a unitary state than in a federal system such as Canada’s. In Canada, some of the regulatory gaps are with respect to matters otherwise under provincial jurisdiction. Thus, it is not so simple in Canada to propose to empower and resource all implicated regulators, nor is it as easy to fill gaps once they are identified. These regulators and the gaps between them might fall under the jurisdiction of any one of 13 different governments. The UK acknowledges (and defers) its own challenges in this regard with respect to devolution at paragraph 113 of its white paper, where it states: “We will continue to consider any devolution impacts of AI regulation as the policy develops and in advance of any legislative action”. Instead, in the AIDA, Canada leverages its general trade and commerce power in an attempt to provide AI governance that is as comprehensive as possible.
It isn’t pretty (since it will not capture all AI innovation that might have impacts on people) but it is part of the reality of the federal state (or the state of federalism) in which we find ourselves.
Tuesday, 11 October 2022 03:43
Regulating AI in Canada - The Federal Government and the AIDA
This post is the fifth in a series on Canada’s proposed Artificial Intelligence and Data Act in Bill C-27. It considers the federal government’s constitutional authority to enact this law, along with other roles it might have played in regulating AI in Canada. Earlier posts include ones on the purpose and application of the AIDA; regulated activities; the narrow scope of the concepts of harm and bias in the AIDA; and oversight and protection.
AI is a transformative technology that has the power to do amazing things, but which also has the potential to cause considerable harm. There is a global clamour to regulate AI in order to mitigate potential negative effects. At the same time, AI is seen as a driver of innovation and economies. Canada’s federal government wants to support and nurture Canada’s thriving AI sector while at the same time ensuring that there is public trust in AI. Facing similar issues, the EU introduced a draft AI Act, which is currently undergoing public debate and discussion (and which itself was the product of considerable consultation). The US government has just proposed its Blueprint for an AI Bill of Rights, and has been developing policy frameworks for AI, including the National Institute of Standards and Technology (NIST) Risk Management Framework. The EU and the US approaches are markedly different. Interestingly, in the US (which, like Canada, is a federal state) there has been considerable activity at the state level on AI regulation. Serious questions for Canada include what to do about AI, how best to do it – and who should do it. In June 2022, the federal government introduced the proposed Artificial Intelligence and Data Act (AIDA) in Bill C-27. The AIDA takes the form of risk regulation; in other words, it is meant to anticipate and mitigate AI harms to the public. This is an ex ante approach; it is intended to address issues before they become problems. The AIDA does not provide personal remedies or recourses if anyone is harmed by AI – this is left for ex post regimes (ones that apply after harm has occurred). These will include existing recourses such as tort law (extracontractual civil liability in Quebec), and complaints to privacy, human rights or competition commissioners. I have addressed some of the many problems I see with the AIDA in earlier posts. Here, I try to unpack issues around the federal government’s constitutional authority to enact this bill. 
It is not so much that the federal government lacks jurisdiction (although it might); rather, how it understands its jurisdiction can shape the nature and substance of the bill it is proposing. Further, the federal government has acted without any consultation on the AIDA prior to its surprising insertion in Bill C-27. Although it promises consultation on the regulations that will follow, this does not make up for the lack of discussion around how we should identify and address the risks posed by AI. This rushed bill is also shaped by constitutional constraints – it is AI regulation with structural limitations that have not been explored or made explicit. Canada is a federal state, which means that the powers typically exercised by a nation state are divided between a federal government and regional governments. In theory, federalism allows for regional differences to thrive within an overarching framework. However, some digital technology issues (including data protection and AI) fit uneasily within Canada’s constitutional framework. In proposing the Consumer Privacy Protection Act part of Bill C-27, for example, the federal government appears to believe that it does not have the jurisdiction to address data protection as a matter of human rights – this belief has impacted the substance of the bill. In Canada, the federal government has jurisdiction over criminal law, trade and commerce, banking, navigation and shipping, as well as other areas where it makes more sense to have one set of rules than to have ten. The cross-cutting nature of AI, the international competition to define the rules of the game, and the federal government’s desire to take a consistent national approach to its regulation are all factors that motivated the inclusion of the AIDA in Bill C-27.
The Bill’s preamble states that “the design, development and deployment of artificial intelligence systems across provincial and international borders should be consistent with national and international standards to protect individuals from potential harm”. Since we do not yet have national or international standards, the law will also enable the creation (and imposition) of standards through regulation. The preamble’s reference to the crossing of borders signals both that the federal government is keenly aware of its constitutional limitations in this area and that it intends to base its jurisdiction on the interprovincial and international dimensions of AI. The other elements of Bill C-27 rely on the federal general trade and commerce power – this follows the approach taken in the Personal Information Protection and Electronic Documents Act (PIPEDA), which is reformed by the first two parts of C-27. There are indications that trade and commerce is also relevant to the AIDA. Section 4 of the AIDA refers to the goal of regulating “international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements applicable across Canada, for the design, development and use of those systems.” Yet the general trade and commerce power is an uneasy fit for the AIDA. The Supreme Court of Canada has laid down rules for the exercise of this power, and one of these is that it should not be used to regulate a single industry; a legislative scheme should regulate trade as a whole. The Minister of Industry, in discussing Canada’s AI strategy has stated: Artificial intelligence is a key part of our government’s plan to make our economy stronger than ever. The second phase of the Pan-Canadian Artificial Intelligence Strategy will help harness the full potential of AI to benefit Canadians and accelerate trustworthy technology development, while fostering diversity and cooperation across the AI domain. 
This collaborative effort will bring together the knowledge and expertise necessary to solidify Canada as a global leader in artificial intelligence and machine learning. Clearly, the Minister is casting the role of AI as an overall economic transformer rather than a discrete industry. Nevertheless, although it might be argued that AI is a technology that cuts across all sectors of the economy, the AIDA applies predominantly to its design and development stages, which makes it look as if it targets a particular industry. Further, although PIPEDA (and the CPPA in the first Part of Bill C-27) is linked to trade and commerce through the transactional exchange of personal data – typically when it is collected from individuals in the course of commercial activity – the AIDA is different. Its regulatory requirements are meant to apply before any commercial activity takes place – at the design and development stage. This is worth pausing over because design and development stages may be non-commercial (in university-based research, for example) or may be purely intra-provincial. As a result, the need to comply with a law at the design and development stage, when that law is premised on interprovincial or international commercial activity, may only be discovered well after commercialization becomes a reality. Arguably, AI might also be considered a matter of ‘national concern’ under the federal government’s residual peace, order and good government power. Matters of national concern that would fall under this power would be ones that did not exist at the time of confederation. The problem with addressing AI in this way is that it is simply not obvious that provinces could not enact legislation to govern AI – as many states have begun to do in the US. Another possible constitutional basis is the federal criminal law power.
This is used, for example, in the regulation of certain matters relating to health such as tobacco, food and drugs, medical devices and controlled substances. The Supreme Court of Canada has ruled that this power “is broad, and is circumscribed only by the requirements that the legislation must contain a prohibition accompanied by a penal sanction and must be directed at a legitimate public health evil”. The AIDA contains some prohibitions and provides for administrative monetary penalties (AMPs) as well as offences. Because the AIDA focuses on “high impact” AI systems, there is an argument that it is meant to target and address those systems that have the potential to cause the most harm to health or safety. (Of course, the bill does not define “high impact” systems, so this is only conjecture.) Yet, although AMPs are available in cases of egregious non-compliance with the AIDA’s requirements, AMPs are not criminal sanctions; they are “a civil (rather than quasi-criminal) mechanism for enforcing compliance with regulatory requirements”, as noted in a report from the Ontario Attorney-General. That leaves a smattering of offences such as obstructing the work of the Minister or of auditors; knowingly designing, developing or using an AI system where the data were obtained as a result of an offence under another Act; being reckless as to whether the use of an AI system made available by the accused is likely to cause harm to an individual, and using AI intentionally to defraud the public and cause substantial economic loss to an individual. Certainly, such offences are criminal in nature and could be supported by the federal criminal law power. Yet they are easily severable from the rest of the statute. For the most part, the AIDA focuses on “establishing common requirements applicable across Canada, for the design, development and use of [AI] systems” (AIDA, s. 4). The provinces have not been falling over themselves to regulate AI, although neither have they been entirely inactive.
Ontario, for example, has been developing a framework for the public sector use of AI, and Quebec has enacted some provisions relating to automated decision-making systems in its new data protection law. Nevertheless, these steps are clearly not enough to satisfy a federal government anxious to show leadership in this area. It is thus unsurprising that Canada’s federal government has introduced legislation to regulate AI. What is surprising is that it has done so without consultation – either regarding the form of the intervention or the substance. We have yet to have an informed national conversation about AI. Further, legislation of this kind was only one option. The government could have consulted and convened experts to develop something along the lines of the US’s NIST Framework that could be adopted as a common standard/approach across jurisdictions in Canada. A Canadian framework could have been supported by the considerable work on standards already ongoing. Such an approach could have involved the creation of an agency under the authority of a properly-empowered Data Commissioner to foster co-operation in the development of national standards. This could have supported the provinces in the harmonized regulation of AI. Instead, the government has chosen to regulate AI itself through a clumsy bill that staggers uneasily between constitutional heads of power, and that leaves its normative core to be crafted in a raft of regulations that may take years to develop. It also leaves it open to the first company to be hit with an AMP to challenge the constitutionality of the framework as a whole.
Published in
Privacy
Monday, 29 August 2022 08:05
Oversight and Enforcement Under Canada's Proposed AI and Data Act
The Artificial Intelligence and Data Act (AIDA) in Bill C-27 will create new obligations for those responsible for AI systems (particularly high impact systems), as well as those who process or make available anonymized data for use in AI systems. In any regulatory scheme that imposes obligations, oversight and enforcement are key issues. A long-standing critique of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been that it is relatively toothless. This is addressed in the first part of Bill C-27, which reforms the data protection law to provide a suite of new enforcement powers that include order-making powers for the Privacy Commissioner and the ability to impose stiff administrative monetary penalties (AMPs). The AIDA comes with ‘teeth’ as well, although these teeth seem set within a rather fragile jaw. I will begin by identifying the oversight and enforcement powers (the teeth) and will then look at the agent of oversight and enforcement (the jaw). The table below sets out the main obligations accompanied by specific compliance measures. There is also the possibility that any breach of these obligations might be treated as either a violation or offence, although the details of these require elaboration in as-yet-to-be-drafted regulations.
Compliance with orders made by the Minister is mandatory (s. 19) and there is a procedure for them to become enforceable as orders of the Federal Court. Although the Minister is subject to confidentiality requirements, they may disclose any information they obtain through the exercise of the above powers to certain entities if they have reasonable grounds to believe that a person carrying out a regulated activity “has contravened, or is likely to contravene, another Act of Parliament or a provincial legislature” (s. 26(1)). Those entities include the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, the Canadian Radio-television and Telecommunications Commission, their provincial analogues, or any other person prescribed by regulation. An organization may therefore be in violation of statutes other than AIDA and may be subject to investigation and penalties under those laws. The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints. The AIDA sets up the Minister as the actor responsible for oversight and enforcement, but the Minister may delegate any or all of their oversight powers to the new Artificial Intelligence and Data Commissioner who is created by s. 33. The Data Commissioner is described in the AIDA as “a senior official of the department over which the Minister presides”. They are not remotely independent. Their role is “to assist the Minister” responsible for the AIDA (most likely the Minister of Industry), and they will also therefore work in the Ministry responsible for supporting the Canadian AI industry. There is essentially no real regulator under the AIDA. Instead, oversight and enforcement are provided by the same group that drafted the law and that will draft the regulations. 
It is not a great look, and it certainly goes against the advice of the OECD on AI governance, as Mardi Wentzel has pointed out. The role of Data Commissioner was first floated in the 2019 Mandate Letter to the Minister of Industry, which provided that the Minister would: “create new regulations for large digital companies to better protect people’s personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations.” The 2021 Federal Budget provided funding for the Data Commissioner, and referred to the role of this Commissioner as to “inform government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.” In comparison with these somewhat grander ideas, the new AI and Data Commissioner role is – well – smaller than the title. It is a bit like telling your kids you’re getting them a deluxe bouncy castle for their birthday party and then on the big day tossing a couple of couch cushions on the floor instead. To perhaps add a gloss of some ‘independent’ input into the administration of the statute, the AIDA provides for the creation of an advisory committee (s. 35) that will provide the Minister with “advice on any matters related to this Part”. However, this too is a bit of a throwaway. Neither the AIDA nor any anticipated regulations will provide for any particular composition of the advisory committee, for the appointment of a chair with a fixed term, or for any reports by the committee on its advice or activities. It is the Minister who may choose to publish advice he receives from the committee on a publicly available website (s. 35(2)). The AIDA also provides for enforcement, which can take one of two routes. Well, one of three routes. One route is to do nothing – after all, the Minister is also responsible for supporting the AI industry in Canada – so this cannot be ruled out.
A second option will be to treat a breach of any of the obligations specified in the as-yet undrafted regulations as a “violation” and impose an administrative monetary penalty (AMP). A third option is to treat a breach as an “offence” and proceed by way of prosecution (s. 30). A choice must be made between proceeding via the AMP or the offence route (s. 29(3)). Providing false information and obstruction are distinct offences (s. 30(2)). There are also separate offences in ss. 38 and 39 relating to the use of illegally obtained data and knowingly or recklessly making an AI system available for use that is likely to cause harm. Administrative monetary penalties under Part 1 of Bill C-27 (relating to data protection) are quite steep. However, the necessary details regarding the AMPs that will be available for breach of the AIDA are to be set out in regulations that have yet to be drafted (s. 29(4)(d)). All that the AIDA really tells us about these AMPs is that their purpose is “to promote compliance with this Part and not to punish” (s. 29(2)). Note the provision at the bottom of the list of regulation-making powers for AMPs set out in s. 29(4). It allows the Minister to make regulations “respecting the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme.” There is a good chance that the AMPs will (eventually) be administered by the new Personal Information and Data Tribunal, which is created in Part 2 of Bill C-27. This, at least, will provide some separation between the Minister and the imposition of financial penalties. If this is the plan, though, the draft law should say so. It is clear that not all breaches of the obligations in the AIDA will be ones for which AMPs are available. Regulations will specify the breach of which provisions of the AIDA or its regulations will constitute a violation (s. 29(4)(a)).
The regulations will also indicate whether the breach of the particular obligation is classified as minor, serious or very serious (s. 29(4)(b)). The regulations will also set out how any such proceedings will unfold. As-yet undrafted regulations will also specify the amounts or ranges of AMPs, and factors to take into account in imposing them. This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after-hours discussion of what enforcement under the AIDA should look like. Clearly, the goal is to be ‘agile’, but ‘agile’ should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade. There are instances of other federal laws left incomplete by never-drafted regulations. For example, we are still waiting for the private right of action provided for in Canada’s Anti-Spam Law, which cannot come into effect until the necessary regulations are drafted. A cynic might even say that failing to draft essential regulations is a good way to check the “enact legislation on this issue” box on the to-do list, without actually changing the status quo.
Monday, 22 August 2022 06:51
The unduly narrow scope for "harm" and "biased output" under the AIDA
This is the third in my series of posts on the Artificial Intelligence and Data Act (AIDA) found in Bill C-27, which is part of a longer series on Bill C-27 generally. Earlier posts on the AIDA have considered its purpose and application, and regulated activities. This post looks at the harms that the AIDA is designed to address. The proposed Artificial Intelligence and Data Act (AIDA), which is the third part of Bill C-27, sets out to regulate ‘high-impact’ AI systems. The concept of ‘harm’ is clearly important to this framework. Section 4(b) of the AIDA states that a purpose of the legislation is “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”. Under the AIDA, persons responsible for high-impact AI systems have an obligation to identify, assess, and mitigate risks of harm or biased output (s. 8). Those persons must also notify the Minister “as soon as feasible” if a system for which they are responsible “results or is likely to result in material harm”. There are also a number of oversight and enforcement functions that are triggered by harm or a risk of harm. For example, if the Minister has reasonable grounds to believe that a system may result in harm or biased output, he can demand the production of certain records (s. 14). If there is a serious risk of imminent harm, the Minister may order a person responsible to cease using a high impact system (s. 17). The Minister is also empowered to make public certain information about a system where he believes that there is a serious risk of imminent harm and the publication of the information is essential to preventing it (s. 28). Elevated levels of harm are also a trigger for the offence in s. 
39, which involves “knowing or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property”. ‘Harm’ is defined in s. 5(1) to mean: (a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual. I have emphasized the term “individual” in this definition because it places an important limit on the scope of the AIDA. First, it is unlikely that the term ‘individual’ includes a corporation. Typically, the word ‘person’ is considered to include corporations, and the word ‘person’ is used in this sense in the AIDA. This suggests that “individual” is meant to have a different meaning. The federal Interpretation Act is silent on the issue. It is a fair interpretation of the definition of ‘harm’ that “individual” is not the same as “person”, and means an individual (human) person. The French version uses the term “individu”, and not “personne”. The harms contemplated by this legislation are therefore to individuals and not to corporations. Defining harm in terms of individuals has other ramifications. The AIDA defines high-impact AI systems in terms of their impacts on individuals. Importantly, this excludes groups and communities. It also very significantly focuses on what are typically considered quantifiable harms, and uses language that suggests quantifiability (economic loss, damage to property, physical or psychological harm). Some important harms may be difficult to establish or to quantify.
For example, class action lawsuits relating to significant data breaches have begun to wash up on the beach of lost causes due to the impossibility of proving material loss either because, although thousands may have been impacted, the individual losses are impossible to quantify, or because it is impossible to prove a causal link between very real identity theft and that particular data breach. Consider an AI system that manipulates public opinion through an algorithm that drives content to individuals based on its shock value rather than its truth. Say this happens during a pandemic and it convinces people that they should not get vaccinated or take other recommended public health measures. Say some people die because they were misled in this way. Say other people die because they were exposed to infected people who were misled in this way. How does one prove the causal link between the physical harm of injury or death of an individual and the algorithm? What if there is an algorithm that manipulates voter sentiment in a way that changes the outcome of an election? What is the quantifiable economic loss or psychological harm to any individual? How could causation be demonstrated? The harm, once again, is collective. The EU AI Act has also been criticized for focusing on individual harm, but the wording of that law is still broader than that in the AIDA. The EU AI Act refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights of persons”. This at least introduces a more collective dimension, and it avoids the emphasis on quantifiability. The federal government’s own Directive on Automated Decision-Making (DADM) which is meant to guide the development of AI used in public sector automated decision systems (ADS) also takes a broader approach to impact. 
In assessing the potential impact of an ADS, the DADM takes into account: “the rights of individuals or communities”, “the health or well-being of individuals or communities”, “the economic interests of individuals, entities, or communities”, and “the ongoing sustainability of an ecosystem”. With its excessive focus on individuals, the AIDA is simply tone deaf to the growing global understanding of collective harm caused by the use of human-derived data in AI systems. One response of the government might be to point out that the AIDA is also meant to apply to “biased output”. Biased output is defined in the AIDA as: content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (s. 5(1)) [my emphasis] The argument here will be that the AIDA will also capture discriminatory biases in AI. However, I have underlined the part of this definition that once again returns the focus to individuals, rather than groups. It can be very hard for an individual to demonstrate that a particular decision discriminated against them (especially if the algorithm is obscure). In any event, biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. 
The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination. It is also important to note that the definition of “harm” does not include “biased output”, and while the terms are used in conjunction in some cases (for example, in s. 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute, but not others, a judge interpreting the statute might presume that when only one of the terms is used, then it is only that term that is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high-impact system to cease using it or making it available if there is a “serious risk of imminent harm”. Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm”. In both cases, the defined term ‘harm’ is used, but not ‘biased output’. The goals of the AIDA to protect against harmful AI are both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.
Monday, 15 August 2022 08:27
Regulated Activities and Data Under Bill C-27's AI and Data Act
This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance. Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1): (a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system; (b) designing, developing or making available for use an artificial intelligence system or managing its operations. These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI, and the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies the actor in the left-hand column, and the obligation in the column on the right.
For most of these provisions, the details of what is actually required by the identified actor will depend upon regulations that have yet to be drafted. A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms: 5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation. Thus, the obligations in ss. 7, 8, 9, 10 and 11 apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that with the exception of sections 6 and 7, the obligations in the AIDA also apply only to ‘high impact’ systems. The definition of a high-impact system has been left to regulations and is as yet unknown. Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purposes of development of an AI system.
All of this must also be in the course of international or interprovincial trade and commerce. Note that the first of these two purposes involves data “related to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization. Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA only applies to a ‘regulated activity’ which is one carried out in the course of international and inter-provincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain.
Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the treatment of anonymized data between the CPPA and the AIDA creates a significant governance gap in terms of scope, substance and process. On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides: 2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information. The default rule for ‘de-identified’ personal information is that it is still personal information. However, the CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly-worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of AIDA nor particularly relevant to it. Since the legislator is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s.
38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data (from which individuals can be re-identified) are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.
Monday, 08 August 2022 07:58
Canada's Proposed AI & Data Act - Purpose and Application
This is the first of a series of posts on the part of Bill C-27 that would enact a new Artificial Intelligence and Data Act (AIDA) in Canada. Previous posts have considered the part of the bill that would reform Canada’s private sector data protection law. This series on the AIDA begins with an overview of its purpose and application. Bill C-27 contains the text of three proposed laws. The first is a revamped private sector data protection law. The second would establish a new Data Tribunal that is assigned a role under the data protection law. The third is a new Artificial Intelligence and Data Act (AIDA). While the two other components were present in the bill’s failed predecessor Bill C-11, the AIDA is new – and for many came as a bit of a surprise. The common thread, of course, is the government’s Digital Charter, which set out a series of commitments for building trust in the digital and data economy. The preamble to Bill C-27, as a whole, addresses both AI and data protection concerns. Where it addresses AI regulation directly, it identifies the need to harmonize with national and international standards for the development and deployment of AI, and the importance of ensuring that AI systems uphold Canadian values in line with the principles of international human rights law. The preamble also signals a need for a more agile regulatory framework – something that might go towards justifying why so much of the substance of AI governance in the AIDA has been left to the development of regulations. Finally, the preamble speaks of a need “to foster an environment in which Canadians can seize the benefits of the digital and data-driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy.” This, then, frames how AI regulation (and data protection) will work in Canada – an attempt to walk a tightrope between enabling fast-paced innovation and protecting norms, values and privacy rights.
Regulating the digital economy has posed some constitutional (division of powers) challenges for the federal government, and these challenges are evident in the AIDA, particularly with respect to the scope of application of the law. Section 4 sets out the dual purposes of the legislation: (a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and (b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests. By focusing on international and interprovincial trade and commerce, the government asserts its general trade and commerce jurisdiction without treading on the toes of the provinces, which remain responsible for intra-provincial activities. Yet this means that there will be important gaps in AI regulation. Until the provinces act, these gaps will cover purely intra-provincial AI systems, whether in the public or private sector, and, to a large extent, AI in the not-for-profit sector. However, this could get complicated, since the AIDA sets out obligations for a range of actors, some of which could include international or interprovincial providers of AI systems to provincial governments. The second purpose set out in s. 4 suggests that, at least when it comes to AI systems that may result in serious harm, the federal jurisdiction over criminal law may be invoked. The AIDA creates a series of offences that could be supported by this power – yet ultimately the offences relate to failures to meet the obligations that arise from being engaged in a ‘regulated activity’, which takes one back to activities carried out in the course of international or interprovincial trade and commerce. The federal trade and commerce power thus remains the backbone of this bill. 
Although there would be no constitutional difficulties with the federal government exerting jurisdiction over its own activities, the AIDA specifically excludes its application to federal government institutions, as defined in the Privacy Act. Significantly, it also does not apply to products, services or activities that are under the control of the Minister of National Defence, the Canadian Security Intelligence Service, the Communications Security Establishment or any other person who is responsible for a federal or provincial department or agency that is prescribed by regulation. This means that the AIDA would not apply even to those AI systems developed by the private sector for any of the listed actors. The exclusions are significant, particularly since the AIDA seems to be focussed on the prevention of harm to individuals (more on this in a forthcoming post) and the parties excluded are ones that might well develop or commission the development of AI that could (seriously) adversely impact individuals. It is possible that the government intends to introduce or rely upon other governance mechanisms to ensure that AI and personal data are not abused in these contexts. Or not. In contrast, the EU’s AI Regulation addresses the perceived need for latitude when it comes to national defence via an exception for “AI systems developed or used exclusively for military purposes” [my emphasis]. This exception is nowhere near as broad as that in the AIDA, which excludes all products, services or activities under the control of the Minister of National Defence. Note that the Department of National Defence (DND) made headlines in 2020 when it contracted for an AI application to assist in hiring; it also made headlines in 2021 over an aborted psyops campaign in Canada. There is no reason why non-military DND uses of AI should not be subject to governance. 
The government might justify excluding the federal public sector from governance under the AIDA on the basis that it is already governed by the Directive on Automated Decision-Making. This Directive applies to automated decision-making systems developed and used by the federal government, although there are numerous gaps in its application. For example, it does not apply to systems adopted before it took effect, it applies only to automated decision systems and not to other AI systems, and it currently does not apply to systems used internally (e.g., to govern public sector employees). It also does not have the enforcement measures that the AIDA has, and, since government systems could well be high-impact, this seems like a gap in governance. Consider in this respect the much-criticized ArriveCan app, designed for COVID-19 border screening and now contemplated for much broader use at border entries into Canada. The app has been criticized for its lack of transparency, and for the ‘glitch’ that sent inexplicable quarantine orders to potentially thousands of users. The ArriveCan app went through the DADM process, but clearly this is not enough to address governance issues. Another important limit on the application of the AIDA is that most of its obligations apply only to “high impact systems”. This term is defined in the legislation as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.” This essentially says that this crucial term in the bill will mean what cabinet decides it will mean at some future date. It is difficult to fully assess the significance or impact of this statute without any sense of how this term will be defined. The only obligations that appear to apply more generally are the obligation in s. 6 regarding the anonymization of data used or intended for use in AI systems, and the obligation in s. 10 to keep records regarding the anonymization measures taken. 
By contrast, the EU’s AI Regulation applies to all AI systems. These fall into one of four categories: unacceptable risk, high-risk, limited risk, and low/minimal risk. Those systems that fall into the first category are banned. Those in the high-risk category are subject to the regulation’s most stringent requirements. Limited-risk AI systems need only meet certain transparency requirements and low-risk AI is essentially unregulated. Note that Canada’s approach to ‘agile’ regulation is to address only one category of AI systems – those that fall into the as-yet undefined category of high ‘impact’. It is unclear whether this is agile or supine. It is also not clear what importance should be given to the choice of the word ‘impact’ rather than ‘risk’. However, it should be noted that risk refers not just to actual but to potential harm, whereas ‘impact’ seems to suggest actual harm. Although one should not necessarily read too much into this choice of language, the fact that this important element is left to regulations means that Parliament will be asked to enact a law without understanding its full scope of application. This seems like a problem.