Teresa Scassa - Blog

 

Ontario has just released its Beta principles for the ethical use of AI and data enhanced technologies in Ontario. These replace the earlier Alpha principles, and are revised based upon commentary and feedback on the Alpha version. Note that these principles are designed for use in relation to AI technologies adopted for the Ontario public sector.

Below you will find a comparison table I created to provide a quick glance at what has been changed since the previous version. I have flagged significant additions with italics in the column for the Beta version. I have also flagged some words or concepts that have disappeared in the Beta version by using strikethrough in the column with the Alpha version. I have focused on the principles, and have not flagged changes to the “Why it Matters” section of each principle.

One important change to note is that the Beta version now refers not just to technologies used to make decisions, but also technologies used to assist in decision-making.

 

 

Principles for Ethical Use [Alpha]

Principles for Ethical Use [Beta]

The alpha Principles for Ethical Use set out six points to align the use of data-driven technologies within government processes, programs and services with ethical considerations and values. Our team has undertaken extensive jurisdictional scans of ethical principles across the world, in particular the US, the European Union and major research consortiums. The Ontario “alpha” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

These Principles for Ethical Use set out six points to align the use of data enhanced technologies within government processes, programs and services with ethical considerations and values.

 

The Trustworthy AI team within Ontario’s Digital Service has undertaken extensive jurisdictional scans of ethical principles across the world, in particular New Zealand, the United States, the European Union and major research consortiums.

 

The Ontario “beta” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

We’re in the early days of bringing these principles to life. We encourage you to adopt as much of the principles as possible, and to share your feedback with us. You can email the team for more details.

 

You can also check out the Transparency Guidelines (GitHub).

1. Transparent and Explainable

 

There must be transparent and responsible disclosure around data-driven technology like Artificial Intelligence (AI), automated decisions and machine learning (ML) systems to ensure that people understand outcomes and can discuss, challenge and improve them.

 

 

Where automated decision making has been used to make individualized and automated decisions about humans, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject should be available.

 

Why it Matters

 

There is no way to hold data-driven technologies accountable, particularly as they impact various historically disadvantaged groups, if the public is unaware of the algorithms and automated decisions the government is making. Transparency of use must be accompanied with plain language explanations so that the public, and not just the technical or research community, has access to them. For more on this, please consult the Transparency Guidelines.

 

1. Transparent and explainable

 

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

 

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

 

Why it matters

 

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

 

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

 

For more on this, please consult the Transparency Guidelines.

 

2. Good and Fair

 

Data-driven technologies should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards to ensure a fair and just society.

 

Designers, policy makers and developers should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the lifecycle of use. The definitions of good and fair are intentionally vague to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

2. Good and fair

 

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

3. Safe

 

Data-driven technologies like AI and ML systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers and developers should implement mechanisms and safeguards, such as capacity for human determination and complete halt of the system operations, that are appropriate to the context and predetermined at initial deployment.

 


Why it matters

Creating safe data-driven technologies means embedding safeguards throughout the life cycle of the deployment of the algorithmic system. Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. Despite our best efforts there will be unexpected outcomes and impacts. Systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are no longer agreeable that a human can adapt, correct or improve the system.

3. Safe

 

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle.

 

Why it matters

Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.

 

Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

 

4. Accountable and Responsible

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the above principles. Algorithmic systems should be periodically peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Where AI is used to make decisions about individuals there needs to be a process for redress to better understand how a given decision was made.

 

Why it matters

 

In order for there to be accountability for decisions that are made by an AI or ML system a person, group of people or organization needs to be identified prior to deployment. This ensures that if redress is needed there is a preidentified entity that is responsible and can be held accountable for the outcomes of the algorithmic systems.

 

4. Accountable and responsible

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted.

 

Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Why it matters

 

Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.

 

While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.

 

Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system.

 

5. Human Centric

 

The processes and outcomes behind an algorithm should always be developed with human users as the main consideration. Human centered AI should reflect the information, goals, and constraints that a human decision-maker weighs when arriving at a decision.

 

Keeping human users at the center entails evaluating any outcomes (both direct and indirect) that might affect them due to the use of the algorithm. Contingencies for unintended outcomes need to be in place as well, including removing the algorithms entirely or ending their application.

 

Why it matters

 

Placing the focus on the human user ensures that the outcomes do not cause adverse effects to users in the process of creating additional efficiencies.

 

In addition, human-centered design is needed to ensure that you are able to keep a human in the loop when ensuring the safe operation of an algorithmic system. Developing algorithmic systems with the user in mind ensures better societal and economic outcomes from the data-driven technologies.

 

5. Human centric

 

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

 

Why it matters

 

Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.

 

Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.

 

Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

 

6. Sensible and Appropriate

 

Data-driven technologies like AI or ML shall be developed with consideration of how it may apply to specific sectors or to individual cases and should align with the Canadian Charter of Human Rights and Freedoms and with Federal and Provincial AI Ethical Use.

 

Other by-products of deploying data-driven technologies, such as environmental, sustainability, and societal impacts, should be considered as they apply to specific sectors and use cases and applicable frameworks, best practices or laws.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector and user. As a result, while the above principles are a good starting point for developing ethical data-driven technologies it is important that additional considerations be given to the specific sectors and environments to which the algorithm is applied.

 

Experts in both technology and ethics should be consulted in development of data-driven technologies such as AI to guard against any adverse effects (including societal, environmental and other long-term effects).

6. Sensible and appropriate

 

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

 

Encouraging sector specific guidance also helps promote a culture of shared ethical responsibilities and a dialogue around the important issues raised by data enhanced systems.

 

Published in Privacy

 

The following is my submission to the Ontario government's Consultation on Developing Ontario's Artificial Intelligence (AI) Framework. The Consultation closed on June 4, 2021.


Thank you for the opportunity to provide input on the development of trustworthy AI in Ontario. Due to time pressures my comments will be relatively brief. Hopefully there will be other opportunities to engage with this process.

Developing a framework for the governance of AI in Ontario is important, and it is good to see that this work is underway in Ontario. I note that the current consultation focuses on AI for use in the public sector. Similar work needs to be done for the governance of AI that will be developed and deployed in the private sector context. I hope that this work is also being contemplated.

As I am sure you know, the federal government has already developed a Directive on Automated Decision-Making (DADM) which applies to a broad range of uses of AI in the federal public sector context. It comes with an algorithmic impact assessment tool. Although I appreciate the sensitivities around sovereignty within a province’s own spheres of competence, there is much to be said for more unified national approaches to many regulatory issues – particularly in the digital context. One option for Ontario is to use the DADM as a starting point for its approach to public sector AI governance, and to assess and adapt it for use in Ontario. This would allow Ontario to take advantage of an approach that is already well developed, and into which a considerable amount of thoughtful work has been invested. It is both unnecessary and counterproductive to reinvent the wheel. Serious consideration should be given – as a matter of public policy – to adopting, where possible, harmonized approaches to the governance of digital technologies.

At the same time, I note that the consultation document suggests that Ontario might go beyond a simple internal directive and actually provide an accountability framework that would give individuals direct recourse in cases where government does not meet whatever requirements are established. A public accountability framework is lacking in the federal DADM, and would be most welcome in Ontario.

The proposed public sector framework for Ontario is organized around three broad principles: No AI in secret; AI use Ontarians can trust; and AI that serves all Ontarians. These are good, if broad, principles. The real impact of this governance initiative will, of course, lie in its detail. However, it is encouraging to see a commitment to transparency, openness and public participation. It is also important that the government recognize the potential for AI to replicate or exacerbate existing inequities and to commit to addressing equity and inclusion.

My comments will address each of the principles in turn.

1. No AI in Secret

The consultation document states that “for people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists. As a result, the government needs to be transparent about how, when, and why these tools are used so that people have a right to address potential biases created by the AI algorithms.” I agree. A public register of AI tools in use by government, along with access to details about these tools would be most welcome.

I do question, however, what is meant by “government” in this statement. In other words, I would be very interested to know more about the scope of what is being proposed. It was only a short while ago that we learned, for example, that police services in Ontario had made use of Clearview AI’s controversial facial recognition database. In some cases, it seems that senior ranks of the police may not even have been aware of this use. Ontario’s Privacy Commissioner at the time expressed concerns over this practice. This case raises important questions regarding the scope of the proposed commitment to transparency and AI. The first is whether police services will be included under government AI governance commitments – and if they are not, why not, and what measures will be put in place to govern AI used in the law enforcement context. It is also important to know what other agencies or departments will be excluded. A further question is whether AI-related commitments at the provincial level will be extended to municipalities, or whether they are intended only for use in the provincial public sector. Another question is whether the principles will only apply to AI developed within government or commissioned by government. In other words, will any law or guidance developed also apply to the myriad services that might otherwise be available to government? For example, will new rules apply to the decision by a department to use the services of a human resources firm that makes use of AI in its recruitment processes? Will they apply to workplace monitoring software and productivity analytics services that might be introduced in the public service? On this latter point, I note it is unclear whether the commitment to AI governance relates only to AI that affects the general population as opposed to AI used to manage government employees. These issues of application and scope of any proposed governance framework are important.

2. Use Ontarians can Trust

The second guiding principle is “Use Ontarians can Trust”. The commitment is framed in these terms: “People building, procuring, and using AI have a responsibility to the people of Ontario that AI never puts people at risk and that proper guardrails are in place before the technology is used by the government.”

One of the challenges here is that there are so many types of AI and so many contexts in which AI can be used. Risk is inevitable – and some of the risks may be of complex harms. In some cases, these harms may be difficult to foresee. The traffic predicting algorithm used as an illustration in this part of the consultation document has fairly clear-cut risk considerations. The main issue will be whether such an algorithm reduces the risk of serious accidents, for example. The risks from an algorithm that determines who is or is not eligible to receive social assistance benefits, on the other hand, will be much more complex. One significant risk will be that people who need the benefit will not receive it. Other risks might include the exacerbation of existing inequalities, or even greater alienation in the face of a seemingly impersonal system. These risks are serious but some are intangible – they might be ignored, dismissed or underestimated. Virginia Eubanks and others have observed that experimentation with the use of AI in government tends to take place in the context of programs and services for the least empowered members of society. This is troubling. The concept of risk must be robust and multifaceted. Decisions about where to deploy AI must be equitable and unbiased – not just the AI.

One of the initial recommendations in this section is to propose “ways to update Ontario’s rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine when which rules apply.” I agree that work needs to be done to update Ontario’s legal frameworks in order to better address the challenges of AI. Data protection and human rights are two obvious areas where legislative reform may be necessary. It will also be important for those reforms to be accompanied by the necessary resources to handle the complex cases likely to be generated by AI. If legal protections and processes are enhanced without additional resources, the changes will be meaningless. It may also be necessary to consider establishing a regulatory authority for AI that could provide the governance, oversight and accountability specifically required by AI systems, and that could develop the necessary expertise. Challenging algorithmic decision-making will not be easy for ordinary Ontarians. They will need expert assistance and guidance for any challenge that goes beyond asking for an explanation or a reconsideration of the decision. A properly-resourced oversight body can provide this assistance and can develop necessary expertise to assist those who develop and implement AI.

3. AI that Serves all Ontarians

The overall goal for this commitment is to ensure that “Government use of AI reflects and protects the rights and values of Ontarians.” The values that are identified are equity and inclusion, as well as accountability.

As noted above, there is a tendency to deploy AI systems in ways that impact the most disadvantaged. AI systems are in use in the carceral context, they are used for the administration of social benefits programs, and so on. The very choices as to where to start experimenting with AI are ones that have significant impact. In these contexts, the risks of harm may be quite significant, but the populations impacted may feel most disempowered when it comes to challenging decisions or seeking recourse. This part of the consultation document suggests as a potential action the need to “Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at an extremely high risk.” While there likely are contexts in which a risk-based approach would warrant an early ban on AI until the risks can be properly addressed, beyond bans, there should also be deliberation about how to use AI in contexts in which individuals are vulnerable. This might mean not rushing to experiment with AI in these areas until we have built a more robust accountability and oversight framework. It may also mean going slowly in certain areas – using only AI-assisted decision making, for example, and carefully studying and evaluating particular use cases.

 

In closing I would like to note as well the very thoughtful and thorough work being done by the Law Commission of Ontario on AI and Governance, which has a particular focus on the public sector. I hope that any policy development being done in this area will make good use of the Law Commission’s work.
