Teresa Scassa - Blog


Note: The following is my response to the call for submissions on the recommendations following the third review of Canada’s Directive on Automated Decision-Making. Comments are due by June 30, 2022. If you are interested in commenting, please consult the Review Report and the Summary of Key Issues and Proposed Amendments. Comments can be sent by e-mail to the address provided in the call for submissions.

 

The federal Directive on Automated Decision-Making (DADM) and its accompanying Algorithmic Impact Assessment tool (AIA) are designed to provide governance for the adoption and deployment of automated decision systems (ADS) by Canada’s federal government. Governments are increasingly looking to ADS in order to speed up routine decision-making processes and to achieve greater consistency in decision-making. At the same time, there are reasons to be cautious. Automated decision systems carry risks of incorporating and replicating discriminatory bias. They may also lack the transparency required of government decision-making, particularly where important rights or interests are at stake. The DADM, which has been in effect since April 2019 (with compliance mandatory no later than April 2020), sets out a series of obligations related to the design and deployment of automated decision-making systems. The extent of the obligations depends upon a risk assessment, and the AIA is the tool by which the level of risk of the system is assessed.
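Because the DADM’s obligations scale with the AIA result, the score-to-level mapping is the hinge of the scheme. The sketch below is an illustration only: the function name and the 25/50/75% bands are my assumptions about how such a mapping might work, not the official scoring rules (the AIA does, however, sort systems into four impact levels, from Level I through Level IV).

```python
def impact_level(score: int, max_score: int) -> int:
    """Map a raw AIA-style questionnaire score to one of four impact levels.

    The questionnaire sums weighted answers into a raw score; obligations
    then attach to an impact level from 1 (little impact) to 4 (very high
    impact). The percentage bands used here are illustrative assumptions.
    """
    pct = 100 * score / max_score
    if pct <= 25:
        return 1
    elif pct <= 50:
        return 2
    elif pct <= 75:
        return 3
    return 4
```

On this kind of scheme, a single answer that shaves a few points off the raw score can drop a system into a lower level, and with it a lower tier of obligations – which is why the accuracy of individual answers matters so much (a point taken up below).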

Given that this is a rapidly evolving area, the DADM provides that it will be reviewed every six months. It is now in its third review. The first two reviews led to the clarification of certain obligations in the DADM and to the development of guidelines to aid in its interpretation. This third review proposes a number of more substantive changes. This note comments on some of these changes and proposes an issue for future consideration.

Clarify and Broaden the Scope

A key recommendation in this third round of review relates to the scope of the DADM. Currently, the DADM applies only to ‘external’ services of government – in other words, services offered by government to individuals or organizations. It does not apply internally. This is a significant gap when one considers the expanding use of ADS in the employment context. AI-enabled decision systems have been used in hiring processes, and they can be used to conduct performance reviews and to make or assist in decision-making about promotions and internal workforce mobility. The use of AI tools in the employment context can have significant impacts on the lives and careers of employees. It seems a glaring oversight not to include such systems in the governance regime for ADM. The review team has recommended expanding the scope of the DADM to include internal as well as external services. They note that this move would also extend the DADM to any ADS used for “grants and contributions, awards and recognition, and security screening” (Report at 11). This is an important recommendation and one which should be implemented.

The review team also recommends a clarification of the language regarding the application of the DADM. Currently it puts within its scope “any system, tool, or statistical models used to recommend or make an administrative decision about a client”. Noting that “recommend” could be construed as including only those systems that recommend a specific outcome, as opposed to systems that process information on behalf of a decision-maker, the team proposes replacing “recommend” with “support”. This too is an important recommendation which should be implemented.

Periodic Reviews

Currently the DADM provides for its review every six months. This was always an ambitious review schedule. No doubt it was motivated by the fact that the DADM was a novel tool designed to address a rapidly emerging and evolving technology with potentially significant implications. The idea was to ensure that it was working properly and to promptly address any issues or problems. In this third review, however, the team recommends changing the review period from six months to two years. The rationale is that the six-month timetable makes it challenging for the team overseeing the DADM (which is constantly in a review cycle), and makes it difficult to properly engage stakeholders. They also cite the need for the DADM to “display a degree of stability and reliability, enabling federal institutions and the clients they serve to plan and act with a reasonable degree of confidence.” (Report at 12).

This too is a reasonable recommendation. While more frequent reviews were important in the early days of the DADM and the AIA, reviews every six months seem unduly burdensome once initial hiccups are resolved. A six-month review cycle engages the team responsible for the DADM in a constant cycle of review, which may not be the best use of resources. The proposed two-year review cycle would allow more experience to be garnered with the DADM and the AIA, enabling a more substantive assessment of the issues arising. Further, a two-year window is much more realistic if stakeholders are to be engaged in a meaningful way. Being asked to comment on reports and proposed changes every six months seems burdensome for anyone – including an already stretched civil society sector. The review document suggests that Canada’s Chief Information Officer could request completion of an off-cycle review if the need arose, leaving room for the possibility that a more urgent issue could be addressed outside of the two-year review cycle.

Data Model and Governance

The third review also proposes amendments to provide for what it describes as a more ‘holistic’ approach to data governance. Currently, the DADM focuses on data inputs – in other words, on assessing the quality, relevance and timeliness of the data used in the model. The review report recommends the addition of an obligation to establish “measures to ensure that data used and generated by the Automated Decision System are traceable, protected, and appropriately retained and disposed of in accordance with the Directive on Service and Digital, Directive on Privacy Practices, and Directive on Security Management”. The report also recommends amendments to extend testing and assessment beyond data to the underlying models, in order to assess both data and algorithms for bias or other problems. These are positive amendments which should be implemented.

Explanation

The review report notes that while the DADM requires “meaningful explanations” of how automated decisions were reached, and while guidelines provide some detail as to what is meant by explainability, there is still uncertainty about what explainability entails. The Report recommends adding language in Appendix C, in relation to impact assessment, that will set out the information necessary for ‘explainability’. This includes:

  • The role of the system in the decision-making process;
  • The training and client data, their source and method of collection, if applicable;
  • The criteria used to evaluate client data and the operations applied to process it; and
  • The output produced by the system and any relevant information needed to interpret it in the context of the administrative decision.

Again, this recommendation should be implemented.

Reasons for Automation

The review would also require those developing ADM systems for government to specifically identify why it was considered necessary or appropriate to automate the existing decision-making process. The Report refers to a “clear and demonstrable need”. This is an important additional criterion as it requires transparency as to the reasons for automation – and that these reasons go beyond the fact that vendor-demonstrated technologies look really cool. As the authors of the review note, requiring justification also helps to assess the parameters of the system adopted – particularly if the necessity and proportionality approach favoured by the Office of the Privacy Commissioner of Canada is adopted.

Transparency

The report addresses several issues that are relevant to the transparency dimensions of the DADM and the accompanying AIA. Transparency is an important element of the DADM, and it is key both to the legitimacy of government adoption of ADS and to their ongoing use. Without transparency in government decision-making that impacts individuals, organizations and communities, there can be no legitimacy. A number of transparency elements are built into the DADM. For example, there are requirements to provide notice of automated decision systems, a right to an explanation of decisions that is tailored to the impact of the decision, and a requirement not just to conduct an AIA, but to publish the results. The review report includes a number of recommendations to improve transparency. These include a recommendation to clarify when an AIA must be completed and released, greater transparency around peer review results, more explicit criteria for explainability, and additional questions in the AIA. These are all welcome recommendations.

At least one of these recommendations may go some way to allaying my concerns with the system as it currently stands. The documents accompanying the report (slide 3 of the summary document) indicate that there are over 300 AI projects across 80% of federal institutions. However, at the time of writing, only four AIAs had been published on the open government portal. There is clearly a substantial lag between the development of these systems and the release of the AIAs. The recommendation that an AIA be not just completed but also released prior to the production of the system is therefore of great importance to ensuring transparency.

It may be that some of the discrepancy in the numbers is attributable to the fact that the DADM came into effect in 2020 and did not apply retroactively to projects already underway. For transparency’s sake, I would also recommend that a public register of ADS be created that contains basic information about all government ADS. This could include their existence and function, as well as some transparency regarding explainability, the reasons for adoption, and the measures taken to review, assess and ensure the reliability of these systems. Although it is too late, in the case of these systems, to perform a proactive AIA, there should be some form of reporting tool that can be used to provide important information to the public for transparency purposes.

Consideration for the Future

The next review of the DADM and the AIA should also involve a qualitative assessment of the AIAs that have been published to date. If the AIA is to be a primary tool not just for assessing ADS but for providing transparency about them, then they need to be good. Currently there is a requirement to conduct an AIA for a system within the scope of the DADM – but there is no explicit requirement for it to be of a certain quality. A quick review of the four AIAs currently available online shows some discrepancy between them in terms of the quality of the assessment. For example, the project description for one such system is an unhelpful 9-word sentence that does not make clear how AI is actually part of the project. This is in contrast to another that describes the project in a 14-line paragraph. These are clearly highly divergent in terms of the level of clarity and detail provided.

The first of these two AIAs also seems to contain contradictory answers to the AIA questionnaire. For example, the answer to the question “Will the system only be used to assist a decision-maker” is ‘yes’. Yet the answer to the question “Will the system be replacing a decision that would otherwise be made by a human” is also ‘yes’. Either one of these answers is incorrect, or the answers do not capture how the respondent interpreted these questions. These are just a few examples. It is easy to see how use of the AIA tool can range from engaged to pro forma.
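Contradictions of this kind are mechanically detectable before an AIA is ever published. The sketch below is hypothetical – the question keys and the single exclusion rule are mine for illustration, not part of the actual AIA tool:

```python
def find_contradictions(answers: dict) -> list:
    """Flag pairs of AIA-style answers that cannot both be 'yes'.

    `answers` maps question keys to 'yes'/'no' strings. The rule list is a
    hypothetical example built from the two questions discussed above: a
    system cannot both merely assist a decision-maker and fully replace a
    human decision.
    """
    mutually_exclusive = [
        ("assists_decision_maker_only", "replaces_human_decision"),
    ]
    flags = []
    for q1, q2 in mutually_exclusive:
        if answers.get(q1) == "yes" and answers.get(q2) == "yes":
            flags.append((q1, q2))
    return flags
```

Running such a check as a validation step when an AIA is submitted would catch the inconsistency described above, forcing the respondent to reconcile the two answers before the assessment is accepted.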

The obligations imposed on departments with respect to ADS vary depending upon the risk assessment score. This score is evaluated through the questionnaire, and one of the questions asks “Are clients in this line of business particularly vulnerable?” In the AIA for an access to information (ATIP) tool, the answer given to this question is “no”. Of course, the description of the tool is so brief that it is hard to get a sense of how it functions. However, I would think that the clientele for an ATIP portal would be quite diverse. Some users will be relatively sophisticated (e.g., journalists or corporate users). Others will be inexperienced. For some of these, information sought may be highly important to them as they may be seeking access to government information to right a perceived wrong, to find out more about a situation that adversely impacts them, and so on. In my view, this assessment of the vulnerability of the clients is not necessarily accurate. Yet the answer provided contributes to a lower overall score and thus a lower level of accountability. My recommendation for the next round of reviews is to assess the overall effectiveness of the AIA tool in terms of the information and answers provided and in terms of their overall accuracy.

I note that the review report recommends adding questions to the AIA in order to improve the tool. Quite a number of these are free text answers, which require responses to be drafted by the party completing the AIA. Proposed questions include ones relating to the user needs to be addressed, how the system will meet those needs, and the effectiveness of the system in meeting those needs, along with reasons for this assessment. Proposed questions will also ask whether non-AI-enabled solutions were also considered, and if so, why AI was chosen as the preferred method. A further question asks what the consequences would be of not deploying the system. This additional information is important both to assessing the tool and to providing transparency. However, as noted above, the answers will need to be clear and sufficiently detailed in order to be of any use.

The AIA is crucial to assessing the level of obligation and to ensuring transparency. If AIAs are pro forma or excessively laconic, then the DADM can be as finely tuned as possible, but it will still not achieve the desired results. The review committee’s recommendation that plain language summaries of peer review assessments also be published will provide a means of assessing the quality of the AIAs, and thus it is an important recommendation to strengthen both transparency and compliance.

A final issue that I would like to address is that, to achieve transparency, people will need to be able to easily find and access the information about the systems. Currently, AIAs are published on the Open Government website. There, they are listed alphabetically by title. This is not a huge problem right now, since there are only four of them. As more are published, it would be helpful to have a means of organizing them by department or agency, or by other criteria (including risk/impact score) to improve their findability and usability. Further, it will be important that any peer review summaries are linked to the appropriate AIAs. In addition to publication on the open government portal, links to these documents should be made available from department, agency or program websites. It would also be important to have an index or registry of AI in the federal sector – including not just those projects covered by the DADM, but also those in production prior to the DADM’s coming into force.

[Note: I have written about the DADM and the AIA from an administrative law perspective. My paper, which looks at the extent to which the DADM addresses administrative law concerns regarding procedural fairness, can be found here.]

Published in Privacy

 

Ontario has just released its Beta principles for the ethical use of AI and data enhanced technologies in Ontario. These replace the earlier Alpha principles, and are revised based upon commentary and feedback on the Alpha version. Note that these principles are designed for use in relation to AI technologies adopted for the Ontario public sector.

Below you will find a comparison table I created to provide a quick glance at what has been changed since the previous version. I have flagged significant additions with italics in the column for the Beta version. I have also flagged some words or concepts that have disappeared in the Beta version by using strikethrough in the column with the Alpha version. I have focused on the principles, and have not flagged changes to the “Why it Matters” section of each principle.

One important change to note is that the Beta version now refers not just to technologies used to make decisions, but also technologies used to assist in decision-making.

 

 

Principles for Ethical Use [Alpha]

Principles for Ethical Use [Beta]

The alpha Principles for Ethical Use set out six points to align the use of data-driven technologies within government processes, programs and services with ethical considerations and values. Our team has undertaken extensive jurisdictional scans of ethical principles across the world, in particular the US, the European Union and major research consortiums. The Ontario “alpha” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

These Principles for Ethical Use set out six points to align the use of data enhanced technologies within government processes, programs and services with ethical considerations and values.

 

The Trustworthy AI team within Ontario’s Digital Service has undertaken extensive jurisdictional scans of ethical principles across the world, in particular New Zealand, the United States, the European Union and major research consortiums.

 

The Ontario “beta” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

We’re in the early days of bringing these principles to life. We encourage you to adopt as much of the principles as possible, and to share your feedback with us by e-mail.

 

You can also check out the Transparency Guidelines (GitHub).

1. Transparent and Explainable [Alpha]

 

There must be transparent and responsible disclosure around data-driven technology like Artificial Intelligence (AI), automated decisions and machine learning (ML) systems to ensure that people understand outcomes and can discuss, challenge and improve them.

 

 

Where automated decision making has been used to make individualized and automated decisions about humans, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject should be available.

 

Why it Matters

 

There is no way to hold data-driven technologies accountable, particularly as they impact various historically disadvantaged groups if the public is unaware of the algorithms and automated decisions the government is making. Transparency of use must be accompanied with plain language explanations for the public to have access to and not just the technical or research community. For more on this, please consult the Transparency Guidelines.

 

1. Transparent and explainable [Beta]

 

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

 

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

 

Why it matters

 

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

 

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

 

For more on this, please consult the Transparency Guidelines.

 

2. Good and Fair [Alpha]

 

Data-driven technologies should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards to ensure a fair and just society.

 

Designers, policy makers and developers should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the lifecycle of use. The definitions of good and fair are intentionally vague to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

2. Good and fair [Beta]

 

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

3. Safe [Alpha]

 

Data-driven technologies like AI and ML systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers and developers should implement mechanisms and safeguards, such as capacity for human determination and complete halt of the system operations, that are appropriate to the context and predetermined at initial deployment.

 


Why it matters

Creating safe data-driven technologies means embedding safeguards throughout the life cycle of the deployment of the algorithmic system. Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. Despite our best efforts there will be unexpected outcomes and impacts. Systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are no longer agreeable that a human can adapt, correct or improve the system.

3. Safe [Beta]

 

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle.

 

Why it matters

Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.

 

Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

 

4. Accountable and Responsible [Alpha]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the above principles. Algorithmic systems should be periodically peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Where AI is used to make decisions about individuals there needs to be a process for redress to better understand how a given decision was made.

 

Why it matters

 

In order for there to be accountability for decisions that are made by an AI or ML system a person, group of people or organization needs to be identified prior to deployment. This ensures that if redress is needed there is a preidentified entity that is responsible and can be held accountable for the outcomes of the algorithmic systems.

 

4. Accountable and responsible [Beta]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted.

 

Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Why it matters

 

Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.

 

While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.

 

Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system.

 

5. Human Centric [Alpha]

 

The processes and outcomes behind an algorithm should always be developed with human users as the main consideration. Human centered AI should reflect the information, goals, and constraints that a human decision-maker weighs when arriving at a decision.

 

Keeping human users at the center entails evaluating any outcomes (both direct and indirect) that might affect them due to the use of the algorithm. Contingencies for unintended outcomes need to be in place as well, including removing the algorithms entirely or ending their application.

 

Why it matters

 

Placing the focus on human user ensures that the outcomes do not cause adverse effects to users in the process of creating additional efficiencies.

 

In addition, Human-centered design is needed to ensure that you are able to keep a human in the loop when ensuring the safe operation of an algorithmic system. Developing algorithmic systems with the user in mind ensures better societal and economic outcomes from the data-driven technologies.

 

5. Human centric [Beta]

 

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

 

Why it matters

 

Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.

 

Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.

 

Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

 

6. Sensible and Appropriate [Alpha]

 

Data-driven technologies like AI or ML shall be developed with consideration of how it may apply to specific sectors or to individual cases and should align with the Canadian Charter of Human Rights and Freedoms and with Federal and Provincial AI Ethical Use.

 

Other biproducts of deploying data-driven technologies such as environmental, sustainability, societal impacts should be considered as they apply to specific sectors and use cases and applicable frameworks, best practices or laws.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector and user. As a result, while the above principles are a good starting point for developing ethical data-driven technologies it is important that additional considerations be given to the specific sectors and environments to which the algorithm is applied.

 

Experts in both technology and ethics should be consulted in development of data-driven technologies such as AI to guard against any adverse effects (including societal, environmental and other long-term effects).

6. Sensible and appropriate [Beta]

 

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

 

Encouraging sector specific guidance also helps promote a culture of shared ethical responsibilities and a dialogue around the important issues raised by data enhanced systems.

 

Published in Privacy
Thursday, 07 February 2019 08:09

Ontario Launches Data Strategy Consultation

On February 5, 2019 the Ontario Government launched a Data Strategy Consultation. This comes after a year of public debate and discussion about data governance issues raised by the proposed Quayside smart cities development in Toronto. It also comes at a time when the data-thirsty artificial intelligence industry in Canada is booming – and hoping very much to be able to continue to compete at the international level. Add to the mix the view that greater data sharing between government departments and agencies could make government ‘smarter’, more efficient, and more user-friendly. The context might be summed up in these terms: the public is increasingly concerned about the massive and widespread collection of data by governments and the private sector; at the same time, both governments and the private sector want easier access to more and better data.

Consultation is a good thing – particularly with as much at stake as there is here. This consultation began with a press release that links to a short text about the data strategy, and then a link to a survey which allows the public to provide feedback in the form of answers to specific questions. The survey is open until March 7, 2019. It seems that the government will then create a “Minister’s Task Force on Data” and that this body will be charged with developing a draft data strategy that will be opened for further consultation. The overall timeline seems remarkably short, with the process targeted to wrap up by Fall 2019.

The press release telegraphs the government’s views on what the outcome of this process must address. It notes that 55% of Canada’s big data vendors are located in Ontario, and that the government plans “to make life easier for Ontarians by delivering simpler, faster and better digital services.” The goal is clearly to develop a data strategy that harnesses the power of data for use in both the private and public sectors.

If the Quayside project has taught anyone anything, it is that people do care about their data in the hands of both public and private sector actors. The press release acknowledges this by referencing the need for “ensuring that data privacy and protection is paramount, and that data will be kept safe and secure.” Yet perhaps the Ontario government has not been listening to all of the discussions around Quayside. While the press release and the introduction to the survey talk about privacy and security, neither document addresses the broader concerns that have been raised in the context of Quayside, nor those that are raised in relation to artificial intelligence more generally. There are concerns about bias and discrimination, transparency in algorithmic decision-making, profiling, targeting, and behavioural modification. Seamless sharing of data within government also raises concerns about mass surveillance. There is also a need to consider innovative solutions to data governance and the role the government might play in fostering or supporting these.

There is no doubt that the issues underlying this consultation are important ones. It is clear that the government intends to take steps to facilitate intra-governmental sharing of data as well as greater sharing of data between government and the private sector. It is also clear that much of that data will ultimately be about Ontarians. How this will happen, and what rights and values must be protected, are fundamental questions.

As is the case at the provincial and federal level across the country, the laws which govern data in Ontario were written for a different era. Not only are access to information and protection of privacy laws out of date, data-driven practices increasingly impact areas such as consumer protection, competition, credit reporting, and human rights. An effective data strategy might need to reach out across these different areas of law and policy.

Privacy and security – the issues singled out in the government’s documents – are important, but privacy must mean more than the narrow goal of protecting identifiable individuals from identity theft. We need robust safeguards against undue surveillance, and assurances that our data will not be used to profile or target us or our communities in ways that create or reinforce exclusion or disadvantage. We need to know how privacy and autonomy will be weighed in the balance against the stimulation of the economy and the encouragement of innovation. We also need to consider whether there are uses to which our data should simply not be put. Should some data be required to be stored in Canada, and if so, in what circumstances? These and a host of other questions need to be part of the data strategy consultation.

Perhaps a broader question is why we are talking only about a data strategy and not a digital strategy. The government’s approach seems to focus on the narrow question of data as both an input and an output – but not on the host of other questions around the digital technologies fueled by data. Such questions might include how governments should go about procuring digital technologies, the place of open source in government, and the role and implications of technology standards – to name just a few.

With all of these important issues at stake, it is hard not to be disappointed by the form and substance of at least this initial phase of the government's consultation. It is difficult to say what value will be derived from the survey which is the vehicle for initial input. Some of the questions are frankly vapid. Consider question 2:

2. I’m interested in exploring the role of data in:

creating economic benefits

increasing public trust and confidence

better, smarter government

other

There is no box in which to write in what the “other” might be. And questions 9 to 11 provide sterling examples of leading questions:

9. Currently, the provincial government is unable to share information among ministries requiring individuals and businesses to submit the same information each time they interact with different parts of government. Do you agree that the government should be able to securely share data among ministries?

Yes

No

I’m not sure

10. Do you believe that allowing government to securely share data among ministries will streamline and improve interactions between citizens and government?

Yes

No

I’m not sure

11. If government made more of its own data available to businesses, this data could help those firms launch new services, products, and jobs for the people of Ontario. For example, government transport data could be used by startups and larger companies to help people find quicker routes home from work. Would you be in favour of the government responsibly sharing more of its own data with businesses, to help them create new jobs, products and services for Ontarians?

Yes

No

I’m not sure

In fairness, there are a few places in the survey where respondents can enter their own answers, including questions about what issues should be put to the task force and what skills and experience members should have. Those interested in data strategy should be sure to provide their input – both now and in the later phases to come.

Published in Privacy

A law suit filed in Montreal this summer raises novel copyright arguments regarding AI-generated works. The plaintiffs are artist Amel Chamandy and Galerie NuEdge Fine Arts (which sells and exhibits her art). They are suing artist Adam Basanta for copyright and trademark infringement. (The trademark infringement arguments are not discussed in this post). Mr Basanta is a world renowned new media artist who experiments with AI in his work. (See the Globe and Mail story by Chris Hannay on this law suit here).

According to a letter dated July 4, filed with the court, Mr. Basanta’s current project is “to explore connections between mass technologies, using those technologies themselves.” He explains his process in a video which can be found here. Essentially, he has created what he describes as an “art-factory” that randomly generates images without human input. The images created are then “analyzed by a series of deep-learning algorithms trained on a database of contemporary artworks in economic and institutional circulation” (see artist’s website). The images used in the database of artworks are found online. Where the analysis finds a match of more than 83% between one of the randomly generated images and an image in the database, the randomly generated image is presented online with the percentage match, the title of the painting it matches, and the artist’s name. This information is also tweeted out. The image of the painting that matches the AI image is not reproduced or displayed on the website or on Twitter.
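The matching step described above can be sketched in simplified form. This is a hypothetical illustration only – the artist’s actual pipeline uses deep-learning models trained on a database of contemporary artworks, not the toy cosine similarity used here, and all names and the vector representation are assumptions:

```python
# Toy sketch of similarity-threshold matching: compare one generated image
# (represented as a feature vector) against a database of artwork vectors,
# and report any match above the 83% threshold described in the project.
# Hypothetical simplification; the real system uses deep-learning analysis.
import numpy as np

THRESHOLD = 0.83  # only matches above 83% are reported

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def report_matches(generated: np.ndarray, database: dict) -> list:
    """Return (title, score) pairs for database entries whose similarity
    to the generated image exceeds the reporting threshold."""
    return [(title, score)
            for title, vec in database.items()
            if (score := cosine_similarity(generated, vec)) > THRESHOLD]
```

On this simplified model, only the title and percentage of a close match would be reported – consistent with the fact that the matched painting itself is never reproduced or displayed.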

One of Mr Basanta’s images was an 85.81% match with a painting by Ms Chamandy titled “Your World Without Paper”. This information was reported on Mr Basanta’s website and Twitter accounts along with the machine-generated image which resulted in the match.

The copyright infringement allegation is essentially that “the process used by the Defendant to compare his computer generated images to Amel Chamandy’s work necessarily required an unauthorized copy of such a work to be made.” (Statement of Claim, para 30). Ms Chamandy claims statutory damages of up to $20,000 for the commercial use of her work. Mr Basanta, for his part, argues that there is no display of Ms Chamandy’s work, and therefore no infringement.

AI has been generating much attention in the copyright world. AI algorithms need to be ‘trained’ and this training requires that they be fed a constant supply of text, data or images, depending upon the algorithm. Rights holders argue that the use of their works in this way without consent is infringement. The argument is that the process requires unauthorized copies to be fed into the system for algorithmic analysis. Debates have raged in the EU over a text-and-data mining exception to copyright infringement which would make this type of use of copyright protected works acceptable so long as it is for research purposes. Other uses would require clearance for a fee. There has already been considerable debate in Europe over whether research is a broad enough basis for the exception and what activities it would include. If a similar exception is to be adopted in Canada in the next round of copyright reform, we will face similar challenges in defining its boundaries.

Of course, the Chamandy case is not the conventional text and data mining situation. The copied image is not used to train algorithms. Rather, it is used in an analysis to assess similarities with another image. But such uses are not unknown in the AI world. Facial recognition technologies match live captured images with stored face prints. In this case, the third party artwork images are like the stored face prints. It is AI, just not the usual text and data mining paradigm. This should also raise questions about how to draft exceptions or to interpret existing exceptions to address AI-related creativity and innovation.

In the US, some argue that the ‘fair use’ exception to infringement is broad enough to support text and data mining uses of copyright protected works since the resulting AI output is transformative. Canada’s fair dealing provisions are less generous than U.S. fair use, but it is still possible to argue that text and data mining uses might be ‘fair’. Canadian law recognizes fair dealing for the purposes of research or private study, so if an activity qualifies as ‘research’ it might be fair dealing. The fairness of any dealing requires a contextual analysis. In this case the dealing might be considered fair since the end result only reports on similarities but does not reproduce any of the protected images for public view.

The problem, of course, with fair dealing defences is that each case turns on its own facts. The fact-dependent inquiry necessary for a fair dealing defence could be a major brake on innovation and creativity – either by dissuading uses out of fear of costly infringement claims or by driving up the cost of innovation by requiring rights clearance in order to avoid being sued.

The claim of statutory damages here is also interesting. Statutory damages were introduced in s. 38.1 of the Copyright Act to give plaintiffs an alternative to proving actual damage. For commercial infringements, statutory damages can range from $500 to $20,000 per work infringed; for non-commercial infringement the range is $100 to $5,000 for all infringements and all works involved. A judge’s actual award of damages within these ranges is guided by factors that include the need for deterrence, and the conduct of the parties. Ms Chamandy asserts that Mr Basanta’s infringement is commercial, even though the commercial dimension is difficult to see. It would be interesting to consider whether the enhancement of his reputation or profile as an artist or any increase in his ability to obtain grants would be considered “commercial”. Beyond the challenge of identifying what is commercial activity in this context, it opens a window into the potential impact of statutory damages in text and data mining activities. If such activities are considered to infringe copyright and are not clearly within an exception, then in Canada, a commercial text and data miner who consumes – say 500,000 different images to train an algorithm – might find themselves, even on the low end of the spectrum, liable for $250 million in statutory damages. Admittedly, the Act contains a clause that gives a judge the discretion to reduce an award of statutory damages if it is “grossly out of proportion to the infringement”. However, not knowing what a court might do or by how much the damages might be reduced creates uncertainty that can place a chill on innovation.
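The scale of this exposure is simple per-work multiplication. A minimal sketch, assuming (as the argument above does, without any court having so held) that each image consumed counts as a separately infringed work:

```python
# Statutory damages exposure under s. 38.1 of Canada's Copyright Act,
# commercial infringement: $500 to $20,000 per work infringed.
# Assumption for illustration: every image used counts as one infringed work.
COMMERCIAL_MIN, COMMERCIAL_MAX = 500, 20_000  # dollars per work

def exposure(works: int, per_work: int) -> int:
    """Total statutory damages if `per_work` dollars are awarded per work."""
    return works * per_work

# A commercial text-and-data miner training on 500,000 images:
low = exposure(500_000, COMMERCIAL_MIN)
print(f"${low:,}")  # prints $250,000,000 – even at the statutory minimum
```

Even the statutory floor yields a quarter-billion-dollar figure, which is why the discretion to reduce “grossly out of proportion” awards does little to remove the chill.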

Although in this case, there may well be a good fair dealing defence, the realities of AI would seem to require either a clear set of exceptions to clarify infringement issues, or some other scheme to compensate creators which expressly excludes resort to statutory damages. The vast number of works that might be consumed to train an algorithm for commercial purposes makes statutory damages, even at the low end of the scale, potentially devastating and creates a chill.

 

Published in Copyright Law
