Displaying items by tag: artificial intelligence
Friday, 04 June 2021 13:00
Submission to Consultation on Developing Ontario's Artificial Intelligence (AI) Framework
The following is my submission to the Ontario government's Consultation on Developing Ontario's Artificial Intelligence (AI) Framework. The Consultation closed on June 4, 2021.

Thank you for the opportunity to provide input on the development of trustworthy AI in Ontario. Due to time pressures, my comments will be relatively brief. Hopefully there will be other opportunities to engage with this process.

Developing a framework for the governance of AI in Ontario is important, and it is good to see that this work is underway. I note that the current consultation focuses on AI for use in the public sector. Similar work needs to be done for the governance of AI that will be developed and deployed in the private sector context. I hope that this work is also being contemplated.

As I am sure you know, the federal government has already developed a Directive on Automated Decision-Making (DADM), which applies to a broad range of uses of AI in the federal public sector context and comes with an algorithmic impact assessment tool. Although I appreciate the sensitivities around sovereignty within a province's own spheres of competence, there is much to be said for more unified national approaches to many regulatory issues, particularly in the digital context. One option for Ontario is to use the DADM as a starting point for its approach to public sector AI governance, and to assess and adapt it for use in Ontario. This would allow Ontario to take advantage of an approach that is already well developed, and into which a considerable amount of thoughtful work has been invested. It is both unnecessary and counterproductive to reinvent the wheel. Serious consideration should be given, as a matter of public policy, to adopting, where possible, harmonized approaches to the governance of digital technologies.

At the same time, I note that the consultation document suggests that Ontario might go beyond a simple internal directive and actually provide an accountability framework that would give individuals direct recourse in cases where government does not meet whatever requirements are established. A public accountability framework is lacking in the federal DADM and would be most welcome in Ontario.

The proposed public sector framework for Ontario is organized around three broad principles: no AI in secret; AI use Ontarians can trust; and AI that serves all Ontarians. These are good, if broad, principles. The real impact of this governance initiative will, of course, lie in its detail. However, it is encouraging to see a commitment to transparency, openness and public participation. It is also important that the government recognize the potential for AI to replicate or exacerbate existing inequities and commit to addressing equity and inclusion. My comments will address each of the principles in turn.

1. No AI in Secret

The consultation document states that “for people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists. As a result, the government needs to be transparent about how, when, and why these tools are used so that people have a right to address potential biases created by the AI algorithms.” I agree. A public register of AI tools in use by government, along with access to details about these tools, would be most welcome. I do question, however, what is meant by “government” in this statement. In other words, I would be very interested to know more about the scope of what is being proposed.
It was only a short while ago that we learned, for example, that police services in Ontario had made use of Clearview AI’s controversial facial recognition database. In some cases, it seems that senior ranks of the police may not even have been aware of this use. Ontario’s Privacy Commissioner at the time expressed concerns over this practice. This case raises important questions regarding the scope of the proposed commitment to transparency and AI. The first is whether police services will be included under government AI governance commitments, and if they are not, why not, and what measures will be put in place to govern AI used in the law enforcement context. It is also important to know what other agencies or departments will be excluded.

A further question is whether AI-related commitments at the provincial level will be extended to municipalities, or whether they are intended only for the provincial public sector. Another question is whether the principles will apply only to AI developed within government or commissioned by government. In other words, will any law or guidance developed also apply to the myriad services that might otherwise be available to government? For example, will new rules apply to the decision by a department to use the services of a human resources firm that makes use of AI in its recruitment processes? Will they apply to workplace monitoring software and productivity analytics services that might be introduced in the public service? On this latter point, I note that it is unclear whether the commitment to AI governance relates only to AI that affects the general population, as opposed to AI used to manage government employees. These issues of the application and scope of any proposed governance framework are important.

2. Use Ontarians Can Trust

The second guiding principle is “Use Ontarians can Trust”. The commitment is framed in these terms: “People building, procuring, and using AI have a responsibility to the people of Ontario that AI never puts people at risk and that proper guardrails are in place before the technology is used by the government.” One of the challenges here is that there are so many types of AI and so many contexts in which AI can be used. Risk is inevitable, and some of the risks may involve complex harms. In some cases, these harms may be difficult to foresee.

The traffic-predicting algorithm used as an illustration in this part of the consultation document has fairly clear-cut risk considerations. The main issue will be whether such an algorithm reduces the risk of serious accidents, for example. The risks from an algorithm that determines who is or is not eligible to receive social assistance benefits, on the other hand, will be much more complex. One significant risk will be that people who need the benefit will not receive it. Other risks might include the exacerbation of existing inequalities, or even greater alienation in the face of a seemingly impersonal system. These risks are serious, but some are intangible; they might be ignored, dismissed or underestimated. Virginia Eubanks and others have observed that experimentation with the use of AI in government tends to take place in the context of programs and services for the least empowered members of society. This is troubling. The concept of risk must be robust and multifaceted. Decisions about where to deploy AI must be equitable and unbiased, not just the AI itself.
One of the initial recommendations in this section is to propose “ways to update Ontario’s rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine when which rules apply.” I agree that work needs to be done to update Ontario’s legal frameworks in order to better address the challenges of AI. Data protection and human rights are two obvious areas where legislative reform may be necessary. It will also be important for those reforms to be accompanied by the resources necessary to handle the complex cases likely to be generated by AI. If legal protections and processes are enhanced without additional resources, the changes will be meaningless.

It may also be necessary to consider establishing a regulatory authority for AI that could provide the governance, oversight and accountability specifically required by AI systems, and that could develop the necessary expertise. Challenging algorithmic decision-making will not be easy for ordinary Ontarians. They will need expert assistance and guidance for any challenge that goes beyond asking for an explanation or a reconsideration of the decision. A properly resourced oversight body can provide this assistance and can develop the expertise needed to assist those who develop and implement AI.

3. AI that Serves all Ontarians

The overall goal for this commitment is to ensure that “Government use of AI reflects and protects the rights and values of Ontarians.” The values identified are equity and inclusion, as well as accountability. As noted above, there is a tendency to deploy AI systems in ways that impact the most disadvantaged. AI systems are in use in the carceral context, in the administration of social benefits programs, and so on. The very choices as to where to start experimenting with AI are ones that have significant impact. In these contexts, the risks of harm may be quite significant, but the populations impacted may feel most disempowered when it comes to challenging decisions or seeking recourse.

This part of the consultation document suggests as a potential action the need to “Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at an extremely high risk.” While there likely are contexts in which a risk-based approach would warrant an early ban on AI until the risks can be properly addressed, there should also be deliberation, beyond bans, about how to use AI in contexts in which individuals are vulnerable. This might mean not rushing to experiment with AI in these areas until we have built a more robust accountability and oversight framework. It may also mean going slowly in certain areas: using only AI-assisted decision-making, for example, and carefully studying and evaluating particular use cases.
In closing I would like to note as well the very thoughtful and thorough work being done by the Law Commission of Ontario on AI and Governance, which has a particular focus on the public sector. I hope that any policy development being done in this area will make good use of the Law Commission’s work.
Published in
Privacy
Thursday, 04 October 2018 05:43
Artist sued in Canada for copyright infringement for AI-related art project
A lawsuit filed in Montreal this summer raises novel copyright arguments regarding AI-generated works. The plaintiffs are artist Amel Chamandy and Galerie NuEdge Fine Arts (which sells and exhibits her art). They are suing artist Adam Basanta for copyright and trademark infringement. (The trademark infringement arguments are not discussed in this post.) Mr. Basanta is a world-renowned new media artist who experiments with AI in his work. (See the Globe and Mail story by Chris Hannay on this lawsuit here.)

According to a letter dated July 4, filed with the court, Mr. Basanta’s current project is “to explore connections between mass technologies, using those technologies themselves.” He explains his process in a video, which can be found here. Essentially, he has created what he describes as an “art-factory” that randomly generates images without human input. The images created are then “analyzed by a series of deep-learning algorithms trained on a database of contemporary artworks in economic and institutional circulation” (see the artist’s website). The images used in the database of artworks are found online. Where the analysis finds a match of more than 83% between one of the randomly generated images and an image in the database, the randomly generated image is presented online with the percentage match, the title of the painting it matches, and the artist’s name. This information is also tweeted out. The image of the painting that matches the AI image is not reproduced or displayed on the website or on Twitter. (A simplified sketch of this kind of matching process appears below.)

One of Mr. Basanta’s images was an 85.81% match with a painting by Ms. Chamandy titled “Your World Without Paper”. This information was reported on Mr. Basanta’s website and Twitter accounts, along with the machine-generated image that resulted in the match.

The copyright infringement allegation is essentially that “the process used by the Defendant to compare his computer generated images to Amel Chamandy’s work necessarily required an unauthorized copy of such a work to be made” (Statement of Claim, para 30). Ms. Chamandy claims statutory damages of up to $20,000 for the commercial use of her work. Mr. Basanta, for his part, argues that there is no display of Ms. Chamandy’s work, and therefore no infringement.

AI has been generating much attention in the copyright world. AI algorithms need to be ‘trained’, and this training requires that they be fed a constant supply of text, data or images, depending upon the algorithm. Rights holders argue that the use of their works in this way without consent is infringement. The argument is that the process requires unauthorized copies to be fed into the system for algorithmic analysis. Debates have raged in the EU over a text-and-data mining exception to copyright infringement, which would make this type of use of copyright-protected works acceptable so long as it is for research purposes. Other uses would require clearance for a fee. There has already been considerable debate in Europe over whether research is a broad enough basis for the exception and what activities it would include. If a similar exception is to be adopted in Canada in the next round of copyright reform, we will face similar challenges in defining its boundaries.

Of course, the Chamandy case is not the conventional text and data mining situation. The copied image is not used to train algorithms. Rather, it is used in an analysis to assess similarities with another image. But such uses are not unknown in the AI world.
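For readers curious about the mechanics, the following is a minimal, hypothetical sketch of the kind of threshold-based matching loop described above. The actual models, features and similarity measure used in the project have not been made public, so the embedding step is stubbed out with toy data and every name here is illustrative.

```python
# A minimal, hypothetical sketch of a threshold-based image-matching loop
# like the one described above. The real project's models and similarity
# measure are not public; the "embeddings" here are toy vectors standing in
# for deep-learning features, and all names are illustrative.

import numpy as np

MATCH_THRESHOLD = 0.83  # the 83% similarity cut-off reported for the project


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_matches(generated_features, database):
    """Yield (title, score) for each database work above the threshold."""
    for title, features in database.items():
        score = cosine_similarity(generated_features, features)
        if score > MATCH_THRESHOLD:
            yield title, score


# Toy data: one "artwork" embedding, and a generated image whose features
# happen to be a noisy copy of it (so it will exceed the threshold).
rng = np.random.default_rng(0)
artwork = rng.random(512)
database = {"Your World Without Paper": artwork}
generated = artwork + rng.normal(0.0, 0.05, 512)

for title, score in find_matches(generated, database):
    print(f"{score:.2%} match with '{title}'")  # e.g. "99.87% match with ..."
```

Note that even in this toy version, the database of artwork features must be built by processing copies of the images at some point, and it is precisely those copies that the infringement claim targets.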
Facial recognition technologies match live captured images with stored face prints. In this case, the third-party artwork images are like the stored face prints. It is AI, just not the usual text and data mining paradigm. This should also raise questions about how to draft exceptions, or interpret existing ones, to address AI-related creativity and innovation.

In the US, some argue that the ‘fair use’ exception to infringement is broad enough to support text and data mining uses of copyright-protected works, since the resulting AI output is transformative. Canada’s fair dealing provisions are less generous than U.S. fair use, but it is still possible to argue that text and data mining uses might be ‘fair’. Canadian law recognizes fair dealing for the purposes of research or private study, so if an activity qualifies as ‘research’ it might be fair dealing. The fairness of any dealing requires a contextual analysis. In this case, the dealing might be considered fair since the end result only reports on similarities but does not reproduce any of the protected images for public view.

The problem, of course, with fair dealing defences is that each case turns on its own facts. The fact-dependent inquiry necessary for a fair dealing defence could be a major brake on innovation and creativity, either by dissuading uses out of fear of costly infringement claims or by driving up the cost of innovation by requiring rights clearance in order to avoid being sued.

The claim of statutory damages here is also interesting. Statutory damages were introduced in s. 38.1 of the Copyright Act to give plaintiffs an alternative to proving actual damage. For commercial infringements, statutory damages can range from $500 to $20,000 per work infringed; for non-commercial infringement, the range is $100 to $5,000 for all infringements and all works involved. A judge’s actual award of damages within these ranges is guided by factors that include the need for deterrence and the conduct of the parties. Ms. Chamandy asserts that Mr. Basanta’s infringement is commercial, even though the commercial dimension is difficult to see. It would be interesting to consider whether the enhancement of his reputation or profile as an artist, or any increase in his ability to obtain grants, would be considered “commercial”.

Beyond the challenge of identifying what is commercial activity in this context, the case opens a window into the potential impact of statutory damages on text and data mining activities. If such activities are considered to infringe copyright and are not clearly within an exception, then in Canada, a commercial text and data miner who consumes, say, 500,000 different images to train an algorithm might find themselves, even at the low end of the spectrum, liable for $250 million in statutory damages. Admittedly, the Act contains a clause that gives a judge the discretion to reduce an award of statutory damages if it is “grossly out of proportion to the infringement”. However, not knowing what a court might do, or by how much the damages might be reduced, creates uncertainty that can place a chill on innovation. Although in this case there may well be a good fair dealing defence, the realities of AI would seem to require either a clear set of exceptions to clarify infringement issues, or some other scheme to compensate creators that expressly excludes resort to statutory damages.
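To make the arithmetic concrete, here is a back-of-the-envelope calculation of the exposure in that hypothetical 500,000-image scenario, using the commercial per-work range from s. 38.1. This is illustrative arithmetic only, before any exercise of the judicial discretion to reduce a disproportionate award.

```python
# Back-of-the-envelope statutory damages exposure for a hypothetical
# commercial text-and-data miner, using the per-work range for commercial
# infringement in s. 38.1 of the Copyright Act. Illustrative only; a court
# has discretion to reduce an award that is grossly out of proportion.

PER_WORK_MIN = 500       # low end per work infringed (commercial)
PER_WORK_MAX = 20_000    # high end per work infringed (commercial)

works_used = 500_000     # hypothetical number of images used for training

print(f"Low end:  ${works_used * PER_WORK_MIN:,}")   # Low end:  $250,000,000
print(f"High end: ${works_used * PER_WORK_MAX:,}")   # High end: $10,000,000,000
```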
The vast number of works that might be consumed to train an algorithm for commercial purposes makes statutory damages, even at the low end of the scale, potentially devastating and creates a chill.
Published in
Copyright Law