Teresa Scassa - Blog

Displaying items by tag: Privacy

Digital and data governance is challenging at the best of times. It has been particularly challenging in the context of Sidewalk Labs’ proposed Quayside development for a number of reasons. One of these is (at least from my point of view) an ongoing lack of clarity about who will ‘own’ or have custody or control over all of the data collected in the so-called smart city. The answer to this question is a fundamentally important piece of the data governance puzzle.

In Canada, personal data protection is a bit of a legislative patchwork. In Ontario, the collection, use or disclosure of personal information by the private sector in the course of commercial activity is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA). However, the collection, use and disclosure of personal data by municipalities and their agencies is governed by the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA), while the collection, use and disclosure of personal data by the province is subject to the Freedom of Information and Protection of Privacy Act (FIPPA). The latter two statutes – MFIPPA and FIPPA – contain other data governance requirements for public sector data. These relate to transparency, and include rules around access to information. The City of Toronto also has information management policies and protocols, including its Open Data Policy.

The documentation prepared for the December 13, 2018 Digital Strategy Advisory Panel (DSAP) meeting includes a slide that sets out implementation requirements for the Quayside development plan in relation to data and digital governance. A key requirement is: “Compliance with or exceedance of all applicable laws, regulations, policy documents and contractual obligations” (page 95). This is fine in principle, but it is not enough on its own to say that the Quayside project must “comply with all applicable laws”. At some point, it is necessary to identify what those applicable laws are. This has yet to be done. And the answer to the question of which laws apply in the context of privacy, transparency and data governance, depends upon who ultimately is considered to ‘own’ or have ‘custody or control’ of the data.

So – whose data is it? It is troubling that this remains unclear even at this stage in the discussions. The fact that Sidewalk Labs has been asked to propose a data governance scheme suggests that Sidewalk and Waterfront may be operating under the assumption that the data collected in the smart city development will be private sector data. There are indications buried in presentations and documentation that also suggest that Sidewalk Labs considers that it will ‘own’ the data. There is a great deal of talk in meetings and in documents about PIPEDA, which also indicates that there is an assumption between the parties that the data is private sector data. But what is the basis for this assumption? Governments can contract with a private sector company for data collection, data processing or data stewardship – but the private sector company can still be considered to act as an agent of the government, with the data being legally under the custody or control of the government and subject to public sector privacy and freedom of information laws. The presence of a private sector actor does not necessarily make the data private sector data.

If the data is private sector data, then PIPEDA will apply, and there will be no applicable access to information regime. PIPEDA also has different rules regarding consent to collection than are found in MFIPPA. If the data is considered ultimately to be municipal data, then it will be subject to MFIPPA’s rules regarding access and privacy, and it will be governed by the City of Toronto’s information management policies. These are very different regimes, and so the question of which one applies is quite fundamental. It is time for there to be a clear and forthright answer to this question.

Published in Privacy

A Global News story about Statistics Canada’s collection of detailed financial data of half a million Canadians has understandably raised concerns about privacy and data security. It also raises interesting questions about how governments can or should meet their obligations to produce quality national statistics in an age of big data.

According to Andrew Russell’s follow-up story, Stats Canada plans to collect detailed customer information from Canada’s nine largest banks. The information sought includes account balances, transaction data, and credit card and bill payments. It is unclear whether the collection has started.

As a national statistical agency, Statistics Canada is charged with the task of collecting and producing data that “ensures Canadians have the key information on Canada's economy, society and environment that they require to function effectively as citizens and decision makers.” Canadians are perhaps most familiar with providing census data to Statistics Canada, including more detailed data through the long form census. However, the agency’s data collection is not limited to the census.

Statistics Canada’s role is important, and the agency has considerable expertise in carrying out its mission and in protecting privacy in the data it collects. This is not to say, however, that Statistics Canada never makes mistakes and never experiences privacy breaches. One of the concerns, therefore, with this large-scale collection of frankly sensitive data is the increased risk of privacy breaches.

The controversial collection of detailed financial data finds its legislative basis in this provision of the Statistics Act:

13 A person having the custody or charge of any documents or records that are maintained in any department or in any municipal office, corporation, business or organization, from which information sought in respect of the objects of this Act can be obtained or that would aid in the completion or correction of that information, shall grant access thereto for those purposes to a person authorized by the Chief Statistician to obtain that information or aid in the completion or correction of that information. [My emphasis]

Essentially, it confers enormous power on Stats Canada to request “documents or records” from third parties. Non-compliance with a request is an offence under s. 32 of the Act, which carries a penalty on conviction of a fine of up to $1000. A 2017 amendment to the legislation removed the possibility of imprisonment for this offence.

In case you were wondering whether Canada’s private sector data protection legislation offers any protection when it comes to companies sharing customer data with Statistics Canada, rest assured that it does not. Paragraph 7(3)(c.1) of the Personal Information Protection and Electronic Documents Act provides that an organization may disclose personal information without the knowledge or consent of an individual where the disclosure is:

(c.1) made to a government institution or part of a government institution that has made a request for the information, identified its lawful authority to obtain the information and indicated that

[. . .]

(iii) the disclosure is requested for the purpose of administering any law of Canada or a province

According to the Global News story, Statistics Canada notified the Office of the Privacy Commissioner about its data collection plan and obtained the Commissioner’s advice. In his recent Annual Report to Parliament, the Commissioner reported on Statistics Canada’s growing practice of seeking private sector data:

We have consulted with Statistics Canada (StatCan) on a number of occasions over the past several years to discuss the privacy implications of its collection of administrative data – such as individuals’ mobile phone records, credit bureau reports, electricity bills, and so on. We spoke with the agency about this again in the past year, after a number of companies contacted us with concerns about StatCan requests for customer data.

The Commissioner suggested that Stats Canada might consider collecting only data that has been de-identified at source, rather than detailed personal information. He also recommended an ongoing assessment of the necessity and effectiveness of such programs.

The Commissioner also indicated that one of the problems with the controversial data collection by Statistics Canada is its lack of openness. He stated: “many Canadians might be surprised to learn the government is collecting their information in this way and for this purpose.” While part of this lack of transparency lies in the decision not to be more upfront about the data collection, part of it lies in the fact that the legislation itself – while capable of being read to permit this type of collection – clearly does not expressly contemplate it.

Section 13 was drafted in a pre-digital, pre-big data era. It speaks of “documents or records”, and not “data”. While it is possible to interpret it so as to include massive quantities of data, the original drafters no doubt contemplated a collection activity on a much more modest scale. If Section 13 really does include the power to ask any organization to share its data with Stats Canada, then it has become potentially limitless in scope. At the time it was drafted, the limits were inherent in the analogue environment. There was only so much paper Stats Canada could ask for, and only so much paper it had the staff to process. In addition, there was only so much data that entities and organizations collected, because they experienced the same limitations. The digital era means not only that there is a vast and increasing amount of detailed data collected by private sector organizations, but that this data can be transferred in large volumes with relative ease, and can be processed and analyzed with equal facility.

Statistics Canada is not the only national statistics organization to be using big data to supplement and enhance its data collection and generation. In some countries where statistical agencies struggle with a lack of human resources and funding, big data from the private sector offer opportunities to meet the data needs of their governments and economies. Statistical agencies everywhere recognize the potential of big data to produce more detailed, fine-grained and reliable data about many aspects of the economy. For example, the United Nations maintains a big data project inventory that catalogues experiments by national statistical agencies around the world with big data analytics. Remember the cancellation of the long form census by the Harper government? This was not a measure to protect Canadians’ privacy by collecting less information; it was motivated by a belief that better and more detailed data could be sought using other means – including reliance on private sector data.

It may well be that Statistics Canada needs the power to collect digital data to assist in data collection programs that serve national interests. However, the legislation that authorizes such collection must be up-to-date with our digital realities. Transparency requires an amendment to the legislation that would specifically enable the collection and use of digital and big data from the private sector for statistical purposes. Debate over the scope and wording of such a provision would give both the public and the potential third party data sources an opportunity to identify their concerns. It would also permit the shaping of limits and conditions that are specific to the nature and risks of this form of data collection.

Published in Privacy
Wednesday, 12 September 2018 13:44

Smart cities data - governance challenges

This post gives a brief overview of a talk I am giving September 12, 2018, on a panel hosted by the Centre for Law, Technology and Society at uOttawa. The panel title is ‘Smart and the City’.

 

This post (and my presentation) explores the concept of the ‘smart’ city and lays the groundwork for a discussion of governance by exploring the different types of data collected in so-called smart cities.

Although the term ‘smart city’ is often bandied about, there is no common understanding of what it means. Anthony Townsend has defined smart cities as “places where information technology is combined with infrastructure, architecture, everyday objects, and even our bodies to address social, economic, and environmental problems.” (A. Townsend, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. (New York: W.W. Norton & Co., 2013), at p. 15). This definition emphasizes the embedding of information technologies within cities with the goal of solving a broad range of urban problems. Still, there is uncertainty as to which cities are ‘smart’ or at what point a city passes the invisible ‘smart’ threshold.

Embedded technologies are multiple and ever-evolving, and many are already in place in the cities in which we live. Technologies that have become relatively commonplace include smart transit cards, GPS systems on public vehicles (e.g.: buses, snowplows, emergency vehicles, etc.), smart metering for utilities, and surveillance and traffic cameras. Many of the technologies just identified collect data; smart technologies also process data using complex algorithms to generate analytics that can be used in problem identification and problem solving. Predictive policing is an example of a technology that generates information based on input data and complex algorithms.

While it is possible for a smart city to be built from the ground up, this is not the most common type of smart city. Instead, most cities become ‘smarter’ by increments, as governments adopt one technology after another to address particular needs and issues. While both from-the-ground-up and incremental smart cities raise important governance issues, it is the from-the-ground-up projects (such as Sidewalk Toronto) that get the most public attention. With incremental smart cities, the piecemeal adoption of technologies often occurs quietly, without notice, and thus potentially without proper attention being paid to important overarching governance issues such as data ownership and control, privacy, transparency, and security.

Canada has seen two major smart cities initiatives launched in the last year. These are the federal government’s Smart Cities Challenge – a contest between municipalities to fund the development of smart cities projects – and the Sidewalk Toronto initiative to create a from-the-ground-up smart development in Toronto’s Quayside area. Although Canadian cities have been becoming ‘smart’ by increments for some time now, these two high-profile initiatives have sparked discussion of the public policy issues, bringing important governance issues to the forefront.

These initiatives, like many others, have largely been conceived of and presented to the public as technology, infrastructure, and economic development projects. Rather than acknowledging up-front the need for governance innovation to accompany the emerging technologies, governance tends to get lost in the hype. Yet it is crucial. Smart cities feed off data, and residents are primary sources. Much of the data collected in smart cities is personal information, raising obvious privacy issues. Issues of ownership and control over smart cities data (whether personal or non-personal) are also important. They are relevant to who gets to access and use the data, for what purposes, and for whose profit. The public outcry over the Sidewalk Toronto project (examples here, here and here) clearly demonstrates that cities are not just tech laboratories; they are the places where we try to live decent and meaningful lives.

The governance issues facing so-called smart cities are complex. They may be difficult to disentangle from the prevailing ‘innovate or perish’ discourse. They are also rooted in technologies that are rapidly evolving. Existing laws and legal and policy frameworks may not be fully adequate to address smart cities challenges. This means that the governance issues raised by smart cities may require a rethinking of the existing law and policy infrastructure almost at pace with the emerging and evolving technologies.

The complexity of the governance challenges may be better understood when one considers the kind of data collected in smart cities. The narrower the categories of data, the more manageable data governance in the smart city will seem. However, the nature of information technologies, including the types and locations of sensors, and the fact that many smart cities are built incrementally, require a broad view of the types of data at play in smart cities. Here are some kinds of data collected and used in smart cities:

 

· traditional municipal government data (e.g. data about registrants or applicants for public housing or permits; data about water consumption, infrastructure, waste disposal, etc.)

· data collected by public authorities on behalf of governments (e.g. electrical consumption data; transit data, etc.)

· sensor data (e.g. data from embedded sensors such as traffic cameras, GPS devices, environmental sensors, smart meters)

· data sourced from private sector companies (e.g. data about routes driven or cycled from companies such as Waze or Strava; social media data, etc.)

· data from individuals as sensors (e.g. data collected about the movements of individuals based on signals from their cell phones; data collected by citizen scientists; crowd-sourced data, etc.)

· data that is the product of analytics (e.g. predictive data, profiles, etc.)

 

Public sector access to information and protection of privacy legislation provides a partial framework for transparency and privacy when it comes to public sector data, but such legislation is clearly not well adapted to the diversity of smart cities data. While some data will be clearly owned and controlled by the municipality, other data will not be. Further, the increasingly complex relationship between public and private sectors around input data and data analytics means that there will be a growing number of conflicts between rights of access and transparency on the one hand, and the protection of confidential commercial information on the other.

Given that few ‘smart’ cities will be built from the ground up (with the potential for integrated data governance mechanisms), the complexity and diversity of smart cities data and technologies creates a stark challenge for developing appropriate data governance.

 

(Sorry to leave a cliff hanger – I have some forthcoming work on smart cities data governance which I hope will be published by the end of this year. Stay tuned!)

 

 

Published in Privacy

A recent Federal Court decision highlights the risks to privacy that could flow from unrestrained access by government to data in the hands of private sector companies. It also demonstrates the importance of judicial oversight in ensuring transparency and the protection of privacy.

The Income Tax Act (ITA) gives the Minister of National Revenue (MNR) the power to seek information held by third parties where it is relevant to the administration of the income tax regime. However, where the information sought is about unnamed persons, the law requires judicial oversight. A judge of the Federal Court must review and approve the information “requirement”. Just such a matter arose in Canada (Minister of National Revenue) v. Hydro-Québec. The MNR sought information from Hydro-Québec, the province’s electrical utility, about a large number of its business customers. Only a few classes of customers, such as heavy industries that consumed very large amounts of electricity, were excluded. Hydro itself did not object to the request and was prepared to fulfil it if ordered to do so by the Federal Court. The request was considered by Justice Roy, who noted that because the information was about unnamed and therefore unrepresented persons, it was “up to the Court to consider their interests.” (at para 5)

Under s. 231.2(3) of the ITA, before ordering the disclosure of information about unnamed persons, a judge must be satisfied that:

(a) the person or group is ascertainable; and

(b) the requirement is made to verify compliance by the person or persons with any duty or obligation under this Act.

The information sought from Hydro in digital format included customer names, business numbers, full billing addresses, addresses of each place where electricity is consumed, telephone numbers associated with the account, billing start dates, and, if applicable, end dates, and any late payment notices sent to the customer.

Justice Roy noted that no information had been provided to the court to indicate whether the MNR had any suspicions about the tax compliance of business customers of Hydro-Québec. Nor was there much detail about what the MNR planned to do with the information. The documents provided by the MNR, as summarized by the Court, stated that the MNR was “looking to identify those who seem to be carrying on a business but failed to file all the required income tax returns.” (at para 14) However, Justice Roy noted that there were clearly also plans to share the information with other groups at the Canada Revenue Agency (CRA). These groups would use the information to determine “whether the individuals and companies complied with their obligations under the ITA and the ETA [the Excise Tax Act]”. (at para 14)

Justice Roy was sympathetic to the need of government to have powerful means of enforcing tax laws that depend upon self-reporting of income. However, he found that what the MNR was attempting to do under s. 231.2 went too far. He ruled that the words used in that provision had to be interpreted in light of “the right of everyone to be left alone by the state”. (at para 28) He observed that it is clear from the wording of the Act that “Parliament wanted to limit the scope of the Minister’s powers, extensive as they are.” (at para 68)

Justice Roy carefully reviewed past jurisprudence interpreting s. 231.2(3). He noted that the section has always received a strict interpretation by judges. In past cases where orders had been issued, the groups of unnamed persons about whom information was sought were clearly ascertainable, and the information sought was “directly related to these taxpayers’ tax status because it is financial in nature.” (at para 63) In the present case, he found that the group was not ascertainable, and the information sought “has nothing to do with tax-status.” (at para 63)

In his view, the aim of the request was to determine the identity of business customers of Hydro-Québec. The information was not sought in relation to a good faith audit grounded in a proper factual basis. Because it was a fishing expedition meant to determine who might suitably be audited, the group of individuals identified by Hydro-Québec could not be considered “ascertainable”, as was required by the law. Justice Roy noted that no information was provided to demonstrate what “business customer” meant. He observed that “the Minister would render the concept of ‘ascertainable group’ meaningless if, in the context of the ITA, she may claim that any group is an ascertainable group.” (at para 78) He opined that giving such a broad meaning to “ascertainable” could be an abuse that would lead to violations of privacy by the state.

Justice Roy also found that the second condition of s. 231.2(3) was not met. Section 231.2(3)(b) required that the information be sought in order “to verify compliance by the person or persons in the group with any duty or obligation under this Act.” He observed that the MNR was seeking an interpretation of this provision that would amount to: “Any information the Minister may consider directly or indirectly useful”. (at para 80) Justice Roy favoured a much more restrictive interpretation, limiting it to information that could “shed light on compliance with the Act.” (at para 80) He found that “the knowledge of who has a business account with Hydro-Québec does not meet the requirement of a more direct connection between the information and documents and compliance with the Act.” (at para 80)

The MNR had argued that if the two conditions of s. 231.2(3) were met, then a judge was required to issue the authorization. Because Justice Roy found the two conditions were not met, the argument was moot. Nevertheless, he noted that even if he had found the conditions to be met, he would still have had the discretion to deny the authorization if to grant it would harm the public interest. In this case, there would be a considerable invasion of privacy “given the number of people indiscriminately included in the requirement for which authorization of the Court is being sought.” (at para 88) He also found that the fact that digital data was sought increased the general risk of harm. He observed that “the applicant chose not to restrict the use she could make of the large quantity of information she received” (at para 91) and that it was clearly planned that the information would be shared within the CRA. Justice Roy concluded that even if he erred in his interpretation of the criteria in s. 231.2(3), and these criteria had to be given a broad meaning, he would still not have granted the authorization on the basis that “judicial intervention is required to prevent such an invasion of the privacy of many people in Quebec.” (at para 96) Such intervention would particularly be required where “the fishing expedition is of unprecedented magnitude and the information being sought is far from serving to verify compliance with the Act.” (at para 96)

This is a strong decision which clearly protects the public interest. It serves to highlight the privacy risks in an era where both private and public sectors amass vast quantities of personal information in digital form. Although the ITA provides a framework to ensure judicial oversight in order to limit potential abuses, there are still far too many other contexts where information flows freely and where there may be insufficient oversight, transparency or accountability.

 

Published in Privacy

The report of an investigator for Ontario’s Office of the Information and Privacy Commissioner (OIPC) into personal information contained within a published tribunal decision adds to the debate around how to balance individual privacy with the open courts principle. In this case (Privacy Complaint No. PC17-9), the respondent is the Ontario Human Rights Tribunal (OHRT), established under the Ontario Human Rights Code. The OHRT often hears matters involving highly sensitive personal information. Where an adjudicator considers it relevant to their decision, they may include this information in their written reasons. Although a party may request that the decision be anonymized to protect their personal information, OHRT adjudicators have been sparing in granting requests for anonymization, citing the importance of the open courts principle.

The OIPC investigated after receiving a complaint about the reporting of sensitive personal information in an OHRT decision. The interesting twist in this case was that the personal information at issue was not that of the person who had complained to the OHRT (the ‘OHRT complainant’), and whose complaint had led to the tribunal hearing. Rather, it was the personal information of the OHRT complainant’s sister and mother. The complaint to the OIPC was made by the sister (the ‘OIPC complainant’) on behalf of herself and her mother. Although the sister’s and mother’s names were not used in the OHRT decision, they argued that they were easily identifiable since they lived in a small town and shared a distinctive surname with the OHRT complainant. The OIPC investigator agreed. She noted that the information at the heart of the complaint consisted of “the applicant’s name, the applicant’s mother’s age, the mother’s primary language, the number of medications the applicant’s mother was taking, the reason for the medication, the state of the mother’s memory and the city the complainant resides in.” (at para 19) The investigator found that although the names of the OIPC complainant and her mother were not mentioned, their relationship to the OHRT complainant was. She observed: “Given that the applicant’s name is available, the uniqueness of the names and the size of the community, it is reasonable to assume that someone reading the decision would be able to identify her mother and sister and connect the information in the decision to them.” (at para 26)

Since the OHRT is a public body, and the information at issue was personal information, the OIPC complainant argued that the OHRT had breached the province’s Freedom of Information and Protection of Privacy Act (FIPPA) by publishing this information in its decision. For its part, the OHRT argued that the information was exempted from the application of FIPPA under s. 37 of that Act because it was “personal information that is maintained for the purpose of creating a record that is available to the general public”. It argued that it has an adjudicative mandate under the Human Rights Code and that the Statutory Powers Procedure Act (SPPA) permits it to determine its own practices and procedures. Although neither the Human Rights Code nor the SPPA addresses the publication of decisions, the OHRT had decided that as a matter of practice, its decisions would be published, including on the public legal information website CanLII. The OHRT also argued that its proceedings were subject to the open courts principle. This argument was supported by the recent Ontario Superior Court decision (discussed here), which confirmed that the open courts principle applied to the decisions of statutory tribunals. The investigator agreed with the OHRT. She observed that “[o]penness at tribunals tends to improve the quality of testimony and for that reason is conducive to the pursuit of truth in adjudicative proceedings.” (at para 56) She noted as well that the other elements of the open courts principle, including “oversight of decision-makers, the integrity of the administration of justice, and the educational and democracy-enhancing features of open courts” (at para 57) were all linked to the Charter value of freedom of expression.

She accepted that the publication of reasons for decision was part of the openness principle, and concluded that: “The publication of decisions is an aspect of the Tribunal’s control over its own process and the information that is included in the Tribunal’s decisions is within the adjudicator’s discretion in providing reasons for those decisions.” (at para 65) She noted that many public values were served by the publication of the Tribunal’s decisions: “The publication of its decisions supports public confidence in the justice system, serves an educational purpose, promotes accountability by the Tribunal for its decision-making, and ensures that the public has the information necessary to exercise the Charter right to freedom of expression.” (at para 66) As a result, she concluded that s. 37 of FIPPA excluded the published decisions from the application of the privacy provisions of the Act.

This seems like an appropriate conclusion given the legislative framework. However, it does raise two general points of importance with respect to how the OHRT deals with personal information in its decisions. First, human rights legislation exists in an attempt to provide recourse and redress for those who experience discrimination in contexts which closely affect their lives, such as employment, accommodation, and the receipt of services. The prohibited grounds of discrimination are ones which touch on highly personal and intimate aspects of people’s lives, relating to sexual identity, national origin, religion, and mental or physical disability, to provide but a few examples. Personal information of this kind is generally considered highly sensitive. The spectre that it will be published – online – alongside an individual’s name might be daunting enough to prevent some from seeking redress under the legislation at all. For example, fear that the online publication of one’s mental health information might make it difficult to find future employment could prevent a person from filing a complaint of discrimination. This would seem to subvert the purpose of human rights legislation. And yet, human rights tribunals have been reluctant to grant requests for anonymization, citing the open courts principle.

Secondly, this case raises the further issue of how the sensitive personal information of third parties – who were neither witnesses before the tribunal nor complainants to the OHRC – ended up in a decision published online, and for which the Tribunal had refused an anonymization request. The OIPC investigator concluded her report by recommending that the OHRT “continue to apply data minimization principles in the drafting of its decisions and include only personal information necessary to achieve the purpose of those decisions.” (at para 72) In the absence of clear directives for dealing with the online publication of personal information in court or tribunal decisions, and appropriate training for adjudicators, this gentle reminder seems to be the best that complainants can hope for. It is not good enough. One need only recall the complaints to the Office of the Privacy Commissioner of Canada about the offshore website that had scraped decisions from CanLII and court websites in order to make them available in fully indexable form over the internet, to realize that we have important unresolved issues about how personal information is published and disseminated in court and tribunal decisions in Canada.

Published in Privacy

Some years ago, a reporter from the Toronto Star filed an access to information request to obtain the names of the top 100 physician billers to the Ontario Health Insurance Plan (OHIP). She also sought the amounts billed, and the physicians’ fields of specialization. The information was in the hands of the Ministry of Health and Long-Term Care, and the request was made under Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA). The Ministry refused to disclose the records on the basis that they constituted the physicians’ personal information. An adjudicator with the Ontario Information and Privacy Commissioner’s Office disagreed, and ordered disclosure. An appeal by the Ontario Medical Association (OMA) to the Ontario Divisional Court was unsuccessful (discussed here). On August 3, 2018, the Ontario Court of Appeal dismissed the OMA’s further appeal of that decision.

The relatively brief and unanimous Court of Appeal decision made short work of the OMA’s arguments. The Court found that the adjudicator’s determination that the information was not personal information was reasonable. FIPPA specifically excludes from the definition of personal information “the name, title, contact information or designation of an individual that identifies the individual in a business, professional or official capacity”. The OMA had argued that the disclosure of the names in conjunction with the billing information meant that the disclosure would include personal information that “describes an individual’s finances, income, assets, liabilities…”. FIPPA provides in s. 21(3) that the disclosure of personal information is presumptively an invasion of privacy when it falls within this category. However, the Court found that the billing information constituted “the affected physicians’ gross revenue before allowable business expenses such as office, personnel, lab equipment, facility and hospital expenses.” (at para 25) The Court agreed with the adjudicator that the gross billing information did not reveal the actual income of the physicians. It stated: “where, as here, an individual’s gross professional or business income is not a reliable indicator of the individual’s actual personal finances or income, it is reasonable to conclude not only that the billing information is not personal information as per s. 2(1), but also that it does not describe “an individual’s finances [or] income”, for the purpose of s. 21(3)(f).” (at para 26)

The OMA had resisted disclosure because the billing information might give the public, who might not understand the costs associated with running a medical practice, a distorted idea of the physicians’ personal finances. Ironically, the Court found that the billing information and actual income were so different from one another that the billing information did not amount to personal information. The OMA had objected to what it considered to be the OIPC’s changed position on the nature of this type of information; in the past, the OIPC had accepted that this information was personal information and had not ordered disclosure. The Ontario Court of Appeal observed that the adjudicator was not bound to follow precedent; it also observed that there were differences of opinion in past OIPC decisions on this issue, and no clear precedent existed in any event.

The decision is an important one for access to information. A publicly funded health care system consumes substantial resources, and there is a public interest in understanding, analyzing, critiquing and discussing how those resources are spent. The OMA was understandably concerned that public discussions not centre on particular individuals. However, governments have been moving towards greater transparency when it comes to monies paid to specific individuals and businesses, whether they are contractors or even public servants. As the Court of Appeal noted, FIPPA balances access to information with the protection of personal privacy. The public interest clearly prevailed in this instance.

Published in Privacy

A recent Finding from the Office of the Privacy Commissioner of Canada contains a consideration of the meaning of “publicly available information”, particularly as it relates to social media profiles. This issue is particularly significant given a recent recommendation by the ETHI committee in its Report on PIPEDA reform. PIPEDA currently contains a very narrowly framed exception to the requirement of consent for “publicly available information”. ETHI had recommended amending the definition to make it “technologically neutral”. As I argued here, such a change would make it open season for the collection, use and disclosure of social media profiles of Canadians.

The Finding, issued on June 12, 2018, came after multiple complaints were filed by Canadians about the practices of a New Zealand-based social media company, Profile Technology Ltd (PTL). The company had obtained Facebook user profile data from 2007 and 2008 under an agreement with Facebook. While its plan may originally have been to create a powerful search engine for Facebook, in 2011 it launched its own social media platform. It used the Facebook data to populate its platform with profiles. Individuals whose profiles were created on the site had the option of ‘claiming’ them. PTL also provided two avenues for individuals who wished to delete the profiles. If an email address had been part of the original data obtained from Facebook and was associated with the PTL profile, a user could log in using that email address and delete the account. If no email address was associated with the profile, the company required individuals to set up a helpdesk ticket and to provide copies of official photo identification. A number of the complainants to the OPC indicated that they were unwilling to share their photo IDs with a company that had already collected, used and disclosed their personal information without their consent.

The complainants’ concerns were not simply that their personal information had been taken and used to populate a new social media platform without their consent. They also felt harmed by the fact that the data used by PTL was from 2007-2008, and did not reflect any changes or choices they had since made. One complaint received by the OPC related to the fact that PTL had reproduced a group that had been created on Facebook, but that had since been deleted from Facebook. Within this group, allegations had been made about the complainant that he/she considered defamatory and bullying. The complainant objected to the fact that the group persisted on PTL and that the PTL platform did not permit changes to public groups at the behest of single individuals, on the basis that it treated the group description “as part of the profile of every person who has joined that group, therefore modifying the group would be like modifying all of those people’s profiles and we cannot modify their profiles without their consent.” (at para 55)

It should be noted that although the data was initially obtained by PTL from Facebook under licence from Facebook, Facebook’s position was that PTL had used the data in violation of the licence terms. Facebook had commenced proceedings against PTL in 2013 which resulted in a settlement agreement. There was some back and forth over whether the terms of the agreement had been met, but no information was available regarding the ultimate resolution.

The Finding addresses a number of interesting issues. These include the jurisdiction of the OPC to consider this complaint about a New Zealand based company, the sufficiency of consent, and data retention limits. This post focuses only on the issue of whether social media profiles are “publicly available information” within the meaning of PIPEDA.

PTL argued that it was entitled to benefit from the “publicly available information” exception to the requirement for consent for collection and use of personal information because the Facebook profiles of the complainants were “publicly available information”. The OPC disagreed. It noted that the exception for “publicly available information”, found in ss. 7(1)(d) and 7(2)(c.1) of PIPEDA, is defined by regulation. The applicable provision is s. 1(e) of the Regulations Specifying Publicly Available Information, which requires that “the personal information must appear in a publication, the publication must be available to the public, and the personal information has to have been provided by the individual.”(at para 87) The OPC rejected PTL’s argument that “publication” included public Facebook profiles. In its view, the interpretation of “publicly available information” must be “in light of the scheme of the Act, its objects, and the intention of the legislature.” (at para 89) It opined that neither a Facebook profile nor a ‘group’ was a publication. It noted that the regulation makes it clear that “publicly available information” must receive a restrictive interpretation, and reflects “a recognition that information that may be in the public domain is still worthy of privacy protection.” (at para 90) The narrow interpretation of this exception to consent is consistent with the fact that PIPEDA has been found to be quasi-constitutional legislation.

In finding that the Facebook profile information was not publicly available information, the OPC considered that the profiles at issue “were created at a time when Facebook was relatively new and its policies were in flux.” (at para 92) Thus it would be difficult to determine that the intention of the individuals who created profiles at that time was to share them broadly and publicly. Further, at the time the profiles were created, they were indexable by search engines by default. In an earlier Finding, the OPC had determined that this default setting “would not have been consistent with users’ reasonable expectations and was not fully explained to users” (at para 92). In addition, the OPC noted that Facebook profiles were dynamic, and that their ‘owners’ could update or change them at will. In such circumstances, “treating a Facebook profile as a publication would be counter to the intention of the Act, undermining the control users otherwise maintain over their information at the source.” (at para 93) This is an interesting point, as it suggests that the dynamic nature of a person’s online profile prevents it from being considered a publication – it is more like an extension of a user’s personality or self-expression.

The OPC also noted that even though the profile information was public, to qualify for the exception it had to be contributed by the individual. This is not always the case with profile information – in some cases, for example, profiles will include photographs that contain the personal information of third parties.

This Finding, which is not a decision, and not binding on anyone, shows how the OPC interprets the “publicly available information” exception in its home statute. A few things are interesting to note:

· The OPC finds that social media profiles (in this case from Facebook) are different from “publications” in the sense that they are dynamic and reflect an individual’s changing self-expression

· Allowing the capture and re-use, without consent, of self-expression from a particular point in time robs the individual not only of control over their personal information but also of control over how they present themselves to the public. This too makes profile data different from other forms of “publicly available information” such as telephone or business directory information, or information published in newspapers or magazines.

· The OPC’s discussion of Facebook’s problematic privacy practices at the time the profiles were created muddies the discussion of “publicly available information”. Even if Facebook had had appropriate consent practices in place at the time, that should not change the conclusion that social media profiles are not “publicly available information” for the purposes of the exception.


It is also worth noting that a complaint against PTL to the New Zealand Office of the Privacy Commissioner proceeded on the assumption that PTL did not require consent because the information was publicly available. In fact, the New Zealand Commissioner ruled that no breach had taken place.

Given the ETHI Report’s recommendation, it is important to keep in mind that the definition of “publicly available information” could be modified (although the government’s response to the ETHI report indicates some reservations about the recommendation to change the definition of publicly available information). Because the definition is found in a regulation, a modification would not require legislative amendment. As is clear from the ETHI report, there are a number of industries and organizations that would love to be able to harvest and use social media platform personal information without the need to obtain consent. Vigilance is required to ensure that these regulations are not altered in a way that dramatically undermines privacy protection.


Published in Privacy

The Supreme Court of Canada has issued its unanimous decision in The Queen v. Philip Morris International Inc. This appeal arose out of an ongoing lawsuit brought by the province of British Columbia against tobacco companies to recover the health care costs associated with tobacco-related illnesses in the province. Similar suits brought by other provincial governments are at different stages across the country. In most cases, the litigation is brought under provincial legislation passed specifically to enable and to structure this recourse.

The central issue in this case concerned the degree of access to be provided to Philip Morris International (PMI) to the databases relied upon by the province to calculate tobacco-related health care costs. PMI wanted access to the databases in order to develop its own experts’ opinions on the nature and extent of these costs, and to challenge the opinions to be provided by provincial experts who would have full access to the databases. Although the databases contained aggregate, de-identified data, the government refused access, citing the privacy interests of British Columbians in their health care data. As a compromise, it offered limited and supervised access to the databases at a Statistics Canada Data Centre. Although the other tobacco company defendants accepted this compromise, PMI did not, and sought a court order granting it full access. The court at first instance and later the Court of Appeal for British Columbia sided with PMI and ordered that access be provided. The SCC overturned this order.

This case had been watched with interest by many because of the broader issues onto which it might have shed some light. On one view, the case raised issues about how to achieve fairness in litigation where one party relies on its own vast stores of data – which might include confidential commercial data – and the other party seeks to test the validity or appropriateness of analytics based on this data. What level of access, if any, should be granted, and under what conditions? Another issue of broader interest was what measures are appropriate to protect privacy – including the deemed undertaking rule – where potentially re-identifiable personal information is sought in discovery. Others were interested in knowing what parameters the court might set for assessing the re-identification risk where anonymized data are disclosed.

Those who hoped for broader take-aways for big data, data analytics and privacy, are bound to be disappointed in the decision. In deciding in favour of the BC government, the Supreme Court largely confined its decision to an interpretation of the specific language of the Tobacco Damages and Health Care Costs Recovery Act. The statute offered the government two ways to proceed against tobacco companies – it could seek damages related to the healthcare costs of specific individuals, in which case the health records of those individuals would be subject to discovery, or it could proceed in a manner that considered only aggregate health care data. The BC government chose the latter route. Section 2(5) set out the rules regarding discovery in an aggregate action. The focus of the Supreme Court’s interpretation was s. 2(5)(b) of the Act which reads:

2(5)(b) the health care records and documents of particular individual insured persons or the documents relating to the provision of health care benefits for particular individual insured persons are not compellable except as provided under a rule of law, practice or procedure that requires the production of documents relied on by an expert witness [My emphasis]

While it was generally accepted that this meant that the tobacco companies could not have access to individual health care records, PMI argued that the aggregate data was not a document “relating to the provision of health care benefits for particular individual insured persons”, and therefore its production could be compelled.

The Supreme Court disagreed. Writing for the unanimous court, Justice Brown defined both “records” and “documents” as “means of storing information” (at para 22). He therefore found that the relevant databases “are both “records” and “documents” within the meaning of the Act.” (at para 22) He stated:

Each database is a collection of health care information derived from original records or documents which relate to particular individual insured persons. That information is stored in the databases by being sorted into rows (each of which pertains to a particular individual) and columns (each of which contains information about the field or characteristic that is being recorded, such as the type of medical service provided). (at para 22)

He also observed that many of the fields in the database were filled with data from individual patient records, making the databases “at least in part, collections of health care information taken from individuals’ clinical records and stored in an aggregate form alongside the same information drawn from the records of others.” (at para 23) As a result, the Court found that the databases qualified under the legislation as “documents relating to the provision of health care benefits for particular individual insured persons”, whether or not those individuals were identified within the database.

Perhaps the most interesting passage in the Court’s decision is the following:

The mere alteration of the method by which that health care information is stored — that is, by compiling it from individual clinical records into aggregate databases — does not change the nature of the information itself. Even in an aggregate form, the databases, to the extent that they contain information drawn from individuals’ clinical records, remain “health care records and documents of particular individual insured persons”. (at para 24)

A reader eager to draw lessons for use in other contexts might see the Court as saying that aggregate data derived from personal data are still personal data. This would certainly be important in the context of current debates about whether the deidentification of personal information removes it from the scope of private sector data protection laws such as the Personal Information Protection and Electronic Documents Act. But it would be a mistake to read that much into this decision. The latter part of the quoted passage grounds the Court’s conclusion on this point firmly in the language of the BC tobacco legislation. Later in the decision, the Court specifically rejects the idea that a “particular” individual under the BC statute is the same as an “identifiable individual”.

Because the case is decided on the basis of the interpretation of s. 2(5)(b), the Court neatly avoids a discussion of what degree of reidentification risk would turn aggregate or anonymized data into information about identifiable individuals. This topic is also of great interest in the big data context, particularly in relation to data protection law. Whether any degree of re-identification risk could be sufficiently mitigated by the deemed undertaking rule so as to permit discovery likewise remains unexplored territory; those looking for a discussion of the relationship between re-identification risk and the deemed undertaking rule will have to wait for a different case.

Published in Privacy

The pressure is on for Canada to amend its Personal Information Protection and Electronic Documents Act. The legislation, by any measure, is sorely out of date and not up to the task of protecting privacy in the big data era. We know this well enough – the House of Commons ETHI Committee recently issued a report calling for reform, and the government, in its response, has acknowledged the need for changes to the law. The current and past privacy Commissioners have also repeatedly called for reform, as have privacy experts. There are many deficiencies with the law – one very significant one is the lack of serious measures to enforce privacy obligations. In this regard, a recent private member’s bill proposes amendments that would give the Commissioner much more substantial powers of enforcement. Other deficiencies can be measured against the EU’s General Data Protection Regulation (GDPR). If Canada cannot meet the levels of protection offered by the GDPR, personal data flows from the EU to Canada could be substantially disrupted. Among other things, the GDPR addresses issues such as the right to be forgotten, the right to an explanation of how automated decisions are reached, data portability rights, and many other measures specifically designed to address the privacy challenges of the big data era.

There is no doubt that these issues will be the subject of much discussion and may well feature in any proposals to reform PIPEDA that will be tabled in Parliament, perhaps as early as this autumn. The goal of this post is not to engage with these specific issues of reform, as important as they are; rather, it is to tackle another very basic problem with PIPEDA and to argue that it too should be addressed in any legislative reform. Simply put, PIPEDA is a dog’s-breakfast statute that is difficult to read and understand. It needs a top-to-bottom rewriting according to the best principles of plain-language drafting.

PIPEDA’s drafting has been the subject of commentary by judges of the Federal Court who have the task of interpreting it. For example, in Miglialo v. Royal Bank of Canada, Justice Roy described PIPEDA as “a rather peculiar piece of legislation” and “not an easily accessible statute”. The Federal Court of Appeal in Telus v. Englander observed that PIPEDA was a “compromise as to form” and that “The Court is sometimes left with little, if any guidance at all”. In Johnson v. Bell Canada, Justice Zinn observed: “While Part I of the Act is drafted in the usual manner of legislation, Schedule 1, which was borrowed from the CSA Standard, is notably not drafted following any legislative convention.” In Fahmy v. Royal Bank of Canada, Justice Roy noted that “[t]hat a party would misunderstand the scope of the Act” was “hardly surprising”.

To understand why PIPEDA is such a mess requires some history. PIPEDA was passed by Parliament in 2000. Its enactment followed closely on the heels of the EU’s Data Protection Directive, which, like the GDPR, threatened to disrupt data flows to countries that did not meet minimum standards of private sector data protection. Canada needed private sector data protection legislation and it needed it fast. It was not clear that the federal government really had jurisdiction over private sector data protection, but it was felt that the rapid action needed did not leave time to develop cooperative approaches with the provinces. The private sector did not want such legislation. As a compromise, the government decided to use the CSA Model Code – a voluntary privacy code developed with multi-stakeholder input – as the normative heart of the statute. There had been enough buy-in to the Model Code that the government felt it could avoid excessive pushback from the private sector. The Code, therefore, originally drafted to provide voluntary guidance, was turned into law. The prime minister at the time, the Rt. Hon. Jean Chrétien, did not want Parliament’s agenda overburdened with new bills, so the data protection bill was grafted onto another bill addressing the completely different issue of electronic documents (hence the long, unwieldy name that gives rise to the PIPEDA acronym).

The result is a legislative Frankenstein. Keep in mind that this is a law aimed at protecting individual privacy. It is a kind of consumer-protection statute that should be user-friendly, but it is not. Most applicants to the Federal Court under PIPEDA are self-represented, and they clearly struggle with the legislation. The sad irony is that if a consumer wants to complain to the Privacy Commissioner about a company’s over-long, horribly convoluted, impossible to understand, non-transparent privacy policy, he or she will have to wade through a statute that is like a performance-art parody of that same privacy policy. Of course, the problem is not just one for ordinary consumers. Lawyers and even judges (as evidenced above) find PIPEDA to be impenetrable.

By way of illustration, if you are concerned about your privacy rights and want to know what they are, you will not find them in the statute itself. Instead, the normative provisions are in the CSA Model Code, which is appended as Schedule I of the Act. Part I of the Act contains some definitions, a few general provisions, and a whole raft of exceptions to the principle of consent. Section 6.1 tells you what consent means “for the purposes of clause 4.3 of Schedule 1”, but you will have to wait until you get to the schedule to get more details on consent. On your way to the Schedule you might get tangled up in Part II of the Act which is about electronic documents, and thus thoroughly irrelevant.

Because the Model Code was just that – a model code – it was drafted in a more conversational style, and includes notes that provide examples and illustrations. For the purposes of the statute, some of these notes were considered acceptable – others not. Hence, you will find the following statement in s. 2(2) of PIPEDA: “In this Part, a reference to clause 4.3 or 4.9 of Schedule 1 does not include a reference to the note that accompanies that clause.” So put a yellow sticky tab on clauses 4.3 and 4.9 to remind you not to consider those notes as part of the law (even though they are in the Schedule).

Then there is this: s. 5(2) of PIPEDA tells us: “The word should, when used in Schedule 1, indicates a recommendation and does not impose an obligation.” So use those sticky notes again. Or cross out “should” each of the fourteen times you find it in Schedule 1, and replace it with “may”.

PIPEDA also provides in ss. 7(4) and 7(5) that certain actions are permissible despite what is said in clause 4.5 of Schedule 1. Similar revisionism is found in s. 7.4. While clause 4.9 of Schedule 1 talks about requests for access to personal information made by individuals, section 8(1) in Part 1 of the Act tells us those requests have to be made in writing, and s. 8 goes on to provide further details on the right of access. Section 9 qualifies the right of access with “Despite clause 4.9 of Schedule 1….”. You can begin to see how PIPEDA may have contributed significantly to the sales of sticky notes.

If an individual files a complaint and is not satisfied with the Commissioner’s report of findings, he or she has a right to take the matter to the Federal Court if the issue fits within s. 14, which reads:


14 (1) A complainant may, after receiving the Commissioner’s report or being notified under subsection 12.2(3) that the investigation of the complaint has been discontinued, apply to the Court for a hearing in respect of any matter in respect of which the complaint was made, or that is referred to in the Commissioner’s report, and that is referred to in clause 4.1.3, 4.2, 4.3.3, 4.4, 4.6, 4.7 or 4.8 of Schedule 1, in clause 4.3, 4.5 or 4.9 of that Schedule as modified or clarified by Division 1 or 1.1, in subsection 5(3) or 8(6) or (7), in section 10 or in Division 1.1. [My emphasis]


Enough said.

There are a number of very important substantive privacy issues brought about by the big data era. We are inevitably going to see PIPEDA reform in the relatively near future, as a means of not only addressing these issues but of keeping us on the right side of the GDPR. As we move towards major PIPEDA reform, however, the government should seriously consider a crisp rewrite of the legislation. The maturity of Canada’s data protection regime should be made manifest in a statute that no longer needs to lean on the crutch of a model code for its legitimacy. Quite apart from the substance of such a document, it should:


· Set out its basic data protection principles in the body of the statute, near the front, and in a manner that is clear, readable and accessible to a lay public.

· Be a free-standing statute that deals with data protection and that does not deal with unrelated extraneous matters (such as electronic documents).


It is not a big ask. British Columbia and Alberta managed to do it when they created their own substantially similar data protection statutes. Canadians deserve good privacy legislation, and they deserve to have it drafted in a manner that is clear and accessible. Rewriting PIPEDA (and hence renaming it) should be part of the coming legislative reform.

Published in Privacy

On June 13, 2018 the Supreme Court of Canada handed down a decision that may have implications for how issues of bias in algorithmic decision-making in Canada will be dealt with. Ewert v. Canada is the result of an eighteen-year struggle by Mr. Ewert, a federal inmate and Métis man, to challenge the use of certain actuarial risk-assessment tools to make decisions about his carceral needs and about his risk of recidivism. His concerns, raised in his initial grievance in 2000, have been that these tools were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12) After his grievances went nowhere, he eventually sought a declaration in Federal Court that the tests breached his rights to equality and to due process under the Canadian Charter of Rights and Freedoms, and that they were also a breach of the Corrections and Conditional Release Act (CCRA), which requires the Correctional Service of Canada (CSC) to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” (s. 24(1)). Although the Charter arguments were unsuccessful, the majority of the Supreme Court of Canada agreed with the trial judge that CSC had breached its obligations under the CCRA. Two justices in dissent agreed with the Federal Court of Appeal that neither the Charter nor the CCRA had been breached.

Although this is not explicitly a decision about ‘algorithmic decision-making’ as the term is used in the big data and artificial intelligence (AI) contexts, the basic elements are present. An assessment tool developed and tested using a significant volume of data is used to generate predictive data to aid in decision-making in individual cases. The case also highlights a common concern in the algorithmic decision-making context: that either the data used to develop and train the algorithm, or the assumptions coded into the algorithm, create biases that can lead to inaccurate predictions about individuals who fall outside the dominant group that has influenced the data and the assumptions.

As such, my analysis is not about the particular circumstances of Mr. Ewert, nor is it about the impact of the judgement within the correctional system in Canada. Instead, I parse the decision to see what it reveals about how courts might approach issues of bias in algorithmic decision-making, and what impact the decision may have in this emerging context.

1. ‘Information’ and ‘accuracy’

A central feature of the decision of the majority in Ewert is its interpretation of s. 24(1) of the CCRA. To repeat the wording of this section, it provides that “The Service shall take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” [My emphasis] In order to conclude that this provision was breached, it was necessary for the majority to find that Mr. Ewert’s test results were “information” within the meaning of this section, and that the CCRA had not taken all reasonable steps to ensure its accuracy.

The dissenting justices took the view that when s. 24(1) referred to “information” and to the requirement to ensure its accuracy, the statute included only the kind of personal information collected from inmates, information about the offence committed, and a range of other information specified in s. 23 of the Act. The dissenting justices preferred the view of the CSC that “information” meant ““primary facts” and not “inferences or assessments drawn by the Service”” (at para 107). The majority disagreed. It found that when Parliament intended to refer to specific information in the CCRA it did so. When it used the term “information” in an unqualified way, as it did in s. 24(1), it had a much broader meaning. Thus, according to the majority, “the knowledge the CSC might derive from the impugned tools – for example, that an offender has a personality disorder or that there is a high risk that an offender will violently reoffend – is “information” about that offender” (at para 33). This interpretation of “information” is an important one. According to the majority, profiles and predictions applied to a person are “information” about that individual.

In this case, the Crown had argued that s. 24(1) should not apply to the predictive results of the assessment tools because it imposed an obligation to ensure that “information” is “as accurate” as possible. It argued that the term “accurate” was not appropriate to the predictive data generated by the tools. Rather, the tools “may have “different levels of predictive validity, in the sense that they predict poorly, moderately well or strongly””. (at para 43) The dissenting justices were clearly influenced by this argument, finding that: “a psychological test can be more or less valid or reliable, but it cannot properly be described as being “accurate” or “inaccurate”.” (at para 115) According to the dissent, all that was required was that accurate records of an inmate’s test scores must be maintained – not that the tests themselves must be accurate. The majority disagreed. In its view, the concept of accuracy could be adapted to different types of information. When applied to psychological assessment tools, “the CSC must take steps to ensure that it relies on test scores that predict risks strongly rather than those that do so poorly.” (at para 43)

It is worth noting that the Crown also argued that the assessment tools were important in decision-making because “the information derived from them is objective and thus mitigates against bias in subjective clinical assessments” (at para 41). While the underlying point is that the tools might produce more objective assessments than individual psychologists who might bring their own biases to an assessment process, the use of the term “objective” to describe the output is troubling. If the tools incorporate biases, or are not appropriately sensitive to cultural differences, then the output is ‘objective’ in only a very narrow sense of the word, and the use of the word masks underlying issues of bias. Interestingly, the majority took the view that if the tools are considered useful “because the information derived from them can be scientifically validated. . . this is all the more reason to conclude that s. 24(1) imposes an obligation on the CSC to take reasonable steps to ensure that the information is accurate.” (at para 41)

It should be noted that while this discussion all revolves around the particular wording of the CCRA, Principle 4.6 of Schedule I of the Personal Information Protection and Electronic Documents Act (PIPEDA) contains the obligation that: “Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.” Further, s. 6(2) of the Privacy Act provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” A similar interpretation of “information” and “accuracy” in these statutes could be very helpful in addressing issues of bias in algorithmic decision-making more broadly.

2. Reasonable steps to ensure accuracy

According to the majority, “[t]he question is not whether the CSC relied on inaccurate information, but whether it took all reasonable steps to ensure that it did not.” (at para 47). This distinction is important – it means that Mr. Ewert did not have to show that his actual test scores were inaccurate, something that would be quite burdensome for him to do. According to the majority, “[s]howing that the CSC failed to take all reasonable steps in this respect may, as a practical matter, require showing that there was some reason for the CSC to doubt the accuracy of information in its possession about an offender.” (at para 47, my emphasis) The majority noted that the trial judge had found that “the CSC had long been aware of concerns regarding the possibility of psychological and actuarial tools exhibiting cultural bias.” (at para 49) These concerns had led to research being carried out in other jurisdictions about the validity of the tools when used to assess certain other cultural minority groups. The majority also noted that the CSC had carried out research “into the validity of certain actuarial tools other than the impugned tools when applied to Indigenous offenders” (at para 49) and that this research had led to those tools no longer being used. However, in this case, in spite of the concerns, the CSC had taken no steps to assess the validity of the tools, and it continued to apply them to Indigenous offenders. The majority noted that the CCRA, which sets out guiding principles in s. 4, specifically requires correctional policies and practices to respect cultural, linguistic and other differences and to take into account “the special needs of women, aboriginal peoples, persons requiring mental health care and other groups” (s. 4(g)). The majority found that this principle “represents an acknowledgement of the systemic discrimination faced by Indigenous persons in the Canadian correctional system.” (at para 53) As a result, it found it incumbent on the CSC to give “meaningful effect” to this principle “in performing all of its functions”. In particular, the majority found that “this provision requires the CSC to ensure that its practices, however neutral they may appear to be, do not discriminate against Indigenous persons.” (at para 54)

The majority observed that although it has been 25 years since this principle was added to the legislation, “there is nothing to suggest that the situation has improved in the realm of corrections” (at para 60). It expressed dismay that “the gap between Indigenous and non-Indigenous offenders has continued to widen on nearly every indicator of correctional performance”. (at para 60) It noted that “Although many factors contributing to the broader issue of Indigenous over-incarceration and alienation from the criminal justice system are beyond the CSC’s control, there are many matters within its control that could mitigate these pressing societal problems. . . Taking reasonable steps to ensure that the CSC uses assessment tools that are free of cultural bias would be one.” (at para 61) [my emphasis]

According to the majority of the Court, therefore, what is required by s. 24(1) of the CCRA is for the CSC to carry out research into whether and to what extent the assessment tools it uses “are subject to cross-cultural variance when applied to Indigenous offenders.” (at para 67) Any further action would depend on the results of the research.

What is interesting here is that the onus is placed on the CSC (influenced by the guiding principles in the CCRA) to take positive steps to verify the validity of the assessment tools on which it relies. The Court does not specify who is meant to carry out the research in question, what standards it should meet, or how extensive it should be. These are important issues. It should be noted that discussions of algorithmic bias often consider solutions involving independent third-party assessment of the algorithms or the data used to develop them.
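To make concrete the kind of research the majority contemplates, here is a minimal, purely hypothetical sketch (not the CSC's actual methodology, and with invented toy data) of how one might test whether a risk tool's predictive validity varies across groups: compute a validity statistic, such as the correlation between risk scores and observed outcomes, separately for each group, and compare.

```python
# Hypothetical illustration of cross-group validity testing for a risk tool.
# All data below is invented for demonstration purposes.
import math

def predictive_validity(scores, outcomes):
    """Point-biserial correlation between risk scores and binary
    outcomes (1 = event observed, 0 = not observed). Values near 1
    indicate the tool predicts strongly; values near 0, poorly."""
    n = len(scores)
    mean_s = sum(scores) / n
    mean_o = sum(outcomes) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(scores, outcomes))
    var_s = sum((s - mean_s) ** 2 for s in scores)
    var_o = sum((o - mean_o) ** 2 for o in outcomes)
    return cov / math.sqrt(var_s * var_o)

# Toy data: the tool tracks outcomes well in group A...
validity_a = predictive_validity([2, 9, 4, 8, 1, 7], [0, 1, 0, 1, 0, 1])
# ...but poorly in group B, despite an identical range of scores.
validity_b = predictive_validity([2, 9, 4, 8, 1, 7], [1, 0, 1, 1, 0, 0])
print(f"Group A validity: {validity_a:.2f}")  # strong positive correlation
print(f"Group B validity: {validity_b:.2f}")  # near zero: weak prediction
```

A real validation study would of course involve proper sampling, larger cohorts, and standard psychometric measures, but even this toy comparison captures the core point in the judgment: a tool can be strongly predictive for one population and weakly predictive for another, and only group-specific research reveals the difference.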

3. The Charter arguments

Counsel for Mr. Ewert raised two Charter arguments. The first, under s. 7, was that reliance on the assessment tools violated his right to liberty and security of the person in a manner that was not in accordance with the principles of fundamental justice. The tools were argued to fall short of the principles of fundamental justice because of their arbitrariness (lacking any rational connection to the government objective) and overbreadth. The court was unanimous in finding that reliance on the tools was not arbitrary, stating that “The finding that there is uncertainty about the extent to which the tests are accurate when applied to Indigenous offenders is not sufficient to establish that there is no rational connection between reliance on the tests and the relevant government objective.” (at para 73) Without further research, the extent and impact of any cultural bias could not be known.

Mr. Ewert also argued that the results of the use of the tools infringed his right to equality under s. 15 of the Charter. The Court gave little time or attention to this argument, finding that there was not enough evidence to show that the tools had a disproportionate impact on Indigenous inmates when compared to non-Indigenous inmates.

The Charter is part of the Constitution and applies only to government action. There are many instances in which governments may come to rely upon algorithmic decision-making. While concerns might be raised about bias and discriminatory impacts from these processes, this case demonstrates the challenge faced by those who would raise such arguments. The decision in Ewert suggests that in order to establish discrimination, it will be necessary either to demonstrate discriminatory impacts or effects, or to show how the algorithm itself and/or the data used to develop it incorporate biases or discriminatory assumptions. Establishing any of these things will impose a significant evidentiary burden on the party raising the issue of discrimination. Even where the Charter does not apply and individuals must rely upon human rights legislation, establishing discrimination with complex (and likely inaccessible or non-transparent) algorithms and data will be highly burdensome.

Concluding thoughts

This case raises important and interesting issues that are relevant in algorithmic decision-making of all kinds. The result obtained in this case favoured Mr. Ewert, but it should be noted that it took him 18 years to achieve this result, and he required the assistance of a dedicated team of lawyers. There is clearly much work to do to ensure that fairness and transparency in algorithmic decision-making are accessible and realizable.

Mr. Ewert’s success was ultimately based, not upon human rights legislation or the Charter, but upon federal legislation which required the keeping of accurate information. As noted above, PIPEDA and the Privacy Act impose a similar requirement on organizations that collect, use or disclose personal information to ensure the accuracy of that information. Using the interpretive approach of the Supreme Court of Canada in Ewert v. Canada, this statutory language may provide a basis for supporting a broader right to fair and unbiased algorithmic decision-making. Yet, as this case also demonstrates, it may be challenging for those who feel they are adversely impacted to make their case, absent evidence of long-standing and widespread concerns about particular tests in specific contexts.

 

Published in Privacy
