Teresa Scassa - Blog

Tuesday, 22 January 2019 16:56

Canada's Shifting Privacy Landscape

Note: This article was originally published by The Lawyer’s Daily (www.thelawyersdaily.ca), part of LexisNexis Canada Inc.

In early January 2019, Bell Canada caught the media spotlight over its “tailored marketing program”. The program will collect massive amounts of personal information, including “Internet browsing, streaming, TV viewing, location information, wireless and household calling patterns, app usage and the account information”. Bell’s background materials explain that “advertising is a reality” and that customers who opt into the program will see ads that are more relevant to their needs or interests. Bell promises that the information will not be shared with third party advertisers; instead it will enable Bell to offer those advertisers the ability to target ads to finely tuned categories of consumers. Once consumers opt in, their consent is presumed for any new services that they add to their account.

This is not the first time Bell has sought to collect vast amounts of data for targeted advertising purposes. In 2015, it terminated its short-lived and controversial “Relevant Ads” program after an investigation initiated by the Privacy Commissioner of Canada found that the “opt out” consent model chosen by Bell was inappropriate given the nature, volume and sensitivity of the information collected. Nevertheless, the Commissioner’s findings acknowledged that “Bell’s objective of maximizing advertising revenue while improving the online experience of customers was a legitimate business objective.”

Bell’s new tailored marketing program is based on “opt in” consent, meaning that consumers must choose to participate and are not automatically enrolled. This change and the apparent acceptance by the Office of the Privacy Commissioner (OPC) in 2015 of the legitimacy of targeted advertising programs suggest that Bell may have brought its scheme within the parameters of the Personal Information Protection and Electronic Documents Act (PIPEDA). Yet media coverage of the new tailored ads program generated public pushback, suggesting that the privacy ground has shifted since 2015.

The rise of big data analytics and the stunning recent growth of artificial intelligence have sharply changed the commercial value of data, its potential uses, and the risks it may pose to individuals and communities. After the Cambridge Analytica scandal, there is also much greater awareness of the harms that can flow from consumer profiling and targeting. While conventional privacy risks of massive personal data collection remain (including the risk of data breaches and enhanced surveillance), there are new risks that impact not just privacy but consumer choice, autonomy, and equality. Data misuse may also have broader impacts than just on individuals; such impacts may include group-based discrimination, and the kind of societal manipulation and disruption evidenced by the Cambridge Analytica scandal. It is not surprising, then, that both the goals and potential harms of targeted advertising may need rethinking, along with the nature and scope of the data on which such advertising relies.

The growth of digital and online services has also led to individuals effectively losing control over their personal information. There are too many privacy policies, they are too long and often obscure, products and services are needed on the fly with little time to reflect, and most policies are “take-it-or-leave-it”. A growing number of voices are suggesting that consumers should have more control over their personal information, including the ability to benefit from its growing commercial value. They argue that companies that offer paid services (such as Bell) should offer rebates in exchange for the collection or use of personal data that goes beyond what is needed for basic service provision. No doubt, such advocates would be dismayed by Bell’s quid pro quo for its collection of massive amounts of detailed and often sensitive personal information: “more relevant ads”. Yet money-for-data schemes raise troubling issues, including the possibility that they could make privacy something that only the well-heeled can afford.

Another approach has been to call for reform of the sadly outdated PIPEDA. Proposals include giving the Privacy Commissioner enhanced enforcement powers, and creating ‘no go zones’ for certain types of information collection or uses. There is also interest in creating new rights such as the right to erasure, data portability, and rights to explanations of automated processing. PIPEDA reform, however, remains a mirage shimmering on the legislative horizon.

Meanwhile, the Privacy Commissioner has been working hard to squeeze the most out of PIPEDA. Among other measures, he has released new Guidelines for Obtaining Meaningful Consent, which took effect on January 1, 2019. These guidelines include a list of “must dos” and “should dos” to guide companies in obtaining adequate consent.

While Bell checks off many of the ‘must do’ boxes with its new program, the Guidelines indicate that “risks of harm and other consequences” of data collection must be made clear to consumers. These risks – which are not detailed in the FAQs related to the program – obviously include the risk of data breach. The collected data may also be of interest to law enforcement, and presumably it would be handed over to police with a warrant. A more complex risk relates to the fact that internet, phone and viewing services are often shared within a household (families or roommates), and targeted ads based on viewing, surfing or location data could result in the disclosure of sensitive personal information to other members of the household.

Massive data collection, profiling and targeting clearly raise issues that go well beyond simple debates over opt-in or opt-out consent. The privacy landscape is changing – both in terms of risks and responses. Those engaged in data collection would be well advised to be attentive to these changes.

Published in Privacy

In Netlink Computer Inc. (Re), the British Columbia Supreme Court dismissed an application for leave to sue a trustee in bankruptcy for the alleged improper disposal of assets of a bankrupt company that contained the personal information of the company’s customers.

The issues at the heart of the application first reached public attention in September 2018, when a security expert described in a blog post how he noticed that servers from the defunct company were listed for sale on Craigslist. Posing as an interested buyer, he examined the computers and found that their unwiped hard drives contained what he reported as significant amounts of sensitive customer data, including credit card information and photographs of customer identification documents. Following the blog post, the RCMP and the BC Privacy Commissioner both launched investigations. Kipling Warner, who had been a customer of the defunct company Netlink, filed lawsuits against Netlink, the trustee in bankruptcy which had disposed of Netlink’s assets, the auction company Able Solutions, which had sold the assets, and Netlink’s landlord. All of the lawsuits include claims of breach of statutory obligations under the Personal Information Protection and Electronic Documents Act, breach of B.C.’s Privacy Act, and breach of B.C.’s Personal Information Protection Act. The plan was to have the lawsuits certified as class action proceedings. The action against Netlink was stayed due to the bankruptcy. The B.C. Supreme Court decision deals only with the action against the trustee, as leave of the court must be obtained in order to sue a trustee in bankruptcy.

As Master Harper explained in his reasons for decision, the threshold for granting leave to sue a trustee in bankruptcy is not high. The evidence presented in the claim must advance a prima facie case. Leave to proceed will be denied if the proposed action is considered frivolous or vexatious, since such a lawsuit would “interfere with the due administration of the bankrupt’s estate by the trustee” (at para 9). Essentially, the court must balance the competing interests of the party suing the trustee and the interest in the efficient and timely wrapping up of the bankrupt’s estate.

The decision to dismiss the application in this case was based on a number of factors. Master Harper was not impressed by the fact that the multiple lawsuits brought against different actors all alleged the same grounds. He described this as a “scattergun approach” that suggested a weak evidentiary foundation. The application was supported by two affidavits: one from Mr. Warner, which he described as being based on inadmissible ‘double hearsay’, and one from the blogger, Mr. Doering. While Master Harper found that the Doering affidavit contained first-hand evidence from Doering’s investigation into the servers sold on Craigslist, he noted that Doering himself had not been convinced by the seller’s statements about how he came to be in possession of the servers. The Master noted that this did not provide a basis for finding that it was the trustee in bankruptcy who was responsible. The Master also noted that although an RCMP investigation had been launched at the time of the blog post, it had since concluded with no charges being laid. The Master’s conclusion was that there was no evidence to support a finding that any possible privacy breach “took place under the Trustee’s ‘supervision and control’.” (at para 58)

Although the application was dismissed, the case does highlight some important concerns about the handling of personal information in bankruptcy proceedings. Not only can customer databases be sold as assets in bankruptcy proceedings; Mr. Doering’s blog post also raised the spectre of computer servers and computer hard drives being disposed of without being properly wiped of the personal data that they contain. Although he dismissed the application to file suit against the Trustee, Master Harper did express some concern about the Trustee’s lack of engagement with some of the issues raised by Mr. Warner. He noted that no evidence was provided by the Trustee “as to how, or if, the Trustee seeks to protect the privacy of customers when a bankrupt’s assets (including customer information) are sold in the bankruptcy process.” (at para 44) This is an important issue, but it is one on which there is relatively little information or discussion. A 2009 blog post from Quebec flags some of the concerns raised about privacy in bankruptcy proceedings; a more recent post suggests that while larger firms are more sophisticated in how they deal with personal information assets, the data in the hands of small and medium-sized firms that experience bankruptcy may be more vulnerable.

Published in Privacy

Digital and data governance is challenging at the best of times. It has been particularly challenging in the context of Sidewalk Labs’ proposed Quayside development for a number of reasons. One of these is (at least from my point of view) an ongoing lack of clarity about who will ‘own’ or have custody or control over all of the data collected in the so-called smart city. The answer to this question is a fundamentally important piece of the data governance puzzle.

In Canada, personal data protection is a bit of a legislative patchwork. In Ontario, the collection, use or disclosure of personal information by the private sector, and in the course of commercial activity, is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA). However, the collection, use and disclosure of personal data by municipalities and their agencies is governed by the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA), while the collection, use and disclosure of personal data by the province is subject to the Freedom of Information and Protection of Privacy Act (FIPPA). The latter two statutes – MFIPPA and FIPPA – contain other data governance requirements for public sector data. These relate to transparency, and include rules around access to information. The City of Toronto also has information management policies and protocols, including its Open Data Policy.
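To see the patchwork at a glance, here is a minimal sketch (my own illustration, not anything drawn from the statutes themselves) of the decision rule the paragraph above describes. The custodian categories are assumptions, and in practice determining who has custody or control of data is a legal question, not a lookup:

```python
# Illustrative only: a toy restatement of the Ontario privacy 'patchwork'.
# The custodian categories are my own simplification.

def applicable_privacy_statute(custodian: str) -> str:
    statutes = {
        "private_sector_commercial": "PIPEDA",  # federal private sector law
        "municipality": "MFIPPA",               # municipal public sector law
        "province": "FIPPA",                    # provincial public sector law
    }
    if custodian not in statutes:
        raise ValueError("custody or control must be determined first")
    return statutes[custodian]

# The Quayside problem in miniature: this function cannot be called yet,
# because no one has said who will have custody or control of the data.
```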

The documentation prepared for the December 13, 2018 Digital Strategy Advisory Panel (DSAP) meeting includes a slide that sets out implementation requirements for the Quayside development plan in relation to data and digital governance. A key requirement is: “Compliance with or exceedance of all applicable laws, regulations, policy documents and contractual obligations” (page 95). This is fine in principle, but it is not enough on its own to say that the Quayside project must “comply with all applicable laws”. At some point, it is necessary to identify what those applicable laws are. This has yet to be done. And the answer to the question of which laws apply in the context of privacy, transparency and data governance, depends upon who ultimately is considered to ‘own’ or have ‘custody or control’ of the data.

So – whose data is it? It is troubling that this remains unclear even at this stage in the discussions. The fact that Sidewalk Labs has been asked to propose a data governance scheme suggests that Sidewalk and Waterfront may be operating under the assumption that the data collected in the smart city development will be private sector data. There are indications buried in presentations and documentation that also suggest that Sidewalk Labs considers that it will ‘own’ the data. There is a great deal of talk in meetings and in documents about PIPEDA, which also indicates a shared assumption between the parties that the data is private sector data. But what is the basis for this assumption? Governments can contract with a private sector company for data collection, data processing or data stewardship – but the private sector company can still be considered to act as an agent of the government, with the data being legally under the custody or control of the government and subject to public sector privacy and freedom of information laws. The presence of a private sector actor does not necessarily make the data private sector data.

If the data is private sector data, then PIPEDA will apply, and there will be no applicable access to information regime. PIPEDA also has different rules regarding consent to collection than are found in MFIPPA. If the data is considered ultimately to be municipal data, then it will be subject to MFIPPA’s rules regarding access and privacy, and it will be governed by the City of Toronto’s information management policies. These are very different regimes, and so the question of which one applies is quite fundamental. It is time for there to be a clear and forthright answer to this question.

Published in Privacy

A recent Federal Court decision highlights the risks to privacy that could flow from unrestrained access by government to data in the hands of private sector companies. It also demonstrates the importance of judicial oversight in ensuring transparency and the protection of privacy.

The Income Tax Act (ITA) gives the Minister of National Revenue (MNR) the power to seek information held by third parties where it is relevant to the administration of the income tax regime. However, where the information sought is about unnamed persons, the law requires judicial oversight. A judge of the Federal Court must review and approve the information “requirement”. Just such a matter arose in Canada (Minister of National Revenue) v. Hydro-Québec. The MNR sought information from Hydro-Québec, the province’s electrical utility, about a large number of its business customers. Only a few classes of customers, such as heavy industries that consumed very large amounts of electricity, were excluded. Hydro itself did not object to the request and was prepared to fulfil it if ordered to do so by the Federal Court. The request was considered by Justice Roy, who noted that because the information was about unnamed and therefore unrepresented persons, it was “up to the Court to consider their interests.” (at para 5)

Under s. 231.2(3) of the ITA, before ordering the disclosure of information about unnamed persons, a judge must be satisfied that:

(a) the person or group is ascertainable; and

(b) the requirement is made to verify compliance by the person or persons with any duty or obligation under this Act.

The information sought from Hydro in digital format included customer names, business numbers, full billing addresses, addresses of each place where electricity is consumed, telephone numbers associated with the account, billing start dates, and, if applicable, end dates, and any late payment notices sent to the customer.
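To give a sense of the scope of the requirement, here is a purely hypothetical sketch of what a single record in the requested dataset might look like. The field names and types are my assumptions based on the list above; the decision does not specify a format:

```python
# Hypothetical sketch of one customer record in the dataset the MNR sought.
# Field names and types are assumptions, not the actual requirement.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HydroCustomerRecord:
    customer_name: str
    business_number: str
    billing_address: str
    service_addresses: List[str]     # each place where electricity is consumed
    phone_numbers: List[str]         # numbers associated with the account
    billing_start_date: str
    billing_end_date: Optional[str]  # if applicable
    late_payment_notices: List[str]  # any late payment notices sent
```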

Justice Roy noted that no information had been provided to the court to indicate whether the MNR had any suspicions about the tax compliance of business customers of Hydro-Québec. Nor was there much detail about what the MNR planned to do with the information. The documents provided by the MNR, as summarized by the Court, stated that the MNR was “looking to identify those who seem to be carrying on a business but failed to file all the required income tax returns.” (at para 14) However, Justice Roy noted that there were clearly also plans to share the information with other groups at the Canada Revenue Agency (CRA). These groups would use the information to determine “whether the individuals and companies complied with their obligations under the ITA and the ETA”. (at para 14)

Justice Roy was sympathetic to the need of government to have powerful means of enforcing tax laws that depend upon self-reporting of income. However, he found that what the MNR was attempting to do under s. 231.2 went too far. He ruled that the words used in that provision had to be interpreted in light of “the right of everyone to be left alone by the state”. (at para 28) He observed that it is clear from the wording of the Act that “Parliament wanted to limit the scope of the Minister’s powers, extensive as they are.” (at para 68)

Justice Roy carefully reviewed past jurisprudence interpreting s. 231.2(3). He noted that the section has always received a strict interpretation by judges. In past cases where orders had been issued, the groups of unnamed persons about whom information was sought were clearly ascertainable, and the information sought was “directly related to these taxpayers’ tax status because it is financial in nature.” (at para 63) In the present case, he found that the group was not ascertainable, and the information sought “has nothing to do with tax-status.” (at para 63)

In his view, the aim of the request was to determine the identity of business customers of Hydro-Québec. The information was not sought in relation to a good faith audit with a proper factual basis. Because it was a fishing expedition meant to determine who might suitably be audited, the group of individuals identified by Hydro-Québec could not be considered “ascertainable”, as required by the law. Justice Roy noted that no information was provided to demonstrate what “business customer” meant. He observed that “the Minister would render the concept of “ascertainable group” meaningless if, in the context of the ITA, she may claim that any group is an ascertainable group.” (at para 78) He opined that giving such broad meaning to “ascertainable” could be an abuse that would lead to violations of privacy by the state.

Justice Roy also found that the second condition of s. 231.2(3) was not met. Section 231.2(3)(b) required that the information be sought in order “to verify compliance by the person or persons in the group with any duty or obligation under this Act.” He observed that the MNR was seeking an interpretation of this provision that would amount to: “Any information the Minister may consider directly or indirectly useful”. (at para 80) Justice Roy favoured a much more restrictive interpretation, limiting it to information that could “shed light on compliance with the Act.” (at para 80) He found that “the knowledge of who has a business account with Hydro-Québec does not meet the requirement of a more direct connection between the information and documents and compliance with the Act.” (at para 80)

The MNR had argued that if the two conditions of s. 231.2(3) were met, then a judge was required to issue the authorization. Because Justice Roy found the two conditions were not met, the argument was moot. Nevertheless, he noted that even if he had found the conditions to be met, he would still have had the discretion to deny the authorization if granting it would harm the public interest. In this case, there would be a considerable invasion of privacy “given the number of people indiscriminately included in the requirement for which authorization of the Court is being sought.” (at para 88) He also found that the fact that digital data was sought increased the general risk of harm. He observed that “the applicant chose not to restrict the use she could make of the large quantity of information she received” (at para 91) and that it was clearly planned that the information would be shared within the CRA. Justice Roy concluded that even if he erred in his interpretation of the criteria in s. 231.2(3), and these criteria had to be given a broad meaning, he would still not have granted the authorization on the basis that “judicial intervention is required to prevent such an invasion of the privacy of many people in Quebec.” (at para 96) Such intervention would particularly be required where “the fishing expedition is of unprecedented magnitude and the information being sought is far from serving to verify compliance with the Act.” (at para 96)

This is a strong decision which clearly protects the public interest. It serves to highlight the privacy risks in an era where both private and public sectors amass vast quantities of personal information in digital form. Although the ITA provides a framework to ensure judicial oversight in order to limit potential abuses, there are still far too many other contexts where information flows freely and where there may be insufficient oversight, transparency or accountability.

 

Published in Privacy

A recent Finding from the Office of the Privacy Commissioner of Canada contains a consideration of the meaning of “publicly available information”, particularly as it relates to social media profiles. This issue is particularly significant given a recent recommendation by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI) in its Report on PIPEDA reform. PIPEDA currently contains a very narrowly framed exception to the requirement of consent for “publicly available information”. ETHI has recommended amending the definition to make it “technologically neutral”. As I argued here, such a change would make it open season for the collection, use and disclosure of the social media profiles of Canadians.

The Finding, issued on June 12, 2018, came after multiple complaints were filed by Canadians about the practices of a New Zealand-based social media company, Profile Technology Ltd (PTL). The company had obtained Facebook user profile data from 2007 and 2008 under an agreement with Facebook. While its plan might originally have been to create a powerful search engine for Facebook, in 2011 the company launched its own social media platform, and used the Facebook data to populate that platform with profiles. Individuals whose profiles were created on the site had the option of ‘claiming’ them. PTL also provided two avenues for individuals who wished to delete the profiles. If an email address had been part of the original data obtained from Facebook and was associated with the PTL profile, a user could log in using that email address and delete the account. If no email address was associated with the profile, the company required individuals to set up a helpdesk ticket and to provide copies of official photo identification. A number of the complainants to the OPC indicated that they were unwilling to share their photo IDs with a company that had already collected, used and disclosed their personal information without their consent.

The complainants’ concerns were not simply that their personal information had been taken and used to populate a new social media platform without their consent. They also felt harmed by the fact that the data used by PTL was from 2007-2008, and did not reflect any changes or choices they had since made. One complaint received by the OPC related to the fact that PTL had reproduced a group that had been created on Facebook, but that had since been deleted from Facebook. Within this group, allegations had been made about the complainant that he/she considered defamatory and bullying. The complainant objected to the fact that the group persisted on PTL and that the PTL platform did not permit changes to public groups at the behest of single individuals, on the basis that it treated the group description “as part of the profile of every person who has joined that group, therefore modifying the group would be like modifying all of those people’s profiles and we cannot modify their profiles without their consent.” (at para 55)

It should be noted that although the data was initially obtained by PTL from Facebook under licence from Facebook, Facebook’s position was that PTL had used the data in violation of the licence terms. Facebook had commenced proceedings against PTL in 2013 which resulted in a settlement agreement. There was some back and forth over whether the terms of the agreement had been met, but no information was available regarding the ultimate resolution.

The Finding addresses a number of interesting issues. These include the jurisdiction of the OPC to consider this complaint about a New Zealand based company, the sufficiency of consent, and data retention limits. This post focuses only on the issue of whether social media profiles are “publicly available information” within the meaning of PIPEDA.

PTL argued that it was entitled to benefit from the “publicly available information” exception to the requirement for consent for collection and use of personal information because the Facebook profiles of the complainants were “publicly available information”. The OPC disagreed. It noted that the exception for “publicly available information”, found in ss. 7(1)(d) and 7(2)(c.1) of PIPEDA, is defined by regulation. The applicable provision is s. 1(e) of the Regulations Specifying Publicly Available Information, which requires that “the personal information must appear in a publication, the publication must be available to the public, and the personal information has to have been provided by the individual.” (at para 87) The OPC rejected PTL’s argument that “publication” included public Facebook profiles. In its view, the interpretation of “publicly available information” must be “in light of the scheme of the Act, its objects, and the intention of the legislature.” (at para 89) It opined that neither a Facebook profile nor a ‘group’ was a publication. It noted that the regulation makes it clear that “publicly available information” must receive a restrictive interpretation, and reflects “a recognition that information that may be in the public domain is still worthy of privacy protection.” (at para 90) The narrow interpretation of this exception to consent is consistent with the fact that PIPEDA has been found to be quasi-constitutional legislation.

In finding that the Facebook profile information was not publicly available information, the OPC considered that the profiles at issue “were created at a time when Facebook was relatively new and its policies were in flux.” (at para 92) Thus it would be difficult to determine that the intention of the individuals who created profiles at that time was to share them broadly and publicly. Further, at the time the profiles were created, they were indexable by search engines by default. In an earlier Finding, the OPC had determined that this default setting “would not have been consistent with users’ reasonable expectations and was not fully explained to users” (at para 92). In addition, the OPC noted that Facebook profiles were dynamic, and that their ‘owners’ could update or change them at will. In such circumstances, “treating a Facebook profile as a publication would be counter to the intention of the Act, undermining the control users otherwise maintain over their information at the source.” (at para 93) This is an interesting point, as it suggests that the dynamic nature of a person’s online profile prevents it from being considered a publication – it is more like an extension of a user’s personality or self-expression.

The OPC also noted that even though the profile information was public, to qualify for the exception it had to be contributed by the individual. This is not always the case with profile information – in some cases, for example, profiles will include photographs that contain the personal information of third parties.

This Finding, which is not a decision and is not binding on anyone, shows how the OPC interprets the “publicly available information” exception in its home statute. A few things are interesting to note:

· The OPC finds that social media profiles (in this case from Facebook) are different from “publications” in the sense that they are dynamic and reflect an individual’s changing self-expression.

· Allowing the capture and re-use, without consent, of self-expression from a particular point in time robs the individual not only of control over their personal information but also of control over how they present themselves to the public. This too makes profile data different from other forms of “publicly available information” such as telephone or business directory information, or information published in newspapers or magazines.

· The OPC’s discussion of Facebook’s problematic privacy practices at the time the profiles were created muddies the discussion of “publicly available information”. A finding that Facebook had appropriate rules of consent should not change the fact that social media profiles should not be considered “publicly available information” for the purposes of the exception.

 

It is also worth noting that a complaint against PTL to the New Zealand Office of the Privacy Commissioner proceeded on the assumption that PTL did not require consent because the information was publicly available. In fact, the New Zealand Commissioner ruled that no breach had taken place.

Given the ETHI Report’s recommendation, it is important to keep in mind that the definition of “publicly available information” could be modified (although the government’s response to the ETHI report indicates some reservations about the recommendation to change the definition of publicly available information). Because the definition is found in a regulation, a modification would not require legislative amendment. As is clear from the ETHI report, there are a number of industries and organizations that would love to be able to harvest and use social media platform personal information without the need to obtain consent. Vigilance is required to ensure that these regulations are not altered in a way that dramatically undermines privacy protection.

 

Published in Privacy

The pressure is on for Canada to amend its Personal Information Protection and Electronic Documents Act. The legislation, by any measure, is sorely out of date and not up to the task of protecting privacy in the big data era. We know this well enough – the House of Commons ETHI Committee recently issued a report calling for reform, and the government, in its response, has acknowledged the need for changes to the law. The current and past privacy Commissioners have also repeatedly called for reform, as have privacy experts. There are many deficiencies with the law – one very significant one is the lack of serious measures to enforce privacy obligations. In this regard, a recent private member’s bill proposes amendments that would give the Commissioner much more substantial powers of enforcement. Other deficiencies can be measured against the EU’s General Data Protection Regulation (GDPR). If Canada cannot meet the levels of protection offered by the GDPR, personal data flows from the EU to Canada could be substantially disrupted. Among other things, the GDPR addresses issues such as the right to be forgotten, the right to an explanation of how automated decisions are reached, data portability rights, and many other measures specifically designed to address the privacy challenges of the big data era.

There is no doubt that these issues will be the subject of much discussion and may well feature in any proposals to reform PIPEDA that will be tabled in Parliament, perhaps as early as this autumn. The goal of this post is not to engage with these specific issues of reform, as important as they are; rather, it is to tackle another very basic problem with PIPEDA and to argue that it too should be addressed in any legislative reform. Simply put, PIPEDA is a dog’s-breakfast statute that is difficult to read and understand. It needs a top-to-bottom rewriting according to the best principles of plain-language drafting.

PIPEDA’s drafting has been the subject of commentary by judges of the Federal Court who have the task of interpreting it. For example, in Miglialo v. Royal Bank of Canada, Justice Roy described PIPEDA as “a rather peculiar piece of legislation” and “not an easily accessible statute”. The Federal Court of Appeal in Telus v. Englander observed that PIPEDA was a “compromise as to form” and that “The Court is sometimes left with little, if any guidance at all”. In Johnson v. Bell Canada, Justice Zinn observed: “While Part I of the Act is drafted in the usual manner of legislation, Schedule 1, which was borrowed from the CSA Standard, is notably not drafted following any legislative convention.” In Fahmy v. Royal Bank of Canada, Justice Roy noted that it was “hardly surprising” “[t]hat a party would misunderstand the scope of the Act.”

To understand why PIPEDA is such a mess requires some history. PIPEDA was passed by Parliament in 2000. Its enactment followed closely on the heels of the EU’s Data Protection Directive, which, like the GDPR, threatened to disrupt data flows to countries that did not meet minimum standards of private sector data protection. Canada needed private sector data protection legislation and it needed it fast. It was not clear that the federal government really had jurisdiction over private sector data protection, but it was felt that the rapid action needed did not leave time to develop cooperative approaches with the provinces. The private sector did not want such legislation. As a compromise, the government decided to use the CSA Model Code – a voluntary privacy code developed with multi-stakeholder input – as the normative heart of the statute. There had been enough buy-in with the Model Code that the government felt it would avoid excessive pushback from the private sector. The Code, therefore, originally drafted to provide voluntary guidance, was turned into law. The prime minister at the time, the Hon. Jean Chrétien, did not want Parliament’s agenda overburdened with new bills, so the data protection bill was grafted onto another bill addressing the completely different issue of electronic documents (hence the long, unwieldy name that gives rise to the PIPEDA acronym).

The result is a legislative Frankenstein. Keep in mind that this is a law aimed at protecting individual privacy. It is a kind of consumer-protection statute that should be user-friendly, but it is not. Most applicants to the Federal Court under PIPEDA are self-represented, and they clearly struggle with the legislation. The sad irony is that if a consumer wants to complain to the Privacy Commissioner about a company’s over-long, horribly convoluted, impossible to understand, non-transparent privacy policy, he or she will have to wade through a statute that is like a performance-art parody of that same privacy policy. Of course, the problem is not just one for ordinary consumers. Lawyers and even judges (as evidenced above) find PIPEDA to be impenetrable.

By way of illustration, if you are concerned about your privacy rights and want to know what they are, you will not find them in the statute itself. Instead, the normative provisions are in the CSA Model Code, which is appended as Schedule I of the Act. Part I of the Act contains some definitions, a few general provisions, and a whole raft of exceptions to the principle of consent. Section 6.1 tells you what consent means “for the purposes of clause 4.3 of Schedule 1”, but you will have to wait until you get to the schedule to get more details on consent. On your way to the Schedule you might get tangled up in Part II of the Act, which is about electronic documents and thus thoroughly irrelevant.

Because the Model Code was just that – a model code – it was drafted in a more conversational style, and includes notes that provide examples and illustrations. For the purposes of the statute, some of these notes were considered acceptable – others not. Hence, you will find the following statement in s. 2(2) of PIPEDA: “In this Part, a reference to clause 4.3 or 4.9 of Schedule 1 does not include a reference to the note that accompanies that clause.” So put a yellow sticky tab on clauses 4.3 and 4.9 to remind you not to consider those notes as part of the law (even though they are in the Schedule).

Then there is this: s. 5(2) of PIPEDA tells us: “The word should, when used in Schedule 1, indicates a recommendation and does not impose an obligation.” So use those sticky notes again. Or cross out “should” each of the fourteen times you find it in Schedule 1, and replace it with “may”.

PIPEDA also provides in ss. 7(4) and 7(5) that certain actions are permissible despite what is said in clause 4.5 of Schedule 1. Similar revisionism is found in s. 7.4. While clause 4.9 of Schedule 1 talks about requests for access to personal information made by individuals, section 8(1) in Part 1 of the Act tells us those requests have to be made in writing, and s. 8 goes on to provide further details on the right of access. Section 9 qualifies the right of access with “Despite clause 4.9 of Schedule 1….”. You can begin to see how PIPEDA may have contributed significantly to the sales of sticky notes.

If an individual files a complaint and is not satisfied with the Commissioner’s report of findings, he or she has a right to take the matter to the Federal Court if the issue fits within s. 14, which reads:

 

14 (1) A complainant may, after receiving the Commissioner’s report or being notified under subsection 12.2(3) that the investigation of the complaint has been discontinued, apply to the Court for a hearing in respect of any matter in respect of which the complaint was made, or that is referred to in the Commissioner’s report, and that is referred to in clause 4.1.3, 4.2, 4.3.3, 4.4, 4.6, 4.7 or 4.8 of Schedule 1, in clause 4.3, 4.5 or 4.9 of that Schedule as modified or clarified by Division 1 or 1.1, in subsection 5(3) or 8(6) or (7), in section 10 or in Division 1.1. [My emphasis]

 

Enough said.

There are a number of very important substantive privacy issues brought about by the big data era. We are inevitably going to see PIPEDA reform in the relatively near future, as a means of not only addressing these issues but of keeping us on the right side of the GDPR. As we move towards major PIPEDA reform, however, the government should seriously consider a crisp rewrite of the legislation. The maturity of Canada’s data protection regime should be made manifest in a statute that no longer needs to lean on the crutch of a model code for its legitimacy. Quite apart from the substance of such a document, it should:

 

· Set out its basic data protection principles in the body of the statute, near the front, and in a manner that is clear, readable and accessible to a lay public.

· Be a free-standing statute that deals with data protection and that does not deal with unrelated extraneous matters (such as electronic documents).

 

It is not a big ask. British Columbia and Alberta managed to do it when they created their own substantially similar data protection statutes. Canadians deserve good privacy legislation, and they deserve to have it drafted in a manner that is clear and accessible. Rewriting PIPEDA (and hence renaming it) should be part of the coming legislative reform.

Published in Privacy

The issue of the application of privacy/data protection laws to political parties in Canada is not new – Colin Bennett and Robin Bayley wrote a report on this issue for the Office of the Privacy Commissioner of Canada in 2012. It gained new momentum in the wake of the Cambridge Analytica scandal, which brought home to the public in a fairly dramatic way the extent to which personal information might be used not just to profile and target individuals, but to sway their opinions in order to influence the outcome of elections.

In the fallout from Cambridge Analytica there have been a couple of recent developments in Canada around the application of privacy laws to political parties. First, the federal government included some remarkably tepid provisions in Bill C-76 on Elections Act reform. These provisions, which I critique here, require parties to adopt and post a privacy policy, but otherwise contain no normative requirements. In other words, they do not hold political parties to any particular rules or norms regarding their collection, use or disclosure of personal information. There is also no provision for independent oversight. The only complaint that can be made – to the Commissioner of Elections – is about the failure to adopt and post a privacy policy. The federal government has expressed surprise at the negative reaction these proposed amendments have received and has indicated a willingness to do something more, but that something has not yet materialized. Meanwhile, it is being reported that the Bill, even as it stands, is not likely to clear the Senate before the summer recess, putting in doubt the ability of any amendments to be in place and implemented in time for the next election.

Meanwhile, on June 6, 2018, the Quebec government introduced Bill no 188 into the National Assembly. If passed, this Bill would give the Quebec Director General of Elections the duty to examine and evaluate provincial political parties’ practices regarding the collection, use and disclosure of personal information. The Director General must also assess their information security practices. If the Bill is passed into law, he will be required to report his findings to the National Assembly no later than the first of October 2019. The Director General will make any recommendations in this report that he feels are appropriate in the circumstances. The Bill also modifies the laws applicable to municipal and school board elections so that the Director General can be directed by the National Assembly to conduct a similar assessment and report back. While this Bill would not make any changes to current practices in the short term, it is clearly aimed at gathering data with a view to informing any future legislative reform that might be deemed necessary.

 

Published in Privacy

In the wake of the Cambridge Analytica scandal, Canada’s federal government has come under increased criticism for the fact that Canadian political parties are not subject to existing privacy legislation. This criticism is not new. For example, Prof. Colin Bennett and Robin Bayley wrote a report on the issue for the Office of the Privacy Commissioner of Canada in 2012.

The government’s response, if it can be called a response, has come in Bill C-76, the Act to amend the Canada Elections Act and other Acts and to make certain consequential amendments, which was introduced in the House of Commons on April 30, 2018. This Bill would require all federal political parties to have privacy policies in order to become or remain registered. It also sets out what must be included in the privacy policy.

By way of preamble to this critique of the legislative half-measures introduced by the government, it is important to note that Canada already has both a public sector Privacy Act and a private sector Personal Information Protection and Electronic Documents Act (PIPEDA). Each of these statutes sets out rules for collection, use and disclosure of personal information and each provides for an oversight regime and a complaints process. Both statutes have been the subject of substantial critique for not going far enough to address privacy concerns, particularly in the age of big data. In February 2018, the House of Commons Standing Committee on Access to Information, Privacy and Ethics issued a report on PIPEDA, and recommended some significant amendments to adapt the statute to protecting privacy in a big data environment. Thus, the context in which the provisions regarding political parties’ privacy obligations are introduced is one in which a) we already have privacy laws that set data protection standards; b) these laws are generally considered to be in need of significant amendment to better address privacy; and c) the Cambridge Analytica scandal has revealed just how complex, problematic and damaging the misuse of personal information in the context of elections can be.

Once this context is understood, the privacy ‘obligations’ that the government proposes to place on political parties in the proposed amendments can be seen for what they are: an almost contemptuous and entirely cosmetic quick fix designed to deflect attention from the very serious privacy issues raised by the use of personal information by political parties.

First, the basic requirement placed on political parties will be to have a privacy policy. The policy will also have to be published on the party’s internet site. That’s pretty much it. Are you feeling better about your privacy yet?

To be fair, the Bill also specifies what the policy must contain:

(k) the party’s policy for the protection of personal information [will include]:

(i) a statement indicating the types of personal information that the party collects and how it collects that information,

(ii) a statement indicating how the party protects personal information under its control,

(iii) a statement indicating how the party uses personal information under its control and under what circumstances that personal information may be sold to any person or entity,

(iv) a statement indicating the training concerning the collection and use of personal information to be given to any employee of the party who could have access to personal information under the party’s control,

(v) a statement indicating the party’s practices concerning

(A) the collection and use of personal information created from online activity, and

(B) its use of cookies, and

(vi) the name and contact information of a person to whom concerns regarding the party’s policy for the protection of personal information can be addressed; and

(l) the address of the page — accessible to the public — on the party’s Internet site where its policy for the protection of personal information is published under subsection (4).

It is particularly noteworthy that unlike PIPEDA (or any other data protection law, for that matter), there is no requirement to obtain consent to any collection, use or disclosure of personal information. A party’s policy simply has to tell you what information it collects and how. Political parties are also not subject to any of the other limitations found in PIPEDA. There is no requirement that the purposes for collection, use or disclosure meet a reasonableness standard; there is no requirement to limit collection only to what is necessary to achieve any stated purposes; there is nothing on data retention limits; and there is no right of access or correction. And, while there is a requirement to identify a contact person to whom any concerns or complaints may be addressed, there is no oversight of a party’s compliance with their policy. (Note that it would be impossible to oversee compliance with any actual norms, since none are imposed). There is also no external complaints mechanism available. If a party fails to comply with requirements to have a policy, post it, and provide notice of any changes, it can be deregistered. That’s about it.

This is clearly not good enough. It is not what Canadians need or deserve. It does not even come close to meeting the standards set in PIPEDA, which is itself badly in need of an overhaul. The data resources and data analytics tools available to political parties have created a context in which data protection has become important not just to personal privacy values but to important public values as well, such as the integrity and fairness of elections. Not only are these proposed amendments insufficient to meet the privacy needs of Canadians, they are shockingly cynical in their attempt to derail the calls for serious action on this issue.

Published in Privacy

This post is the second in a series that looks at the recommendations contained in the report on the Personal Information Protection and Electronic Documents Act (PIPEDA) issued by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI). My first post considered ETHI’s recommendation to retain consent at the heart of PIPEDA with some enhancements. At the same time, ETHI recommended some new exceptions to consent. This post looks at one of these – the exception relating to publicly available information.

Although individual consent is at the heart of the PIPEDA model – and ETHI would keep it there – the growing number of exceptions to consent in PIPEDA is reason for concern. In fact, the last round of amendments to PIPEDA in the 2015 Digital Privacy Act saw the addition of ten new exceptions to consent. While some of these were relatively uncontroversial (e.g. making it clear that consent was not needed to communicate with the next of kin of an injured, ill or deceased person), others were much more substantial in nature. In its 2018 report, ETHI has made several recommendations that continue this trend – creating new contexts in which individual consent will no longer be required for the collection, use or disclosure of personal information. In this post, I focus on one of these – the recommendation that the exception to consent for the use of “publicly available information” be dramatically expanded to include content shared by individuals on social media. In light of the recent Facebook/Cambridge Analytica scandal, this recommended change deserves some serious resistance.

PIPEDA already contains a carefully limited exception to consent to the collection, use or disclosure of personal information where it is “publicly available” as defined in the Regulations Specifying Publicly Available Information. These regulations identify five narrowly construed categories of publicly available information. The first is telephone directory information (but only where the subscriber has the option to opt out of being included in the directory). The second is name and contact information that is included in a professional business directory listing that is available to the public; nevertheless, such information can only be collected, used or disclosed without consent where it relates “directly to the purpose for which the information appears in the registry” (i.e. contacting the individual for business purposes). There is a similar exception for information in a public registry established by law (for example, a land titles registry); this information can similarly only be collected, used or disclosed for purposes related to those for which it appears in the record or document. Thus, consent is not required to collect land registry information for the purposes of concluding a real estate transaction. However, it is not permitted to extract personal information from such a registry, without consent, to use for marketing. A fourth category of publicly available personal information is information appearing in court or tribunal records or documents. This respects the open courts principle, but the exception is limited to collection, use or disclosure that relates directly to the purpose for which the information appears in the record or document. This means that online repositories of court and tribunal decisions cannot be mined for personal information; however, personal information can be used without consent to further the open courts principle (for example, a reporter gathering information to use in a newspaper story).
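The purpose-linked structure these categories share can be summarized in a short sketch. This is my own loose paraphrase for illustration only – the category names and the purpose test are simplifications, not the text of the Regulations:

```python
# Illustrative simplification of the first four categories in the Regulations
# Specifying Publicly Available Information. Not the regulatory text.

PURPOSE_LINKED = {"business_directory", "public_registry", "court_record"}

def consent_exception_applies(category: str, use_purpose: str,
                              listed_purpose: str,
                              subscriber_can_opt_out: bool = True) -> bool:
    if category == "telephone_directory":
        # Applies only where subscribers can opt out of being listed.
        return subscriber_can_opt_out
    if category in PURPOSE_LINKED:
        # e.g. land registry data to close a real estate deal: yes;
        # the same data harvested for marketing: no.
        return use_purpose == listed_purpose
    return False
```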

This brings us to the fifth category of publicly available information – the one ETHI would explode to include vast quantities of personal information. Currently, this category reads:

e) personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

ETHI’s recommendation is to make this “technologically neutral” by having it include content shared by individuals over social media. According to ETHI, a number of witnesses considered this provision to be “obsolete” (at p. 27). Perhaps not surprisingly, these witnesses represented organizations and associations whose members would love to have unrestricted access to the contents of Canadians’ social media feeds and pages. The Privacy Commissioner was less impressed with the arguments for change. He stated: “we caution against the common misconception that simply because personal information happens to be generally accessible online, there is no privacy interest attached to it.” (at p. 28) The Commissioner recommended careful study with a view to balancing “fundamental individual and societal rights.” This cautious approach seems to have been ignored. The scope of ETHI’s proposed change is particularly disturbing given the very carefully constrained exceptions that currently exist for publicly available information. A review of the Regulations should tell any reader that this was always intended to be a very narrow exception with tightly drawn boundaries; it was never meant to create a free-for-all open season on the personal information of Canadians.

The Cambridge Analytica scandal reveals the harms that can flow from unrestrained access to the sensitive and wide-ranging types and volumes of personal information that are found on social media sites. Yet even as that scandal unfolds, it is important to note that everyone (including Facebook) seems to agree that user consent was both required and abused. What ETHI recommends is an exception that would obviate the need for consent to the collection, use and disclosure of the personal information of Canadians shared on social media platforms. This could not be more unwelcome and inappropriate.

Counsel for the Canadian Life and Health Insurance Association, in addressing ETHI, indicated that the current exception “no longer reflects reality or the expectations of the individuals it is intended to protect.” (at p. 27) A number of industry representatives also spoke of the need to make the exception “technologically neutral”, a line that ETHI clearly bought when it repeated this catch phrase in its recommendation. The facile rhetoric of technological neutrality should always be approached with enormous caution. The ‘old tech’ of books and magazines involved: a) relatively little exposure of personal information; b) carefully mediated exposure (through editorial review, fact-checking, ethical policies, etc.); and c) time and space limitations that tended to focus publication on the public interest. Social media is something completely different. It is a means of peer-to-peer communication and interaction which is entirely different in character and purpose from a magazine or newspaper. To treat it as the digital equivalent is not technological neutrality, it is technological nonsensicality.

It is important to remember that while the exception to consent for publicly available information exists in PIPEDA, the definition of its parameters is found in a regulation. Amendments to legislation require a long and public process; however, changes to regulations can happen much more quickly and with less room for public input. This recommendation by ETHI is therefore doubly disturbing – it could have a dramatic impact on the privacy rights of Canadians, and could do so more quickly and quietly than through the regular legislative process. The Privacy Commissioner was entirely correct in stating that there should be no change to these regulations without careful consideration and a balancing of interests, and perhaps no change at all.

Published in Privacy

The recent scandal regarding the harvesting of the personal information of millions of Facebook users, used to direct content at them aimed at influencing their voting behavior, raises some interesting questions about the robustness of our data protection frameworks. In this case, a UK-based professor collected personal information via an app, ostensibly for non-commercial research purposes. In doing so, he was bound by Facebook’s terms of service. The data collection took the form of an online quiz. Participants were paid to answer a series of questions, and in this sense they consented to and were compensated for the collection of their personal information. However, their consent extended only to the use of this information for non-commercial academic research. In addition, the app was able to harvest personal information from the Facebook friends of the study participants, without the knowledge or consent of those individuals. The professor later sold his app and his data to Cambridge Analytica, which used them to target individuals with propaganda aimed at influencing their votes in the 2016 US presidential election.
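To make the mechanics concrete, here is a minimal sketch, my own illustration and not the app’s actual code, of how, under the permission model of Facebook’s Graph API v1.0 as it existed at the time, a single participant’s access token could be used to pull data about friends who had never installed the app:

```python
# Illustration only: how a quiz app could reach beyond its consenting
# participants under the old Graph API v1.0 permission model.
import requests

GRAPH = "https://graph.facebook.com/v1.0"

def harvest(participant_token):
    # Data the participant knowingly shared with the app.
    me = requests.get(f"{GRAPH}/me",
                      params={"access_token": participant_token}).json()
    # With extended "friends_*" permissions, the same token could also
    # return data about the participant's friends, who were never asked.
    friends = requests.get(f"{GRAPH}/me/friends",
                           params={"access_token": participant_token}).json()
    return me, friends.get("data", [])
```

A few hundred thousand consenting participants, each with hundreds of friends, is how a paid quiz scaled into profiles on tens of millions of people.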

A first issue raised by this case is a tip-of-the-iceberg issue. Social media platforms, and not just Facebook, collect significant amounts of very rich data about their users. They have a number of strategies for commercializing these treasure troves, including giving app developers access to the platform and selling access to APIs that provide streams of user data. Users typically consent to some secondary uses of their personal information under the platform’s terms of service (TOS). Social media platform companies also have developer TOS that set the terms and conditions under which developers or API users can access the platform and/or its data. What the Cambridge Analytica case reveals is what may (or may not) happen when a developer breaches these TOS.

Because developer TOS are a contract between the platform and the developer, a major problem is the lack of transparency and the grey areas around enforcement. I have written about this elsewhere in the context of another ugly case involving social media platform data: the Geofeedia scandal (see my short blog post here, full article here). In that case, a company under contract with Twitter and other platforms misused the data it contracted for, transforming it into analytics that allowed police services to target protesters against police killings of African American men. This was a breach of the contractual terms between Twitter and the developer, and it came to public attention only because of the work of a third party (in that case, the ACLU of California). In the Cambridge Analytica case, the story likewise came to light only because of a whistleblower (albeit one who had been involved in the company’s activities). In either instance, it is worth asking whether, absent third-party disclosure, the situation would ever have come to light. Given that social media companies provide, on a commercial basis, access to vast amounts of personal information, it is important to ask what proactive measures, if any, they take to ensure that developers comply with their TOS. Does enforcement take place only when there is a public relations disaster? If so, what other unauthorized exploitations of personal information are occurring without our knowledge? And should platform companies that are the source of huge amounts of personal information be held to a higher standard of responsibility in their commercial dealings with it?
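Proactive monitoring of this kind is, at least in principle, technically feasible. The sketch below is purely hypothetical (the developer identifier, limits, and field names are all invented, and no platform is known to run exactly this), but it shows how a platform could compare a developer’s actual API usage against the purpose declared in its developer agreement:

```python
# Hypothetical platform-side monitoring: flag developers whose API usage
# departs from the purpose declared in their developer agreement.
# All names and numbers here are invented for illustration.
from collections import defaultdict

# Caps derived (hypothetically) from a declared purpose such as
# "academic research, consenting study participants only".
DECLARED_LIMITS = {
    "quiz-app-123": {
        "max_users_per_day": 1_000,
        "allowed_fields": {"id", "quiz_answers"},
    }
}

usage = defaultdict(lambda: {"users": set(), "fields": set()})

def record_api_call(developer_id, user_id, fields):
    # Called once per API request the developer makes.
    u = usage[developer_id]
    u["users"].add(user_id)
    u["fields"].update(fields)

def tos_violations(developer_id):
    # Compare observed usage against the developer's declared scope.
    limits = DECLARED_LIMITS[developer_id]
    u = usage[developer_id]
    problems = []
    if len(u["users"]) > limits["max_users_per_day"]:
        problems.append("user volume exceeds declared research scope")
    undeclared = u["fields"] - limits["allowed_fields"]
    if undeclared:
        problems.append(f"undeclared fields accessed: {sorted(undeclared)}")
    return problems

# e.g. record_api_call("quiz-app-123", "user-9", {"id", "friends_likes"})
#      tos_violations("quiz-app-123")
#      -> ["undeclared fields accessed: ['friends_likes']"]
```

That platforms could build something like this, yet enforcement still appears to be reactive, is part of what makes the transparency gap troubling.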

Different countries have different data protection laws, so in this instance I will focus on Canadian law, to the extent that it applies. Indeed, the federal Privacy Commissioner has announced that he is looking into Facebook’s conduct in this case. Under the Personal Information Protection and Electronic Documents Act (PIPEDA), a company is responsible for the personal information it collects. If it shares those data with another company, it is responsible for ensuring that proper limitations and safeguards are in place so that any use or disclosure is consistent with the originating company’s privacy policy. This is known as the accountability principle. Clearly, if the data of Canadians was involved, Facebook would have some responsibility under PIPEDA. What is less clear is how far that responsibility extends. Clause 4.1.3 of Schedule I to PIPEDA reads: “An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.” [My emphasis] One question, therefore, is whether it is enough for Facebook simply to have in place a contract that requires its developers to respect privacy laws, or whether Facebook’s responsibility goes further. Note that in this case Facebook appears to have directed Cambridge Analytica to destroy all improperly collected data, and it appears to have cut Cambridge Analytica off from further access to its data. Do these steps satisfy Facebook’s obligations under PIPEDA? It is not at all clear that PIPEDA places any responsibility on organizations to actively supervise or monitor the companies with which they share data under contract. It is fair to ask, therefore, whether, in cases where social media platforms share huge volumes of personal data with developers, the data-sharing framework in PIPEDA is sufficient to protect the privacy interests of the public.
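What the “other means” in clause 4.1.3 might look like in practice is open to interpretation. One purely illustrative possibility (nothing in PIPEDA prescribes this particular technique, and the field names below are invented) is for the originating organization to pseudonymize records before transfer, keeping the re-identification key in its own hands:

```python
# Illustration only: one possible "other means" of protection under
# clause 4.1.3 -- pseudonymizing records before transfer so the
# third-party processor never holds directly identifying information.
import hmac
import hashlib

SECRET_KEY = b"kept-by-the-originating-organization"  # never shared

def pseudonymize(record):
    # Derive a stable but non-reversible token from the identifier;
    # only the holder of SECRET_KEY can link tokens back to people.
    token = hmac.new(SECRET_KEY, record["email"].encode(),
                     hashlib.sha256).hexdigest()
    # Strip direct identifiers; keep only the fields the processor needs.
    return {"user_token": token,
            "province": record["province"],
            "plan_type": record["plan_type"]}
```

Even so, a safeguard of this kind addresses only what the processor receives; it says nothing about ongoing supervision of what the processor then does, which is precisely the gap identified above.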

Another interesting question arising from the scandal is whether what took place amounts to a data breach. Facebook has claimed that it was not a data breach; from its perspective, this is a case of a developer that broke its contract with Facebook. It is easy to see why Facebook would want to characterize the incident in this way. Data breaches can bring with them a whole other level of enforcement, and can also give rise to liability in class action lawsuits for failure to properly protect the information. In Canada, new data breach notification provisions (which have still not come into effect under PIPEDA) would impose notification requirements on an organization that experienced a breach. It is interesting to note, though, that the data breach notification requirements are triggered where there is a “real risk of significant harm to an individual” [my emphasis]. Given what has taken place in the Cambridge Analytica scandal, it is worth asking whether the drafters of this provision should also have included a real risk of significant harm to the broader public. In this case, the personal information was used to subvert democratic processes, a public rather than an individual harm.

The point about public harm is an important one. In both the Geofeedia and Cambridge Analytica scandals, the exploitation of personal information was on such a scale and for such purposes that although individual privacy may have been compromised, the greater harms were to the public good. Our data protection model is based upon consent and places the individual and his or her choices at its core. Increasingly, however, protecting privacy serves goals that go well beyond the interests of any one individual. Not only is the consent model broken in an era of ubiquitous and continuous data collection; it is also inadequate to address the harms that flow from the improper exploitation of personal information in our big data environment.

Published in Privacy