Teresa Scassa - Blog

Displaying items by tag: Privacy

Last week I wrote about a very early ‘finding’ under Canada’s Personal Information Protection and Electronic Documents Act which raises some issues about how the law might apply in the rapidly developing big data environment. This week I look at a more recent ‘finding’ – this time 5 years old – that should raise red flags regarding the extent to which Canada’s laws will protect individual privacy in the big data age.

In 2009, Assistant Privacy Commissioner Elizabeth Denham (who is now the B.C. Privacy Commissioner) issued her findings following an investigation into a complaint by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) regarding the practices of a Canadian direct marketing company. The company combined information from different sources to create profiles of individuals linked to their home addresses. Customized mailing lists based on these profiles were then sold to clients seeking to market their products or services to individuals within particular demographics.

Consumer profiling is a big part of big data analytics, and today consumer profiles will draw upon vast stores of personal information collected from a broad range of online and offline sources. The data sources at issue in this case were much simpler, but the lessons that can be learned remain important.

The respondent organization used aggregate geodemographic data, which it obtained from Statistics Canada, and which was sorted according to census dissemination areas. This data was not specific to particular identifiable individuals – the aggregated data was not meant to reveal personal information, but it did give a sense of, for example, the distribution of income by geographic area (in this case, by postal code). The company then took name and address information from telephone directories and matched the demographic data to the name and location information derived from the directories. Based on the geodemographic data, assumptions were made about income, marital status, likely home-ownership, and so on. The company also added its own assumptions about religion, ethnicity and gender based upon the telephone directory information – essentially drawing inferences from the subscribers’ names. These assumptions were made according to ‘proprietary models’. Other proprietary models were used to infer whether the individuals lived in single or multi-family dwellings. The result was a set of profiles of named individuals, with inferences drawn about their income, ethnicity and gender. CIPPIC’s complaint was that the respondent company was collecting, using and disclosing the personal information of Canadians without their consent.
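The mechanics of this kind of matching are simple to sketch. The following Python fragment is purely illustrative – the respondent’s ‘proprietary models’ were never disclosed, and every name, number and field in it is invented – but it shows how a simple join on postal code converts aggregate census data and name-based guesses into a profile of a named individual:

```python
# Hypothetical illustration only: the company's actual models were proprietary
# and never disclosed. This sketch shows the general shape of such profiling.

# Aggregate geodemographic data, keyed by postal code (about areas, not people).
census_by_postal_code = {
    "K1A0A1": {"median_income": 95000, "homeownership_rate": 0.81},
    "H2X1Y4": {"median_income": 41000, "homeownership_rate": 0.32},
}

# Name and address records taken from a public telephone directory.
directory = [
    {"name": "Leslie O'Keefe", "postal_code": "K1A0A1"},
    {"name": "Marie Tremblay", "postal_code": "H2X1Y4"},
]

def infer_from_name(name: str) -> dict:
    """Stand-in for a name-based inference model (ethnicity, gender, etc.)."""
    first, surname = name.split()[0], name.split()[-1]
    return {
        "inferred_ethnicity": "Irish" if surname.startswith("O'") else "unknown",
        "inferred_gender": "female" if first in {"Marie", "Leslie"} else "unknown",
    }

# The join step: aggregate neighborhood data and name-based guesses become
# a profile attached to a named individual at a known address.
profiles = [
    {**person,
     **census_by_postal_code[person["postal_code"]],
     **infer_from_name(person["name"])}
    for person in directory
]

print(profiles[0])
# {'name': "Leslie O'Keefe", 'postal_code': 'K1A0A1', 'median_income': 95000,
#  'homeownership_rate': 0.81, 'inferred_ethnicity': 'Irish',
#  'inferred_gender': 'female'}
```

The significance of the sketch lies in the join itself: each input taken alone is either aggregate data or a mere assumption, but the output row attaches income, ethnicity and gender attributes to a named person at a known address.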

The findings of the Assistant Privacy Commissioner (APC) are troubling for a number of reasons. She began by characterizing the telephone directory information as “publicly available personal information”. Under PIPEDA, information that falls into this category, as defined by the regulations, can be collected, used and disclosed without consent, so long as the collection, use and disclosure are for the purposes for which it was made public. Telephone directories fall within the Regulations Specifying Publicly Available Information. However, the respondent organization did more than simply resell directory information.

Personal information is defined in PIPEDA as “information about an identifiable individual”. The APC characterized the aggregate geodemographic data as information about certain neighborhoods, and not information about identifiable individuals. She stated that “the fact that a person lives in a neighborhood with certain characteristics” was not personal information about that individual.

The final piece of information associated with the individuals in this case was the set of assumptions about, among other things, religion, ethnicity and gender. The APC characterized these as “assumptions”, rather than personal information – after all, the assumptions might not be correct.

Because the respondent’s clients provided the company with the demographic characteristics of the group it sought to reach, and because the respondent company merely furnished names and addresses in response to these requests, the APC concluded that the only personal information that was collected, used or disclosed was publicly available personal information for which consent was not required. (And, in case you are wondering, allowing people to contact individuals was one of the purposes for which telephone directory information is published – so the “use” by companies of sending out marketing information fell within the scope of the exception).

And thus, by considering each of the pieces of information used in the profile separately, the respondent’s creation of consumer profiles from diffuse information sources fell right through the cracks in Canada’s data protection legislation. This does not bode well for consumer privacy in an age of big data analytics.

The most troubling part of the approach taken by the APC is that which dismisses “assumptions” made about individuals as being merely assumptions and not personal information. Consumer profiling is about attributing characteristics to individuals based on an analysis of their personal information from a variety of sources. It is also about acting on those assumptions once the profile is created. The assumptions may be wrong, the data may be flawed, but the consumer will nonetheless have to bear the effects of that profile. These effects may be as minor as being sent advertising that may or may not match their activities or interests; but they could be as significant as decisions made about entitlements to certain products or services, about what price they should be offered for products or services, or about their desirability as a customer, tenant or employee. If the assumptions are not “actual” personal information, they certainly have the same effect, and should be treated as personal information. Indeed, the law accepts that personal information in the hands of an organization may be incorrect (hence the right to correct personal information), and it accepts that opinions about an individual constitute their personal information, even though the opinions may be unfair.

The treatment of the aggregate geodemographic information is also problematic. On its own, it is safe to say that aggregate geodemographic information is information about neighborhoods and not about individuals. But when someone looks up the names and addresses of the individuals living in an area and matches that information to the average age, income and other data associated with their postal codes, then they have converted that information into personal information. As with the ethnicity and gender assumptions, the age, income, and other assumptions may be close or they may be way off base. Either way, they become part of a profile of an individual that will be used to make decisions about that person. Leslie O’Keefe may not be Irish, he may not be a woman, and he may not make $100,000 a year – but if he is profiled in this way for marketing or other purposes, it is not clear why he should have no recourse under data protection laws.

Of course, the challenge faced by the APC in this case was how to manage the ‘balance’ set out in s. 3 of PIPEDA between the privacy interests of individuals and the commercial need to collect, use and disclose personal information. In this case, to find that consent – that cornerstone of data protection laws – was required for the use and disclosure of manufactured personal information would be to hamstring an industry built on the sale of manufactured personal information. As the use – and the sophistication – of big data and big data analytics advances, organizations will continue to insist that they cannot function or compete without the use of massive stores of personal information. If this case is any indication, decision makers will be asked to continue to blur and shrink the edges of key concepts in the legislation, such as “consent” and “personal information”.

The PIPEDA complaint in this case dealt with relatively unsophisticated data used for relatively mundane purposes, and its importance may be too easily overlooked as a result. But how we define personal information and how we interpret data protection legislation will have enormous importance as the role of big data analytics in our lives continues to grow. Both this decision and the one discussed last week offer some insights into how Canada’s data protection laws might be interpreted or applied – and they raise red flags about the extent to which these laws are adequately suited to protecting privacy in the big data era.

Published in Privacy

A long past and largely forgotten ‘finding’* from the Office of the Privacy Commissioner of Canada offers important insights into the challenges that big data and big data analytics will pose for the protection of Canadians’ privacy and consumer rights.

Thirteen years ago, former Privacy Commissioner George Radwanski issued his findings on a complaint that had been brought against a bank. The complainant had alleged that the bank had wrongfully denied her access to her personal information. The requirement to provide access is found in the Personal Information Protection and Electronic Documents Act (PIPEDA). The right of access also comes with a right to demand the correction of any errors in the personal information in the hands of the organization. This right is fundamentally important, not just to privacy. Without access to the personal information being used to inform decision-making, consumers have very little recourse of any kind against adverse or flawed decision-making.

The complainant in this case had applied for and been issued a credit card by the bank. What she sought was access to the credit score that had been used to determine her entitlement to the card. The bank had relied upon two credit scores in reaching its decision. The first was the type produced by a credit reporting agency – in this case, Equifax. The second was an internal score generated by the bank using its own data and algorithm. The bank was prepared to release the former to the complainant, but refused to give her access to the latter. The essence of the complaint, therefore, was whether the bank had breached its obligations under PIPEDA to give her access to the personal information it held about her.

The Privacy Commissioner’s views on the interpretation and application of the statute in this case are worth revisiting 13 years later as big data analytics now fuel so much decision-making regarding consumers and their entitlement to or eligibility for a broad range of products and services. Credit reporting agencies are heavily regulated to ensure that decisions about credit-worthiness are made fairly and equitably, and to ensure that individuals have clear rights to access and to correct information in their files. For example, credit reporting legislation may limit the types of information and the data sources that may be used by credit reporting agencies in arriving at their credit scores. But big data analytics are now increasingly relied upon by all manner of organizations that are not regulated in the same way as credit-reporting agencies. These analytics are used to make decisions of similar importance to consumers – including decisions about credit-worthiness. There are few limits on the data that is used to fuel these analytics, nor is there much transparency in the process.

In this case, the bank justified its refusal to disclose its internal credit score on two main grounds. First, it argued that this information was not “personal information” within the meaning of PIPEDA because it was ‘created’ internally and not collected from the consumer or any other sources. The bank argued that this meant that it did not have to provide access, and that in any event, the right of access was linked to the right to request correction. The nature of the information – which was generated based upon a proprietary algorithm – was such that it was not “facts” open to correction.

The argument that generated information is not personal information is a dangerous one, as it could lead to a total failure of accountability under data protection laws. The Commissioner rejected this argument. In his view, it did not matter whether the information was generated or collected; nor did it matter whether it was subject to correction or not. The information was personal information because it related to the individual. He noted that “opinions” about an individual were still considered to be personal information, even though they are not subject to correction. This view of ‘opinions’ is consistent with subsequent findings and decisions under PIPEDA and comparable Canadian data protection laws. Thus, in the view of the Commissioner, the bank’s internally generated credit score was the complainant’s personal information and was subject to PIPEDA.

The bank’s second argument was more successful, and is problematic for consumers. The bank argued that releasing the credit score to the complainant would reveal confidential commercial information. Under s. 9(3)(b) of PIPEDA, an organization is not required to release personal information in such circumstances. The bank was not arguing so much that the complainant’s score itself was confidential commercial information; rather, what was confidential were the algorithms used to arrive at the score. The bank argued that these algorithms could be reverse-engineered from a relatively small sample of credit scores. Thus, a finding that such credit scores must be released to individuals would leave the bank open to the hypothetical situation where a rival might organize or pay 20 or so individuals to seek access to their internally generated credit scores in the hands of the bank, and that set of scores could then be used to arrive at the confidential algorithms. The Commissioner referred this issue to an expert on algorithms and concluded that “although an exact determination of a credit-scoring model was difficult and highly unlikely, access to customized credit scores would definitely make it easier to approximate a bank’s model.”
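To see why even a small number of disclosed scores worried the bank, consider a toy version of the problem. Assume – purely for illustration, since nothing about the bank’s actual model is public – that the internal score were a linear function of a few known application features. Twenty colluding requesters who know their own inputs could then recover the weights almost exactly by ordinary least squares:

```python
# Hypothetical sketch of the bank's reverse-engineering concern. We assume,
# purely for illustration, a linear scoring model over four known features;
# real models are more complex, which is why the expert called exact recovery
# "difficult and highly unlikely" while approximation remained feasible.
import numpy as np

rng = np.random.default_rng(0)

# The bank's secret scoring weights (unknown to the attackers).
secret_weights = np.array([0.5, -1.2, 2.0, 0.3])

# Twenty colluding applicants each know their own feature values...
X = rng.normal(size=(20, 4))
# ...and obtain their own internally generated scores via access requests.
scores = X @ secret_weights

# A least-squares fit over the pooled (features, score) pairs recovers
# an approximation of the secret weights.
estimated, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(np.round(estimated, 3))  # approximately [0.5, -1.2, 2.0, 0.3]
```

Real credit-scoring models are not this simple, but the same fitting logic underlies the expert’s conclusion that access to customized scores “would definitely make it easier to approximate a bank’s model.”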

The Commissioner noted that under s. 9(3)(b) there has to be some level of certainty that the disclosure of personal information will reveal confidential commercial information before disclosure can be refused. In this case, the Commissioner indicated that he had “some difficulty believing that either competitors or rings of algorithmically expert fraud artists would go to the lengths involved.” He went on to say that “[t]he spectre of the banks falling under systematic assault from teams of loan-hungry mathematicians is simply not one I find particularly persuasive.” Notwithstanding this, he ruled in favour of the bank. He noted that other banks shared the same view as the respondent bank, and that competition in the banking industry was high. Since he had found it was technically possible to reverse-engineer the algorithm, he was of the view that he had to find that the release of the credit score would reveal confidential commercial information. He was satisfied with the evidence the bank supplied to demonstrate how closely guarded the credit-scoring algorithm was. He noted that in the UK and Australia, relatively new guidelines required organizations to provide only general information regarding why credit was denied.

The lack of transparency of algorithms used in the big data environment becomes increasingly problematic the more such algorithms are used. Big data analytics can be used to determine credit-worthiness – and such determinations are made not just by banks but by all manner of companies that extend consumer credit through loans, don’t-pay-for-a-year deals, purchase-by-installment, store credit cards, and so on. They can also be used to determine who is entitled to special offers or promotions, for price discrimination (where some customers are offered better prices for the same products or services), and in a wide range of other contexts. Analytics may also be used by prospective employers, landlords or others whose decisions may have important impacts on people’s lives. Without algorithmic transparency, it might be impossible to know whether the assumptions, weightings or scoring factors are biased, influenced by sexism or racism (or other discriminatory considerations), or simply flawed.

There may be some comfort to be had in the fact that, in this case, the Commissioner was allowed to have access to the scoring model used. He stated that he found it innocuous – although it is not clear what kind of scrutiny he gave it. After all, his mandate extended only to decisions relating to the management of personal information, and did not extend to issues of discrimination. It is also worth noting that the Commissioner seems to suggest that each case must be decided on its own facts, and that what the complainant stood to gain and the respondent stood to lose were relevant considerations. In this case, the complainant had not been denied credit, so in the Commissioner’s view there was little benefit to her in the release of the information to be weighed against the potential harm to the bank. Nevertheless, the decision raises a red flag around transparency in the big data context.

In the next week or so I will be posting a ‘Back to the Future II’ account of another, not quite so old, PIPEDA finding that is also significant in the big data era. Disturbingly, this decision eats away at Commissioner Radwanski’s conclusion on the issue of “personal information” as it relates to generated or inferred information about individuals. Stay tuned!



* Because the Privacy Commissioner of Canada has no order-making powers, he can only issue “findings” in response to complaints filed with the office. The ‘findings’ are essentially opinions as to how the Act applies in the circumstances of the complaint. If the complaint is considered well-founded, the Commissioner can also make recommendations as to how the organization should correct its practices. For binding orders or compensation the complainant must first go through the complaints process and then take the matter to the Federal Court. Few complainants do so. Thus, while findings are non-binding and set no precedent, they do provide some insight into how the Commissioner would interpret and apply the legislation.

 

Published in Privacy

In the fall of 2014, Quebec’s Commission d’accès à l’information, which is responsible for overseeing Quebec’s private sector data protection legislation, ruled that the province’s law applied to Rogers Communications Inc., a federally regulated company. The company had been the subject of a complaint that it had violated Quebec’s data protection law when it required new cellular phone subscribers to provide two pieces of identification, and then recorded the identification numbers on the furnished ID documents. Administrative Judge Lina Desbiens ruled that the complaint was well-founded. In her view, while it was legitimate to ask to see identification for the purposes of establishing the identity of the client, it was not necessary to record the identification numbers. Further, she found that the document ID numbers were not necessary for the purposes of carrying out a credit check – other information would suffice for this purpose.

The issue of the application of the Quebec data protection statute is the more interesting part of this decision. Because Rogers is part of the federally-regulated telecommunications industry, the federal Personal Information Protection and Electronic Documents Act (PIPEDA) applies to its activities. Certainly there have been plenty of cases in which PIPEDA has been applied to Rogers or to its sister telecommunications companies.[1] From Rogers’ point of view, if the federal Act applied, then the provincial statute did not. Judge Desbiens disagreed. She noted that s. 81 of the Act Respecting the Protection of Personal Information in the Private Sector gave the Commission jurisdiction over “any matter relating to the protection of personal information as well as into the practices of a person who carries on an enterprise and who collects, holds, uses or communicates such information to third persons.” She read this to mean that the Commission’s jurisdiction extended to the collection, use or disclosure of personal information by any business operating in Quebec. Since Rogers operated its business in Quebec, it was thus subject to the provincial law. Although the federal law might also apply to Rogers, Judge Desbiens found that it would only apply to the exclusion of the provincial law where the application of the provincial law would affect, in some significant way, the exercise of federal jurisdiction. In this case, she observed, Rogers was a telecommunications company, but the decision as to what pieces of identification it could require from new customers and what information it could record was not something that would affect in any way federal jurisdiction over telecommunications.

Judge Desbiens cited in support of her position several other decisions of the Commission d’accès à l’information in which the Quebec legislation was applied to companies in federally regulated industries. Notably, however, the facts addressed in these decisions predated the coming into effect of PIPEDA. Judge Desbiens also cited the more recent case of Nadler c. Rogers Communications Inc. This case involved a civil suit for breach of privacy, and while the court considered the Quebec private sector data protection statute in its reasons, no argument appears to have been made regarding jurisdictional issues.

Judge Desbiens’ ultimate conclusion was that it was possible for a company to comply with both federal and provincial statutes by satisfying the stricter of the two sets of norms. In any event, she expressed the view that her decision on the merits did not diverge from the position of the federal Privacy Commissioner on similar issues.[2]

The decision that both federal and provincial data protection statutes apply to federally regulated companies doing business in Quebec seems problematic. On the one hand, federally regulated companies are frequently subject to provincial laws in some of their day-to-day business activities. This is why, for example, some banking products or services are not available in all provinces. Arguably, therefore, it should not matter that a federally-regulated company be required to comply with provincial data protection norms. However, the situations are not equivalent. In the case of personal information, the federal government has provided a national scheme that specifically applies to federally regulated businesses. While Judge Desbiens is most likely correct that there would be little difference in the outcome of this case under PIPEDA, it should not necessarily be assumed that this would be so on a different set of facts. And, while it is true that the data protection decision in this case does not interfere with federal jurisdiction over telecommunications, it does seem clearly to trench upon federal jurisdiction over data protection in the federally regulated private sector.

 



[1] For just a few examples, see: Kollar v. Rogers Communications Inc., 2011 FC 452, http://www.canlii.org/en/ca/fct/doc/2011/2011fc452/2011fc452.pdf; Buschau v. Rogers Communications Inc., 2011 FC 911, http://www.canlii.org/en/ca/fct/doc/2011/2011fc911/2011fc911.pdf; Johnson v. Bell Canada, [2009] 3 FCR 67, 2008 FC 1086; Henry v. Bell Mobility, 2014 FC 555.

[2] The Commission cited several documents published on the website of the Office of the Privacy Commissioner of Canada. These include: Collection of Drivers’ Licence Numbers Under Private Sector Privacy Legislation, https://www.priv.gc.ca/information/pub/guide_edl_e.asp; Best Practices for the Use of Social Insurance Numbers in the Private Sector, https://www.priv.gc.ca/resource/fs-fi/02_05_d_21_e.asp; and Photo Identification Guidance, https://www.priv.gc.ca/resource/fs-fi/02_05_d_34_tips_e.asp.

Published in Privacy

Class action law suits for breach of privacy are becoming increasingly common in Canada. For example, the B.C. Supreme Court, the Ontario Superior Court, and Newfoundland and Labrador Supreme Court have all recently certified class action law suits in relation to alleged privacy breaches.

The use of the class action law suit can be a useful solution to some of the problems that plague the victims of privacy breaches. These difficulties include:

1) The lack of any other meaningful and effective recourse for a large-scale privacy breach. Complaints regarding a large-scale privacy breach by a private sector corporation can be made to the Privacy Commissioner of Canada under the Personal Information Protection and Electronic Documents Act (PIPEDA) (or to his provincial counterparts in B.C., Quebec or Alberta, depending upon the nature of the corporation and its activities). However, the federal privacy commissioner can only investigate and issue a report with non-binding recommendations. He has no order-making powers. Further, there is no power to award damages. An individual who feels they have been harmed by a privacy breach must, after receiving the Commissioner’s report, make an application to Federal Court for compensation. Damage awards in Federal Court under PIPEDA have been very low, ranging from about $0 to $5000 (with a couple of outlier exceptions). This amount of damages will not likely compensate for the time and effort required to bring the legal action, let alone the harm from the privacy breach. Perhaps more importantly, a few thousand dollars may not be a significant deterrent for companies whose practices have led to the privacy breach. The Privacy Commissioner’s Office has called for reform of PIPEDA to include order-making powers, and to give the Commissioner the authority to impose significant fines on companies whose conduct leads to significant privacy harms. Yet legislative reform in this area does not seem to be on the current government’s agenda.

2) The problem of establishing damages in privacy cases. It can be very difficult to establish damages in cases where privacy rights have been breached. For example, although a company’s data breach might affect tens or even hundreds of thousands of individuals, it may be very difficult for any of those individuals to show that the data breach has caused them any actual harm. Even if one or more of these individuals suffers identity theft, it may be impossible to link this back to that particular data breach. While all of the affected individuals may suffer some level of anxiety over the security of their personal information, it is hard to put a dollar value on this kind of anxiety – and courts have tended to take a rather conservative view in evaluating such harm. It simply might not be worth it for any individual to bring legal action in such circumstances – even if they were to succeed, their damages would likely not even come close to making the litigation worth their while.

3) The inaccessibility of justice on an individual scale. Frankly, the majority of Canadians are not in a financial position to take anyone to court for breach of privacy. (Those in the province of Quebec might be slightly better off in this regard, as privacy rights are much clearer and better established in private law in that province than they are elsewhere in Canada.) It should be noted that those few individuals who have sought damages in Federal Court for PIPEDA breaches have been self-represented – legal representation would simply be too costly given the stakes. A suit for the tort of invasion of privacy or for breach of a statutory privacy tort would be considerably more complex than an application for damages under PIPEDA. Damage awards in privacy cases are so low that litigation is not a realistic solution for most.

In this context it is not surprising that the class action law suit for breach of privacy is catching on in Canada. Such law suits allow large numbers of affected individuals to seek collective recourse. As mentioned earlier, the British Columbia Supreme Court recently certified a class action law suit against Facebook for breach of privacy rights protected under British Columbia’s Privacy Act. The claim in Douez v. Facebook, Inc. related to Facebook’s Sponsored Stories “product”. Advertisers who paid to make use of this product could use the names and likenesses of Facebook users in “sponsored stories” about their products or services. These “sponsored stories” would then be sent to the contacts of the person featured in the story. The court found that between September 9, 2012 and March 10, 2013, 1.8 million B.C. residents were featured in Sponsored Stories. The plaintiffs argued that this practice violated their privacy. Although the issues have not yet been litigated on their merits, the certification of the class action law suit allows the privacy claims to proceed on behalf of the significant number of affected individuals.

In Evans v. Bank of Nova Scotia, Justice Smith of the Ontario Superior Court of Justice certified a class action law suit against the Bank of Nova Scotia. In that case, an employee of the bank had, over almost a five-year period, accessed highly confidential personal banking information of 643 customers. In June of 2012, the Bank notified these customers that there may have been unauthorized access to their banking information; 138 of these individuals later informed the bank that they were victims of identity theft or fraud. The bank employee subsequently admitted that he had channelled the banking information through his girlfriend to individuals who sought to use the information for illegal purposes. The lawsuit claims damages for invasion of privacy and negligence, among other things, and argues that the bank should be held vicariously liable for the actions of its employee.

Most recently, in Hynes v. Western Regional Integrated Health Authority, the Newfoundland and Labrador Supreme Court certified a class action law suit against the Health Authority after it was discovered that an employee had improperly accessed 1,043 medical records without authorization. The information accessed included name and address information, as well as information about diagnostic and medical procedures at the hospital. This case is an example of where it may be difficult to assess or quantify the harm suffered by the particular individuals as a result of the breach, as it is not known how the information may have been used. The plaintiffs argued that both the statutory privacy tort in Newfoundland and the common law tort of intrusion upon seclusion were applicable, and that the Health Authority should be held vicariously liable for the acts of its employee. They also argued that the Health Authority had been negligent in its care of their personal information. The court found that the arguments raised met the necessary threshold at the class action certification stage – the merits remain to be determined once the case ultimately proceeds to trial.

What these three cases demonstrate is that class action law suits may give individuals a useful recourse in cases where data breaches have exposed their personal information and perhaps left them vulnerable to identity theft or other privacy harms. Such law suits may also act as a real incentive for companies to take privacy protection seriously. The cost of defending a class action law suit, combined with the possibility of a very substantial damages award (or settlement), and the potential reputational harm from high profile litigation, all provide financial incentives to properly safeguard personal information.

This may be welcome news for those who are concerned about what seems to be a proliferation of data breaches. It should not, however, let the federal government off the hook in terms of strengthening Canada’s private sector data protection legislation and giving the Privacy Commissioner more effective tools to act in the public interest to protect privacy by ensuring compliance with the legislation.

 

Published in Privacy

Just over a year ago, in Information and Privacy Commissioner of Alberta v. United Food and Commercial Workers, Local 401, the Supreme Court of Canada struck down Alberta’s Personal Information Protection Act (PIPA) on the basis that it violated the freedom of expression guaranteed by s. 2(b) of the Canadian Charter of Rights and Freedoms. The case arose after a union was found to have violated PIPA by collecting and using video and photo images of people crossing its picket lines in the course of a labour dispute without the consent of those individuals. The union was ultimately successful in its argument that the limitations on the collection, use and disclosure of personal information without consent contained in PIPA violated its freedom of expression. (You can read more about this decision in my earlier blog post here.)

As a remedy, the Supreme Court of Canada struck down the entire statute, but suspended the declaration of invalidity for a period of one year. This amount of time was considered reasonable for the Alberta legislature to amend the legislation to bring it into conformity with the Charter. The year passed without legislative action, and at the last minute the government scrambled to obtain an extension. The Court granted a six month extension on October 30, 2014.

The Alberta government has now introduced a bill to amend PIPA to bring it into conformity with the Charter. Bill 3 is framed in fairly narrow terms. In essence, it creates a new exception to the general rule that there can be no collection, use or disclosure of personal information without consent. This exception is specifically for trade unions. The collection, use or disclosure without consent is permissible if it is “for the purpose of informing or persuading the public about a matter of significant public interest or importance relating to a labour relations dispute involving the trade union” (proposed new sections 14.1, 17.1, and 20.1). The information collected, used or disclosed must be “reasonably necessary” for that purpose, and, in the circumstances, it must be reasonable to collect, use or disclose that information without consent.

The new provisions attempt to strike a balance between the right to privacy and the freedom of expression of trade unions. While it will now be permissible to collect, use or disclose personal information without consent in the context of a labour dispute, there is no blank cheque. Rather than exempt trade unions from the application of PIPA altogether, the new provisions set out the circumstances in which unions may act, and these actions will be under the supervision of the Office of the Information and Privacy Commissioner (OIPC). A person whose information is collected, used or disclosed without their consent by a union may still complain to the OIPC; the OIPC will get to determine if the union’s purpose was to inform or persuade the public “about a matter of significant public interest or importance relating to a labour relations dispute involving the trade union”. This wording is interesting – actions by a trade union taken in support of another trade union may not qualify, nor may actions carried out by a trade union to protest a government’s policies. Further, an adjudicator might decide that the information was collected, used or disclosed in relation to a matter that was not of significant public interest or importance. Whether this exception strikes the right balance is an open question which may arise in the course of some future dispute.

The issue of the balance between freedom of expression and privacy is an extremely interesting one, and it arises in other contexts under private sector data protection legislation. These competing rights are purportedly balanced, for example, by provisions that exempt journalistic, artistic and literary endeavors from the application of the statute in certain circumstances. However, as the United Food case demonstrates, these exceptions do not necessarily capture all of the actors who may have information of public interest that they wish to communicate. A few years ago I wrote an article about the “journalistic purposes” exception that is found in Alberta’s PIPA, as well as in B.C.’s Personal Information Protection Act and the federal Personal Information Protection and Electronic Documents Act. I argue that this exception may not strike the right balance between the right to journalistic freedom of expression and privacy. In the first place, it is not clear who is meant to be entitled to the exception (what are journalistic, artistic or literary purposes, and who gets to assert them?). Secondly, the exceptions are structured so that once it is decided that the acts in question fall within the exception, there can be no oversight to determine whether the manner in which the personal information was collected, used or disclosed went beyond what was reasonable for the legitimate information of the public.

Although the United Food saga may be approaching its close, the issues around the balance between freedom of expression and privacy are far from resolved. Expect to see these issues surface in cases arising under private sector data protection legislation (as was the case with United Food) as well as in other privacy contexts.

Note: I recently posted about a privacy law suit that raised freedom of expression issues. It can be found here.

 

Published in Privacy

In an interesting decision from the small claims court of Quebec, Google has been held liable for violating the plaintiff’s privacy rights after an image of her sitting on her front steps appeared on Google Streetview.

In Grillo v. Google Inc., the plaintiff, Ms Grillo, testified that she had decided to sit on her front steps briefly one day, while on vacation. She was checking her messages on her smart phone, when she noticed the Google Car driving by, with its mounted camera. It was not until five months later that she first went online to look for her house on Streetview. She was shocked to see herself sitting barefoot and wearing a loose, sleeveless top which revealed part of one of her breasts. Also visible in the image was her car, with the licence plate unblurred, and the civic number of her house.

The plaintiff testified that she was a very private person, and, in fact, had chosen to live where she did because it was a relatively private and untraveled area of the city. After she found the image on Streetview, she testified that she was the butt of a number of jokes at the bank where she worked; her partially exposed breast was particularly commented upon by her co-workers. She testified as to her sense of shame and embarrassment. She immediately complained to the Office of the Privacy Commissioner of Canada, which suggested that she contact Google in order to have them remove the images. She claimed that she tried to do this, using the features available on the Streetview site, but without success. Shortly afterwards, she claimed to have sent two copies of a letter to Google – one to its offices in Washington D.C., and one to its corporate headquarters in California, setting out her concerns, and specifically requesting that her licence plate information be removed. Google claimed never to have received either copy of this letter. Approximately two years later, Ms Grillo sought the assistance of a lawyer, and sent a letter to Google demanding that “all photographs of our client, her breast, her car’s license plate and her civic address” be blurred or removed. The letter also claimed damages in the amount of $45,000. A short time after this letter was received, Google notified Ms Grillo’s lawyer that the images had been blurred.

Ms Grillo initiated a law suit against Google to recover damages related to the display of the images. However, perhaps because she was unrepresented at this point, she initiated her action in Quebec’s small claims court. Because this court has jurisdiction only over claims of $7000 or less, she limited her damage claim to this amount. In terms of the damages she claimed to have suffered, she noted that she had been mocked and humiliated at work, and had left her job at the bank as a result. She had also been on an extended period of sick leave prior to resigning her position – this was due to depression for which she was receiving care. She emphasized that she was a very private person who preferred her anonymity, and who had made choices about where to live and what kind of online activity to engage in (or not) with a view to this desire for privacy and anonymity.

The legal basis for the claim of violation of privacy rights in this case is found both in the Civil Code of Quebec and the Quebec Charter of Human Rights and Freedoms. The Civil Code sets out a right to privacy and identifies a series of acts that are considered to violate that right. One of these is the use of a person’s name or image without their consent for any purpose other than the legitimate information of the public. The Quebec Charter also sets out a right to privacy and to human dignity.

The leading case in Quebec on the right to privacy as it relates to the use of a person’s image is Aubry v. Éditions Vice-Versa. In that case, the Supreme Court of Canada awarded damages after a magazine published a photograph of a young woman sitting outside on her front steps. The photograph had been taken without her knowledge or consent. Drawing on this decision, Justice Breault explained that a photograph taken of a person in a public space in Quebec could not be circulated without that person’s consent unless the public’s legitimate right to information prevailed over the right to privacy. He noted that in Quebec, freedom of expression did not trump privacy rights; the two considerations must necessarily be balanced.

In this case, Google argued that Ms Grillo had been sitting on her front steps in plain view of her neighbors or of any passersby. Since she was in public view, it argued, she had no right to privacy. Justice Breault disagreed. He rejected the idea that there was a strict dichotomy between public and private spaces. In this case, he noted that Ms Grillo lived on a quiet street, and that the relative level of privacy on that street was something that was of importance to her. Further, she was not engaged in any sort of public activity: she was on vacation, sitting outside her home. She was entitled to expect that her privacy and her right to control her image would not be infringed by the taking and distribution of a photo without her consent.

Google also argued that Ms Grillo was not identifiable from the photograph because her face had been blurred. However, the court found that the other details in the photograph made her identifiable, and that these other details were, as a result, also “personal information”. Justice Breault noted in particular that the photograph showed her car licence plate, and her house number – details Google admitted had been missed by its blurring algorithm.

Finally, Google argued that the dissemination of the photograph without Ms Grillo’s consent could be justified as it was for the “legitimate information of the public”. In this respect, it argued that its Streetview service was of broad use and interest to the public. Justice Breault rejected “social utility” as a basis for justifying a breach of privacy. It was not enough to argue that Streetview in general served a useful public purpose; it was necessary to show that there was a dominant public interest in the circulation of the plaintiff’s image – an interest that would outweigh the plaintiff’s privacy interest. The court found that no such public interest existed in this case. Thus, Justice Breault concluded that the plaintiff’s right to privacy had been violated.

In considering the amount of damages to award, Justice Breault found that Ms Grillo had not adequately established the extent to which her image had actually been viewed by members of the public. He assumed that the number of viewers would be relatively low, and limited mainly to friends and co-workers. He also found that she had not established a causal relationship between the dissemination of her image on Streetview and the depression that she had suffered. He noted that she had produced no witnesses as to the state of her health. Justice Breault also found it significant that she had waited two years between the time she had discovered the image and the time that she had sent the lawyer’s letter to Google. He noted that the images had been blurred immediately after Google’s receipt of the letter, suggesting that she could have mitigated any harm she suffered by acting much sooner. Nevertheless, Justice Breault accepted that she had been deeply shocked by the publication of the image and that she had been hurt as well by the comments of her co-workers. He awarded her $2250 in damages, along with costs of $159.

Published in Privacy

Do you have a reasonable expectation of privacy in the data recorded by your car’s airbag sensing diagnostic module (SDM)? Did you even know your car has an SDM? Two recent court cases highlight important privacy issues related to this technology – and by extension to technology embedded into other consumer products that is capable of recording user information.

Both R. v. Hamilton from the Ontario Superior Court of Justice and R. v. Fedan from the BC Supreme Court are cases involving automobile accidents where police extracted, without a warrant, data recorded on the “black box” associated with vehicle airbag systems. These little ‘black boxes’ are referred to alternatively as sensing diagnostic modules (SDMs) or airbag control modules (ACMs). The devices are installed in cars along with the airbag system. Their recording function is triggered by the sudden deceleration that precedes the deployment of the airbags, and they typically record only a few seconds of data leading up to impact.
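To make the discussion concrete, here is a hypothetical sketch of the kind of record such a device might hold. Every field name and value below is invented – actual formats vary by manufacturer – but it reflects the categories of data described in the two decisions: a few seconds of pre-impact speed, braking and seatbelt status.

```python
# Hypothetical sketch of an SDM record. Field names, units and sampling
# details are invented for illustration; actual formats vary by manufacturer.
from dataclasses import dataclass

@dataclass
class SdmSample:
    seconds_before_impact: float
    speed_kmh: float
    brake_applied: bool
    driver_seatbelt_latched: bool

@dataclass
class SdmRecord:
    trigger: str              # e.g. sudden deceleration preceding airbag deployment
    samples: list[SdmSample]  # only a few seconds of pre-impact data

record = SdmRecord(
    trigger="deceleration_threshold_exceeded",
    samples=[
        SdmSample(5.0, 112.0, False, True),
        SdmSample(4.0, 110.0, False, True),
        SdmSample(3.0, 104.0, True, True),
        SdmSample(2.0, 88.0, True, True),
        SdmSample(1.0, 61.0, True, True),
    ],
)
```

Whether such a record is information ‘generated by the vehicle’ or a trace of the driver’s own conduct is precisely the question that divided the two courts, as discussed below.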

It is a violation of s. 8 of the Canadian Charter of Rights and Freedoms for police to conduct a search without a warrant in circumstances where there is a reasonable expectation of privacy. Thus, a key issue in these airbag cases was whether there was a reasonable expectation of privacy in the SDM data, and, in consequence, whether the police should have obtained warrants prior to seizing the devices and extracting the data.

The two courts reached opposite conclusions on this issue. Justice MacDougall of the Ontario Superior Court of Justice found that the accused had a reasonable expectation of privacy in the data, and that his Charter rights were violated when the data was extracted without a warrant. This court found that the SDM was similar to a computer that recorded information about its user. By contrast, Justice Kloegman of the British Columbia Supreme Court found that there was no reasonable expectation of privacy in the SDM data.

The BC Court found that the driver had no reasonable expectation of privacy in the recorded data largely because he did not know that his car was equipped to record such data. As Justice Kloegman explained: “SDMs are a relatively new feature of motor vehicles and it is unlikely that the majority of drivers even know their vehicle is equipped with one or what it does.” (at para 22). In fact, the judge was prepared to distinguish Hamilton on this point – in Hamilton, the accused was an off-duty police officer who knew about such devices, and therefore could be found to have a reasonable expectation of privacy. However, for the court to base a reasonable expectation of privacy on whether or not a consumer realizes that the product they have purchased is recording data about their use of it is hugely problematic, particularly as we move into an era where more and more of our consumer items are “smart”. A reasonable expectation of privacy in recorded data should not depend upon whether the individual knew that their car, fridge, phone, thermostat, or any other consumer item was programmed to record data about their use of the device. One might even argue that the lack of awareness that one’s use of consumer devices leaves a data trail should result in an enhanced expectation of privacy.

The court in Fedan also criticized the finding of the court in Hamilton that the SDM was a kind of onboard computer, thus aligning it with other computing devices in which a reasonable expectation of privacy has been found by the courts. In rejecting the analogy to a computer, Justice Kloegman observed that when there was a triggering event, the SDM would “capture five seconds of data regarding speed, brakes, and seatbelts.” (para 23) She then stated that this was “information generated by the vehicle, not the driver.” (para 23) This too is reasoning about which ordinary individuals should be concerned. This is not data about the vehicle in the abstract (grey, Volvo, 2010); rather, it is data that reveals how the driver was interacting with the vehicle at the time of the accident. The information is clearly information about the driver – as the court in Hamilton found.

In spite of the conclusion by the BC court that the information at issue was not about the driver, the judge did admit that the “driver’s actions in operating the vehicle will cause the SDM to engage.” (at para 23) Nevertheless, she found that this did not engage a privacy interest since “those same actions would likely be visible to the public eye.” (at para 23) This conclusion is based on older case law that finds that there is no reasonable expectation of privacy in events that take place in public view. However, in the technology context, there is a much more nuanced understanding of what is publicly perceptible and what is not. Accidents can occur anywhere and in any conditions. In many circumstances, there will be no witnesses. Even where there are witnesses, eyewitness testimony is notoriously unreliable – and it is considerably less precise than technological records. Eyewitnesses, for example, will not be able to provide the very precise details recorded by an SDM regarding the speed of the vehicle or the extent of braking. It is worth noting that the court in Hamilton found, by contrast, that the data in an SDM “is of a qualitatively different type than what an observing member of the public could reasonably observe.” (at para 58)

The starkly different decisions in Hamilton and Fedan illustrate that there are privacy issues here that have yet to be conclusively resolved. The issue of the reasonable expectation of privacy in SDM data is one worth following as cases from other provinces in Canada start to emerge. The implications of the judicial approaches go well beyond on-board vehicle data recorders and may extend to a wide range of consumer products equipped with components that can record even small snippets of data.

 

Published in Privacy

The Quebec Court of Appeal has released its decision in Trudeau c. AD4 Distribution Canada inc., a case that balances the freedom of expression with the protection of privacy and dignity. This is an increasingly important theme in privacy case law in Canada; it was at the heart of a recent Supreme Court of Canada decision, albeit in a different context.

In Trudeau, the appellant, Stéfanie Trudeau, had launched a law suit against the defendant film company after it released a pornographic film featuring a caricature of her in her professional capacity as a Montreal police officer. She had sought an injunction to prevent the distribution of the film, as well as damages in the amount of $100,000. The film produced by the respondents was titled “728 Agente XXX”. It was described as a parody inspired by the conduct of police at the time of the 2012 Quebec student protests. Although the filmmakers did not use her name in the film, and did not hire an actress who resembled her, the character in the film wore her police badge number 728. The number was not chosen at random; the appellant had become notorious following the student protests. The Quebec Superior Court noted, in its decision, that Agent 728 had become famous almost overnight when a video of her pepper-spraying demonstrators circulated widely in both mainstream and social media. Her badge number was also featured at a later point in time in a video shown on mainstream and social media that depicted her forcible arrest of a man caught drinking in public in the Plateau area of Montreal. She was at one point suspended from the police force and an internal inquiry was held.

Ms Trudeau claimed that the film violated her right to privacy and her dignity (protected under sections 4 and 5 of Quebec’s Charter of Human Rights and Freedoms), and that they had usurped her name and image in the making of the film. The trial judge had rejected these arguments. On September 12, 2014, the Quebec Court of Appeal upheld this decision. The Court of Appeal agreed with the trial judge that there had been no usurpation of the appellant’s name or image – her name was not used in the film, and the actress who portrayed agent 728 did not resemble her. Although her badge number was used, and although her badge number could be linked to her through the extensive media coverage of the events leading to her notoriety, the Quebec Court of Appeal agreed with the trial judge that this was not enough to give rise to liability. It had to be shown not just that there was a link, but that any link between the appellant and the film violated her right to privacy or her dignity. The trial judge had found that her badge number was not part of her private identity, but rather was part of her public persona as a law enforcement agent. As a result, the caricature or parody in the film was not about her personally, but about her public persona – one that had engaged in highly publicized and controversial acts. The Court of Appeal agreed that her actions as a police officer could legitimately be the object of caricature and critical comment. According to the Court, the right to make a parody such as the film in question falls within the respondents’ freedom of expression. The Court accepted that there are limits on the extent to which a public figure can be subject to parody, but that these limits were not exceeded in this case. Here, according to the Court, the ordinary citizen would not believe that it is the appellant herself that is depicted, in any personal way, but only an effigy. The court found the parody to be so unrealistic that it could not diminish the appellant, in her personal capacity, in the mind of the public.

The appellant also argued that the fact that the film was pornographic was itself a violation of her dignity. The Court of Appeal disagreed, noting that the case did not involve the use of an actual photograph of the appellant in a pornographic context without her consent. The Court confirmed that the pornographic nature of the film did not remove it from the category of parody or caricature – a form of commentary that is protected by the freedom of expression.

 

Published in Privacy

A year ago in November, the Supreme Court of Canada struck down Alberta’s Personal Information Protection Act (PIPA) on the basis that it violated freedom of expression guaranteed by s. 2(b) of the Canadian Charter of Rights and Freedoms. The statute failed to strike an appropriate balance between the rights of striking workers to express themselves in the context of a labour dispute and the privacy rights of others. In Information and Privacy Commissioner of Alberta v. United Food and Commercial Workers, Local 401, an adjudicator under PIPA had ruled that the Union’s practice of taking photographs and videotapes of people crossing its picket line during a labour dispute – and of using some of the footage on its website – contravened the data protection legislation. (The case is discussed in more detail in an earlier blog post here.) The Union countered (ultimately, successfully) that to require it to seek consent to the collection and use of this personal information would infringe its right to freedom of expression.

Where legislation violates a Charter right, a court has various options. Here, both the Information and Privacy Commissioner of Alberta and the Attorney General of that province had asked the Court to strike down the legislation if it were found unconstitutional, rather than to perform judicial surgery on it. The Court agreed this was the better option, writing: “Given the comprehensive and integrated structure of the statute, we do not think it is appropriate to pick and choose among the various amendments that would make PIPA constitutionally compliant.” (at para 40). The Court added a one-year period in which the declaration of the legislation’s invalidity was suspended. This would allow the law to remain operative in the province, giving the legislature what was clearly thought to be ample time to introduce the amendment or amendments necessary to bring the statute into compliance with the Charter.

A one-year suspension of invalidity might suffice where a government is functioning as its citizens have a right to expect. However, in an age of increasingly dysfunctional governments, the Charter remedy of striking down entire statutes with a one-year suspension of invalidity may be a riskier gambit. It has certainly proved to be so in this case. Recognizing that it could not get the amendments through by the November 15 deadline set by the Supreme Court of Canada, the Alberta government has now asked the Court for an extension. The Court is likely to grant the extension – to do otherwise would result in a state of chaos in Alberta as far as private sector data protection is concerned.

Update Note:  On October 30, 2014 the Supreme Court of Canada agreed to a six month extension to the suspension of invalidity.

 

Published in Privacy
Wednesday, 02 July 2014 07:07

Privacy and Open Government

The public-oriented goals of the open government movement promise increased transparency and accountability of governments, enhanced citizen engagement and participation, improved service delivery, economic development and the stimulation of innovation. In part, these goals are to be achieved by making more and more government information public in reusable formats and under open licences. The Canadian federal government has committed to open government, and is currently seeking input on its implementation plan. The Ontario government is also in the process of developing an open government plan, and other provinces are at different stages of development of open government. Progress is also occurring at the municipal level across Canada, with notable open data and/or open government initiatives in Vancouver, Toronto, and Ottawa (to give a few examples).


Yet open government brings with it some privacy challenges that are not explicitly dealt with in existing laws for the protection of privacy. While there is some experience with these challenges in the access to information context – where privacy interests are routinely balanced against the goals of transparency and accountability (see my posting on a recent Supreme Court of Canada decision on this issue) – this experience may not be well adapted to developments such as open data and proactive disclosure, nor may it be entirely suited to the dramatic technological changes that have affected our information environment. In a recent open-access article, I identify three broad privacy challenges raised by open government. The first is how to balance privacy with transparency and accountability in the context of “public” personal information (for example, registry information that may now be put online and broadly shared). The second challenge flows from the disruption of traditional approaches to privacy based on a collapse of the distinctions between public and private sector actors. The third challenge is the potential for open government data – even if anonymized – to contribute to the big data environment in which citizens and their activities are increasingly monitored and profiled.

I invite you to have a look at this article, which is published in (2014) 6 Future Internet 397-413.

Published in Privacy