Teresa Scassa - Blog


This post is the second in a series that looks at the recommendations contained in the report on the Personal Information Protection and Electronic Documents Act (PIPEDA) issued by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI). My first post considered ETHI’s recommendation to retain consent at the heart of PIPEDA with some enhancements. At the same time, ETHI recommended some new exceptions to consent. This post looks at one of these – the exception relating to publicly available information.

Although individual consent is at the heart of the PIPEDA model – and ETHI would keep it there – the growing number of exceptions to consent in PIPEDA is reason for concern. In fact, the last round of amendments to PIPEDA, in the 2015 Digital Privacy Act, saw the addition of ten new exceptions to consent. While some of these were relatively uncontroversial (e.g. making it clear that consent was not needed to communicate with the next of kin of an injured, ill or deceased person), others were much more substantial in nature. In its 2018 report ETHI has made several recommendations that continue this trend – creating new contexts in which individual consent will no longer be required for the collection, use or disclosure of personal information. In this post, I focus on one of these – the recommendation that the exception to consent for the use of “publicly available information” be dramatically expanded to include content shared by individuals on social media. In light of the recent Facebook/Cambridge Analytica scandal, this recommended change deserves some serious resistance.

PIPEDA already contains a carefully limited exception to consent to the collection, use or disclosure of personal information where it is “publicly available” as defined in the Regulations Specifying Publicly Available Information. These regulations identify five narrowly construed categories of publicly available information. The first is telephone directory information (but only where the subscriber has the option to opt out of being included in the directory). The second is name and contact information that is included in a professional business directory listing that is available to the public; nevertheless, such information can only be collected, used or disclosed without consent where it relates “directly to the purpose for which the information appears in the registry” (i.e. contacting the individual for business purposes). There is a similar exception for information in a public registry established by law (for example, a land titles registry); this information can similarly only be collected, used or disclosed for purposes related to those for which it appears in the record or document. Thus, consent is not required to collect land registry information for the purposes of concluding a real estate transaction. However, it is not permitted to extract personal information from such a registry, without consent, to use for marketing. A fourth category of publicly available personal information is information appearing in court or tribunal records or documents. This respects the open courts principle, but the exception is limited to collection, use or disclosure that relates directly to the purpose for which the information appears in the record or document. This means that online repositories of court and tribunal decisions cannot be mined for personal information; however, personal information can be used without consent to further the open courts principle (for example, a reporter gathering information to use in a newspaper story).

This brings us to the fifth category of publicly available information – the one ETHI would explode to include vast quantities of personal information. Currently, this category reads:

e) personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

ETHI’s recommendation is to make this provision “technologically neutral” by having it include content shared by individuals over social media. According to ETHI, a number of witnesses considered this provision to be “obsolete.” (at p. 27) Perhaps not surprisingly, these witnesses represented organizations and associations whose members would love to have unrestricted access to the contents of Canadians’ social media feeds and pages. The Privacy Commissioner was less impressed with the arguments for change. He stated: “we caution against the common misconception that simply because personal information happens to be generally accessible online, there is no privacy interest attached to it.” (at p. 28) The Commissioner recommended careful study with a view to balancing “fundamental individual and societal rights.” This cautious approach seems to have been ignored. The scope of ETHI’s proposed change is particularly disturbing given the very carefully constrained exceptions that currently exist for publicly available information. A review of the Regulations should tell any reader that this was always intended to be a very narrow exception with tightly drawn boundaries; it was never meant to create a free-for-all open season on the personal information of Canadians.

The Cambridge Analytica scandal reveals the harms that can flow from unrestrained access to the sensitive and wide-ranging types and volumes of personal information that are found on social media sites. Yet even as that scandal unfolds, it is important to note that everyone (including Facebook) seems to agree that user consent was both required and abused. What ETHI recommends is an exception that would obviate the need for consent to the collection, use and disclosure of the personal information of Canadians shared on social media platforms. This could not be more unwelcome and inappropriate.

Counsel for the Canadian Life and Health Insurance Association, in addressing ETHI, indicated that the current exception “no longer reflects reality or the expectations of the individuals it is intended to protect.” (at p. 27) A number of industry representatives also spoke of the need to make the exception “technologically neutral”, a line that ETHI clearly bought when it repeated this catch phrase in its recommendation. The facile rhetoric of technological neutrality should always be approached with enormous caution. The ‘old tech’ of books and magazines involved: a) relatively little exposure of personal information; b) carefully mediated exposure (through editorial review, fact-checking, ethical policies, etc.); and c) time and space limitations that tended to focus publication on the public interest. Social media is something completely different. It is a means of peer-to-peer communication and interaction that is entirely different in character and purpose from a magazine or newspaper. To treat it as the digital equivalent is not technological neutrality; it is technological nonsensicality.

It is important to remember that while the exception to consent for publicly available information exists in PIPEDA, the definition of its parameters is found in a regulation. Amendments to legislation require a long and public process; changes to regulations, however, can happen much more quickly and with less room for public input. This recommendation by ETHI is therefore doubly disturbing – it could have a dramatic impact on the privacy rights of Canadians, and could do so more quickly and quietly than through the regular legislative process. The Privacy Commissioner was entirely correct in stating that there should be no change to these regulations without careful consideration and a balancing of interests, and perhaps no change at all.

Published in Privacy

The recent scandal regarding the harvesting and use of the personal information of millions of Facebook users in order to direct content towards them aimed at influencing their voting behavior raises some interesting questions about the robustness of our data protection frameworks. In this case, a UK-based professor collected personal information via an app, ostensibly for non-commercial research purposes. In doing so, he was bound by terms of service with Facebook. The data collection took the form of an online quiz. Participants were paid to answer a series of questions, and in this sense they consented to and were compensated for the collection of this personal information. However, their consent was to the use of this information only for non-commercial academic research. In addition, the app was able to harvest personal information from the Facebook friends of the study participants – something which took place without the knowledge or consent of those individuals. The professor later sold his app and his data to Cambridge Analytica, which used it to target individuals with propaganda aimed at influencing their vote in the 2016 US Presidential Election.

A first issue raised by this case is a tip-of-the-iceberg issue. Social media platforms – not just Facebook – collect significant amounts of very rich data about users. They have a number of strategies for commercializing these treasure troves of data, including providing access to the platform to app developers or providing APIs on a commercial basis that give access to streams of user data. Users typically consent to some secondary uses of their personal information under the platform’s terms of service (TOS). Social media platform companies also have TOS that set the terms and conditions under which developers or API users can obtain access to the platform and/or its data. What the Cambridge Analytica case reveals is what may (or may not) happen when a developer breaches these TOS.

Because developer TOS are a contract between the platform and the developer, a major problem is the lack of transparency and the grey areas around enforcement. I have written about this elsewhere in the context of another ugly case involving social media platform data – the Geofeedia scandal (see my short blog post here, full article here). In that case, a company under contract with Twitter and other platforms misused the data it contracted for by transforming it into data analytics for police services that allowed police to target protesters against police killings of African American men. This was a breach of contractual terms between Twitter and the developer. It came to public awareness only because of the work of a third party (in that case, the ACLU of California). In the case of Cambridge Analytica, the story also only came to light because of a whistleblower (albeit one who had been involved with the company’s activities). In either instance it is important to ask whether, absent third party disclosure, the situation would ever have come to light. Given that social media companies provide, on a commercial basis, access to vast amounts of personal information, it is important to ask what, if any, proactive measures they take to ensure that developers comply with their TOS. Does enforcement only take place when there is a public relations disaster? If so, what other unauthorized exploitations of personal information are occurring without our knowledge or awareness? And should platform companies that are sources of huge amounts of personal information be held to a higher standard of responsibility when it comes to their commercial dealing with this personal information?

Different countries have different data protection laws, so in this instance I will focus on Canadian law, to the extent that it applies. Indeed, the federal Privacy Commissioner has announced that he is looking into Facebook’s conduct in this case. Under the Personal Information Protection and Electronic Documents Act (PIPEDA), a company is responsible for the personal information it collects. If it shares those data with another company, it is responsible for ensuring proper limitations and safeguards are in place so that any use or disclosure is consistent with the originating company’s privacy policy. This is known as the accountability principle. Clearly, in this case, if the data of Canadians was involved, Facebook would have some responsibility under PIPEDA. What is less clear is how far this responsibility extends. Clause 4.1.3 of Schedule I to PIPEDA reads: “An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.” [My emphasis]. One question, therefore, is whether it is enough for Facebook simply to have in place a contract that requires its developers to respect privacy laws, or whether Facebook’s responsibility goes further. Note that in this case Facebook appears to have directed Cambridge Analytica to destroy all improperly collected data. And it appears to have cut Cambridge Analytica off from further access to its data. Do these steps satisfy Facebook’s obligations under PIPEDA? It is not at all clear that PIPEDA places any responsibility on organizations to actively supervise or monitor companies with which they have shared data under contract.
It is fair to ask, therefore, whether, in cases where social media platforms share huge volumes of personal data with developers, the data-sharing framework in PIPEDA is sufficient to protect the privacy interests of the public.

Another interesting question arising from the scandal is whether what took place amounts to a data breach. Facebook has claimed that it was not a data breach – from its perspective, this is a case of a developer that broke its contract with Facebook. It is easy to see why Facebook would want to characterize the incident in this way. Data breaches can bring down a whole other level of enforcement, and can also give rise to liability in class action lawsuits for failure to properly protect the information. In Canada, new data breach notification provisions (which have still not come into effect under PIPEDA) would impose notification requirements on an organization that experienced a breach. It is interesting to note, though, that the data breach notification requirements are triggered where there is a “real risk of significant harm to an individual” [my emphasis]. Given what has taken place in the Cambridge Analytica scandal, it is worth asking whether the drafters of this provision should have included a real risk of significant harm to the broader public. In this case, the personal information was used to subvert democratic processes, something that is a public rather than an individual harm.

The point about public harm is an important one. In both the Geofeedia and the Cambridge Analytica scandals, the exploitation of personal information was on such a scale and for such purposes that although individual privacy may have been compromised, the greater harms were to the public good. Our data protection model is based upon consent, and places the individual and his or her choices at its core. Increasingly, however, protecting privacy serves goals that go well beyond the interests of any one individual. Not only is the consent model broken in an era of ubiquitous and continuous collection of data, it is inadequate to address the harms that come from improper exploitation of personal information in our big data environment.


In February 2018 the Standing Committee on Access to Information, Privacy and Ethics (ETHI) issued its report based on its hearings into the state of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA). The Committee hearings were welcomed by many in Canada’s privacy community who felt that PIPEDA had become obsolete and unworkable as a means of protecting the personal information of Canadians in the hands of the private sector. The report, titled Towards Privacy by Design: Review of the Personal Information Protection and Electronic Documents Act, seems to come to much the same conclusion. ETHI ultimately makes recommendations for a number of changes to PIPEDA, some of which could be quite significant.

This blog post is the first in a series that looks at the ETHI Report and its recommendations. It addresses the issue of consent.

The enactment of PIPEDA in 2001 introduced a consent-based model for the protection of personal information in the hands of the private sector in Canada. The model has at its core a series of fair information principles that are meant to guide businesses in shaping their collection, use and disclosure of personal information. Consent is a core principle; other principles support consent by ensuring that individuals have adequate and timely notice of the collection of personal information and are informed of the purposes of collection.

Unfortunately, the principle of consent has been drastically undermined by advances in technology and by a dramatic increase in the commercial value of personal information. In many cases, personal information is now actual currency and not just the by-product of transactions, changing the very fundamentals of the consent paradigm. In the digital environment, the collection of personal information is also carried out continually. Not only is personal information collected with every digital interaction, it is collected even while people are not specifically interacting with organizations. For example, mobile phones and their myriad apps collect and transmit personal information even while not in use. Increasingly networked and interconnected appliances, entertainment systems, digital assistants and even children’s toys collect and communicate steady streams of data to businesses and their affiliates.

These developments have made individual consent somewhat of a joke. There are simply too many collection points and too many privacy policies for consumers to read. Most of these policies are incomprehensible to ordinary individuals; many are entirely too vague when it comes to information use and sharing; and individuals can easily lose sight of consents given months or years previously to apps or devices that are largely forgotten but that nevertheless continue to harvest personal information in the background. Managing consent in this environment is beyond the reach of most. To add insult to injury, the resignation felt by consumers without meaningful options for consent is often interpreted as a lack of interest in privacy. As new uses (and new markets) for personal information continue to evolve, it is clear that the old model of consent is no longer adequate to serve the important privacy interests of individuals.

The ETHI Report acknowledges the challenges faced by the consent model; it heard from many witnesses who identified problems with consent and many who proposed different models or solutions. Ultimately, however, ETHI concludes that “rather than overhauling the consent model, it would be best to make minor adjustments and let the stakeholders – the Office of the Privacy Commissioner (OPC), businesses, government, etc. – adapt their practices in order to maintain and enhance meaningful consent.”(at p. 20)

The fact that the list of stakeholders does not include the public – those whose personal information and privacy are at stake – is telling. It signals ambivalence about the importance of privacy within the PIPEDA framework. In spite of being an interest hailed by the Supreme Court of Canada as quasi-constitutional in nature, privacy is still not approached by Parliament as a human right. The prevailing legislative view seems to be that PIPEDA is meant to facilitate the exchange of personal information with the private sector; privacy is protected to the extent that it is necessary to support public confidence in such exchanges. The current notion of consent places a significant burden on individuals to manage their own privacy and, by extension, places any blame for oversharing on poor choices. It is a cynically neo-liberal model of regulation in which the individual ultimately must assume responsibility for their actions notwithstanding the fact that the deck has been so completely and utterly stacked against them.

The OPC recently issued a report on consent which also recommended the retention of consent as a core principle, but recognized the need to take concrete steps to maintain its integrity. The OPC recommendations included using technological tools, developing more accessible privacy policies, adjusting the level of consent required to the risk of harm, creating no-go zones for the use of personal information, and enhancing privacy protection for children. ETHI’s rather soft recommendations on consent may be premised on an understanding that much of this work will go ahead without legislative change.

Among the minor adjustments to consent recommended by ETHI is that PIPEDA be amended to make opt-in consent the default for any use of personal information for secondary purposes. This means that while there might be opt-out consent for the basic services for which a consumer is contracting (in other words, if you provide your name and address for the delivery of an item, it can be assumed you are consenting to the use of the information for that purpose), consumers must agree to the collection, use or disclosure of their personal information for secondary or collateral purposes. ETHI’s recommendation also indicates that opt-in consent might eventually become the norm in all circumstances. Such a change may have some benefits. Opt-out consent is invidious. Think of social media platform default settings that enable a high level of personal information sharing, leaving consumers to find and adjust these settings if they want greater protection for their privacy. An opt-in consent requirement might be particularly helpful in addressing such problems. Nevertheless, it will not be much use in the context of long, complex (and largely unread) privacy policies. Many such policies ask consumers to consent to a broad range of uses and disclosures of personal information, including secondary purposes described in the broadest of terms. A shift to opt-in consent will not help if agreeing to a standard set of unread terms amounts to opting in.

ETHI also considered whether and how individuals should be able to revoke their consent to the collection, use or disclosure of their personal information. The issues are complex. ETHI gave the example of social media, where information shared by an individual might be further disseminated by many others, making it challenging to give effect to a revocation of consent. ETHI recommends that the government “study the issue of revocation of consent in order to clarify the form of revocation required and its legal and practical implications”.

ETHI also recommended that the government consider specific rules around consent for minors, as well as the collection, use and disclosure of their personal information. Kids use a wide range of technologies, but may be particularly vulnerable because of a limited awareness of their rights and recourses, as well as of the long-term impacts of personal information improvidently shared in their youth. The issues are complex and worthy of further study. It is important to note, however, that requiring parental consent is not an adequate solution if the basic framework for consent is not addressed. Parents themselves may struggle to understand the technologies and their implications and may be already overwhelmed by multiple long and complex privacy policies. The second part of the ETHI recommendation which speaks to specific rules around the collection, use and disclosure of the personal information of minors may be more helpful in addressing some of the challenges in this area. Just as we have banned some forms of advertising directed at children, we might also choose to ban some kinds of collection or uses of children’s personal information.

In terms of enhancing consent, these recommendations are thin on detail and do not provide a great deal of direction. They seem to be informed by a belief that a variety of initiatives to enhance consent through improved privacy policies (including technologically enhanced policies) may suffice. They are also influenced by concerns expressed by business about the importance of maintaining the ‘flexibility’ of the current regime. While there is much that is interesting elsewhere within the ETHI report, the discussion of consent feels incomplete and disappointing. Minor adjustments will not make a major difference.

Up next: One of the features of PIPEDA that has proven particularly challenging when it comes to consent is the ever-growing list of exceptions to the consent requirement. In my next post I will consider ETHI’s recommendations that would add to that list, and that also address ‘alternatives’ to consent.


The Office of the Privacy Commissioner of Canada has released its Draft Position on Online Reputation. It’s an important issue and one that is of great concern to many Canadians. In the Report, the OPC makes recommendations for legislative change and proposes other measures (education, for example) to better protect online reputation. However, the report has also generated considerable controversy for the position it has taken on how the Personal Information Protection and Electronic Documents Act currently applies in this context. In this post I will focus on the Commissioner’s expressed view that PIPEDA applies to search engine activities in a way that would allow Canadians to request the de-indexing of personal information from search engines, with the potential to complain to the Commissioner if these demands are not met.

PIPEDA applies to the collection, use and disclosure of personal information in the course of commercial activity. The Commissioner reasons, in this report, that search engines are engaged in commercial activity, even if search functions are free to consumers. An example is the placement of ads in search results. According to the Commissioner, because search engines can provide search results that contain (or lead to) personal information, these search engines are collecting, using and disclosing personal information in the course of commercial activity.

With all due respect, this view seems inconsistent with current case law. In 2010, the Federal Court in State Farm Mutual Automobile Insurance Co. v. Canada (Privacy Commissioner) ruled that an insurance company that collected personal information on behalf of an individual it was representing in a lawsuit was not collecting that information in the course of commercial activity. This was notwithstanding the fact that the insurance company was a commercial business. The Court was of the view that, in essence, the information was being collected on behalf of a private person (the defendant) so that he could defend a legal action (a private and non-commercial matter to which PIPEDA did not apply). Quite tellingly, at para 106, the court stated: “if the primary activity or conduct at hand, in this case the collection of evidence on a plaintiff by an individual defendant in order to mount a defence to a civil tort action, is not a commercial activity contemplated by PIPEDA, then that activity or conduct remains exempt from PIPEDA even if third parties are retained by an individual to carry out that activity or conduct on his or her behalf.”

The same reasoning applies to search engines. Yes, Google makes a lot of money, some of which comes from its search engine functions. However, the search engines are there for anyone to use, and the relevant activities, for the purposes of the application of PIPEDA, are those of the users. If a private individual carries out a Google search for his or her own purposes, that activity does not amount to the collection of personal information in the course of commercial activity. If a company does so for its commercial purposes, then that company – and not Google – will have to answer under PIPEDA for the collection, use or disclosure of that personal information. The view that Google is on the hook for all searches is not tenable. It is also problematic for the reasons set out by my colleague Michael Geist in his recent post.

I also note with some concern the way in which the “journalistic purposes” exception is treated in the Commissioner’s report. This exception is one of several designed to balance privacy with freedom of expression interests. In this context, the argument is that a search engine facilitates access to information, and is a tool used by anyone carrying out online research. This is true, and for the reasons set out above, PIPEDA does not apply unless that research is carried out in the course of commercial activities to which the statute would apply. Nevertheless, in discussing the exception, the Commissioner states:

Some have argued that search engines are nevertheless exempt from PIPEDA because they serve a journalistic or literary function. However, search engines do not distinguish between journalistic/literary material. They return content in search results regardless of whether it is journalistic or literary in nature. We are therefore not convinced that search engines are acting for “journalistic” or “literary” purposes, or at least not exclusively for such purposes as required by paragraph 4(2)(c).

What troubles me here is the statement that “search engines do not distinguish between journalistic and literary material”. They don’t need to. The nature of what is sought is not the issue. The issue is the purpose. If an individual uses Google in the course of non-commercial activity, PIPEDA does not apply. If a journalist uses Google for journalistic purposes, PIPEDA does not apply. The nature of the content that is searched is immaterial. The quote goes on to talk about whether search engines act for journalistic or literary purposes – that too is not the point. Search engines are tools. They are used by actors. It is the purposes of those actors that are material, and it is to those actors that PIPEDA will apply – if they are collecting, using or disclosing personal information in the course of commercial activity.

The Report is open for comment until April 19, 2018.


Canada’s Federal Court of Appeal has handed down a decision that addresses important issues regarding control over commercially valuable data. The decision results from an appeal of an earlier ruling of the Competition Tribunal regarding the ability of the Toronto Real Estate Board (TREB) to limit the uses to which its compilation of current and historical property listings in the Greater Toronto Area (GTA) can be put.

Through its operations, the TREB compiles a vast database of real estate listings. Information is added to the database on an ongoing basis by real estate brokers who contribute data each time a property is listed with them. Real estate agents who are members of TREB in turn receive access to a subset of this data via an electronic feed. They are permitted to make this data available through their individual websites. However, the TREB does not permit all of its data to be shared through this feed; some data is available only through other means such as in-person consultation, or communications of snippets of data via email or fax.

The dispute arose after the Competition Commissioner applied to the Competition Tribunal for a ruling as to whether the limits imposed by the TREB on the data available through the electronic feed inhibited the ability of “virtual office websites” (VOWs) to compete with more conventional real estate brokerages. The tribunal ruled that they did, and the matter was appealed to the Federal Court of Appeal. Although the primary focus of the Court’s decision was on the competition issues, it also addressed questions of privacy and copyright law.

The Federal Court of Appeal found that the TREB’s practices of restricting available data – including information on the selling price of homes – had anticompetitive effects that limited the range of broker services that were available in the GTA, limited innovation, and had an adverse impact on entry into and expansion of relevant markets. This aspect of the decision highlights how controlling key data in a sector of the economy can amount to anti-competitive behavior. Data are often valuable commercial assets; too much exclusivity over data may, however, pose problems. Understanding the limits of control over data is therefore an important and challenging issue for businesses and regulators alike.

The TREB had argued that one of the reasons why it could not provide certain data through its digital feed was because these data were personal information and it had obligations under the Personal Information Protection and Electronic Documents Act to not disclose this information without appropriate consent. The TREB relied on a finding of the Office of the Privacy Commissioner of Canada that the selling price of a home (among those data held back by TREB) was personal information because it could lead to inferences about the individual who sold the house (e.g.: their negotiating skills, the pressure on them to sell, etc.).

The Court noted that the TREB already shared the information it collected with its members. Information that was not made available through the digital feed was still available through more conventional methods. In fact, the Court noted that the information was very widely shared. It ruled that the consent provided by individuals to this sharing of information would apply to the sharing of the same information through a digital feed. It stated: “PIPEDA only requires new consent where information is used for a new purpose, not where it is distributed via new methods. The introduction of VOWs is not a new purpose – the purpose remains to provide residential real estate services [. . .].” (at para 165) The Court’s decision was influenced by the fact that the consent form was very broadly worded. Through it, TREB obtained consent to the use and dissemination of the data “during the term of the listing and thereafter.”

This conclusion is interesting, as many have argued that the privacy impacts are different depending on how information is shared or disseminated. In other words, it could have a significant impact on privacy if information that is originally shared only on request is later published on the Internet. Consent to disclosure of the information using one medium might not translate into consent to a much broader disclosure.
However, the Court’s decision should be read in the context of both the very broad terms of the consent form and the very significant level of disclosure that was already taking place. The Court’s statement that “PIPEDA only requires new consent where information is used for a new purpose, not where it is distributed via new methods” should not be taken to mean that new methods of distribution can never reflect new purposes that go beyond the original consent.

The Federal Court of Appeal also took note of the Supreme Court of Canada’s recent decision in Royal Bank of Canada v. Trang. In the course of deciding whether to find implied consent to a disclosure of personal information, the Supreme Court of Canada had ruled that while the balance owing on a mortgage was personal information, it was less sensitive than other financial information because the original amount of the mortgage, the rate of interest and the due date for the mortgage were all publicly available information from which an estimate of the amount owing could be derived. The Federal Court of Appeal found that the selling price of a home was similarly capable of being derived from other publicly available data sources and was thus not particularly sensitive personal information.

In addition to finding that there would be no breach of PIPEDA, the Federal Court of Appeal seemed to accept the Tribunal’s view that the TREB was using PIPEDA in an attempt to avoid wider sharing of its data, not because of concerns for privacy, but in order to maintain its control over the data. It found that TREB’s conduct was “consistent with the conclusion that it considered the consents were sufficiently specific to be compliant with PIPEDA in the electronic distribution of the disputed data on a VOW, and that it drew no distinction between the means of distribution.” (at para 171)

Finally, the Competition Tribunal had ruled that the TREB did not have copyright in its compilation of data because the compilation lacked sufficient originality in the selection or arrangement of the underlying data. Copyright in a compilation depends upon this originality in selection or arrangement because facts themselves are in the public domain. The Federal Court of Appeal declined to decide the copyright issue since the finding that the VOW policy was anti-competitive meant that copyright could not be relied upon as a defence. Nevertheless, it addressed the copyright question in obiter (meaning that its comments are merely opinion and not binding precedent).

The Federal Court of Appeal noted that the issue of whether there is copyright in a compilation of facts is a “highly contextual and factual determination” (at para 186). The Court of Appeal took note of the Tribunal’s findings that “TREB’s specific compilation of data from real estate listings amounts to a mechanical exercise” (at para 194), and agreed that the threshold for originality was not met. The Federal Court of Appeal dismissed the relevance of TREB’s arguments about the ways in which its database was used, noting that “how a “work” is used casts little light on the question of originality.” (at para 195) The Court also found no relevance to the claims made in TREB’s contracts to copyright in its database. Claiming copyright is one thing, establishing it in law is quite another.

 

Note that leave to appeal this decision to the Supreme Court of Canada was denied on August 23, 2018.

 

Published in Copyright Law

An Ontario small claims court judge has found in favour of a plaintiff who argued that her privacy rights were violated when a two-second video clip of her jogging on a public path was used by the defendant media company in a sales video for a real-estate development client. The plaintiff testified that she had been jogging so as to lose the weight that she had gained after having children. She became aware of the video when a friend drew her attention to it on YouTube, and the image “caused her discomfort and anxiety” (para 5). Judge Leclaire noted that the “image of herself in the video is clearly not the image she wished portrayed publicly”.

At the time of the filming, the defendant’s practice was to seek consent to appear in its videos from people who were filmed in private spaces, but not to do so where people were in public places. The defendant’s managing associate testified that if people in public places “see the camera and continue moving, consent is implied.” (at para 9) The judge noted that it was not established how it could be known whether individuals saw the camera. The plaintiff testified that she had seen the camera, and had attempted to shield her face from view; she believed that this demonstrated that she did not wish to be filmed.

Although the defendant indicated that the goal was to capture the landscape and not the people, the judge found that “people are present and central to the location and the picture.” (at para 10) The judge found that the photographer deliberately sought to include an image of someone engaging in the activity of jogging alongside the river. Although the defendant argued that it would not be practical to seek consent from the hundreds of people who might be captured in a video of a public space, the judge noted that in the last two years, the defendant company had “tightened up” its approach to seeking consent, and now approached people in public areas prior to filming to seek their consent to appear in any resulting video.

The plaintiff argued that there had been a breach of the tort of intrusion upon seclusion, which was first recognized in Ontario by the Ontario Court of Appeal in Jones v. Tsige in 2012. Judge Leclaire stated that the elements of the tort require 1) that the defendant’s actions are intentional or reckless; 2) that there is no lawful justification for the invasion of the plaintiff’s private affairs or concerns; and 3) that the invasion is one that a reasonable person would consider to be “highly offensive causing distress, humiliation or anguish.” (Jones at para 71) Judge Leclaire found that these elements of the tort were made out on the facts before him. The defendant’s conduct in filming the video was clearly intentional. He also found that a reasonable person “would regard the privacy invasion as highly offensive”, noting that “the plaintiff testified as to the distress, humiliation or anguish that it caused her.” (at para 16)

Judge Leclaire clearly felt that the defendant had crossed a line in exploiting the plaintiff’s image for its own commercial purposes. Nevertheless, there are several problems with his application of the tort of intrusion upon seclusion. Not only does he meld the objective “reasonable person” test with a subjective test of the plaintiff’s own feelings about what happened, but his decision that capturing the image of a person jogging on a public pathway is an intrusion upon seclusion is in marked contrast to the statement of the Ontario Court of Appeal in Jones v. Tsige, that the tort is relatively narrow in scope:

 

A claim for intrusion upon seclusion will arise only for deliberate and significant invasions of personal privacy. Claims from individuals who are sensitive or unusually concerned about their privacy are excluded: it is only intrusions into matters such as one's financial or health records, sexual practises and orientation, employment, diary or private correspondence that, viewed objectively on the reasonable person standard, can be described as highly offensive. (at para 72)

 

Judge Leclaire provides relatively little discussion about how to address the capture of images of individuals carrying out activities in public spaces. Some have suggested that there is simply no privacy in public space, while others have called for a more contextual inquiry. Such an inquiry was absent in this case. Instead, Judge Leclaire relied upon Aubry v. Vice-Versa, a decision of the Supreme Court of Canada, even though that decision was squarely based on provisions of Quebec law which have no real equivalent in common law Canada. The right to one’s image is specifically protected by art. 36 of the Quebec Civil Code, which provides that it is an invasion of privacy to use a person’s “name, image, likeness or voice for a purpose other than the legitimate information of the public”. There is no comparable provision in Ontario law, although the use of one’s name, image or likeness in an advertisement might amount to the tort of misappropriation of personality. In fact, with almost no discussion, Judge Leclaire also found that this tort was made out on the facts and awarded $100 for the use of the plaintiff’s image without permission. It is worth noting that the tort of misappropriation of personality has typically required that a person have acquired some sort of marketable value in their personality in order for there to be a misappropriation of that value.

Judge Leclaire awarded $4,000 in damages for the breach of privacy, which seems an exorbitant amount given the range of damages normally awarded in privacy cases in common law Canada. In this case, the plaintiff was featured in a two-second clip in a two-minute video that was taken down within a week of being posted. While there might be some basis to argue that other damage awards have been too low, this one seems surprisingly high.

It is also worth noting that the facts of this case might constitute a breach of the Personal Information Protection and Electronic Documents Act (PIPEDA) which governs the collection, use or disclosure of personal information in the course of commercial activity. PIPEDA also provides recourse in damages, although the road to the Federal Court is a longer one, and that court has been parsimonious in its awards of damages. Nevertheless, given that Judge Leclaire’s preoccupation seems to be with the unconsented-to use of the plaintiff’s image for commercial purposes, PIPEDA seems like a better fit than the tort of intrusion upon seclusion.

Ultimately, this is a surprising decision and seems out of line with a growing body of case law on the tort of intrusion upon seclusion. As a small claims court decision, it will carry little precedential value. The case is therefore perhaps best understood as one involving a person who was jogging at the wrong place at the wrong time, but who sued in the right court at the right time. Nevertheless, it should serve as a warning to those who make commercial use of footage filmed in public spaces, as it reflects a perspective that not all activities in public spaces are ‘public’ in the fullest sense of the word. It highlights as well the increasingly chaotic privacy legal landscape in Canada.

 

Published in Privacy

Metrolinx is the Ontario government agency that runs the Prestocard service used by public transit authorities in Toronto, Ottawa and several other Ontario municipalities. It ran into some trouble recently after the Toronto Star revealed that the organization shared Prestocard data from its users with police without requiring warrants (judicial authorization). The organization has now published its proposals for revising its privacy policies and is soliciting comment on them. (Note: Metrolinx has structured its site so that you can only view one of the three proposed changes at a time and must indicate your satisfaction with it and/or your comments before you can view the next proposal. This is problematic because the changes need to be considered holistically. It is also frankly annoying).

The new proposals do not eliminate the sharing of rider information with state authorities without a warrant. Under the new proposals, information will be shared without a warrant in certain exigent circumstances. It will also be shared without a warrant “in other cases, where we are satisfied it will aid in an investigation from which a law enforcement proceeding may be undertaken or is likely to result.” The big change is thus apparently in the clarity of the notice given to users about the sharing – not in the sharing itself.

This flabby and open-ended language is taken more or less directly from the province’s Freedom of Information and Protection of Privacy Act (FOIPPA), which governs the public sector’s handling of personal information. As a public agency, Metrolinx is subject to FOIPPA. It is important to note that the Act permits (but does not require) government entities to share information with law enforcement in precisely the circumstances outlined in the policy. However, by adapting its policy to what it is permitted to do, rather than to what it should do, Metrolinx is missing two important points. The first is that the initial outrage over its practices was about information sharing without a warrant, and not about poor notice of such practices. The second is that doing a good job of protecting privacy sometimes means aiming for the ceiling and not the floor.

Location information is generally highly sensitive information as it can reveal a person’s movements, activities and associations. Police would normally need a warrant to obtain this type of information. It should be noted that police are not relieved of their obligations to obtain warrants when seeking information that raises a reasonable expectation of privacy just because a statute permits the sharing of the information. It would be open to the agency to require that a warrant be obtained prior to sharing sensitive customer location data. It is also important to note that some courts have found that the terms of privacy policies may actually alter the reasonable expectation of privacy – particularly when clear notice is given. In other words, even though we might have a reasonable expectation of privacy in location data about our movements, a privacy policy that tells us clearly that this information is going to be shared with police without a warrant could substantially undermine that expectation of privacy. And all of this happens without any ability on our part to negotiate for terms of service,[1] and in the case of a monopoly service such as public transportation, to choose a different provider.

Metrolinx no doubt expects its users to be comforted by the other changes to its policies. It already has some safeguards in place to minimize the information provided to police and to log any requests and responses. It plans to require, in addition, a sign-off by the requesting officer and supervisor. Finally, it plans to issue voluntary transparency reports as per the federal government’s Transparency Reporting Guidelines. Transparency reporting is certainly important, as it provides a window onto the frequency with which information sharing takes place. However, these measures do not correct for an upfront willingness to share sensitive personal information without judicial authorization – particularly in cases where there are no exigent circumstances.

As we move more rapidly towards sensor-laden smart cities in which the consumption of basic services and the living of our daily lives will leave longer and longer plumes of data exhaust, it is important to reflect not just on who is collecting our data and why, but on the circumstances in which they are willing to share that data with others – including law enforcement officials. The incursions on privacy are many and from all directions. Public transit is a basic municipal service. It is also one that is essential for lower-income residents, including students.[2] Transit users deserve more robust privacy protections.

Notes:

[1] A recent decision of the Ontario Court of Appeal does seem to consider that the inability to negotiate for terms of service should be taken into account when assessing the impact of those terms on the reasonable expectation of privacy. See: R. v. Orlandis-Habsburgo.

[2] Some universities and colleges have U-Pass agreements which require students to pay additional fees in exchange for Prestocard passes. Universities and colleges should, on behalf of their students, be insisting on more robust privacy.




Published in Privacy

In the 2010-2011 school year, a teacher at a London, Ontario high school used a pen camera to make surreptitious video recordings of female students, with a particular emphasis on their cleavage and breasts. A colleague noticed his activity and reported it to the principal, who confiscated the pen camera and called the police. The police found 19 videos on the camera’s memory card, featuring 30 different individuals, 27 of whom were female. A warrant was obtained a week later to search the teacher’s home – the police found nothing beyond a computer mysteriously missing its hard drive. The teacher was ultimately charged with voyeurism.

The offence of voyeurism requires that there be a surreptitious observation (recorded or not) of a “person who is in circumstances that give rise to a reasonable expectation of privacy”. It also requires that the “observation or recording is done for a sexual purpose” (Criminal Code, s. 162(1)(c)). The trial judge had found that the students had a reasonable expectation of privacy in the circumstances, but he inexplicably found that the Crown had not met its burden of showing, beyond a reasonable doubt, that the recordings of their cleavage and breasts were made for a sexual purpose. He stated: “While a conclusion that the accused was photographing the student’s [sic] cleavage for a sexual purpose is most likely, there may be other inferences to be drawn that detract from the only rationale [sic] conclusion required to ground a conviction for voyeurism.” (Trial Decision at para 77) He did not provide any information about what those other inferences might conceivably be.

On appeal, the Crown argued that the trial judge had erred in finding that the filming was not done for a sexual purpose. All of the appellate judges agreed that the judge had indeed erred. The majority noted that the trial judge had failed to identify any other possible inferences in his reasons. They also noted that his description of the teacher’s behavior as “morally repugnant” was “inconsistent with the trial judge’s conclusion that the videos might not have been taken for a sexual purpose.” (Court of Appeal decision at para 47) The majority noted that “[t]his was an overwhelming case of videos focused on young women’s breasts and cleavage” (at para 53), and they concluded that there was no reasonable inference other than that the videos were taken for a sexual purpose. Clearly, the teacher was not checking for skin cancer.

However, the accused had appealed the trial judge’s finding that the students had a reasonable expectation of privacy. The majority of the Court of Appeal agreed, leading to the overall appeal of his acquittal being dismissed. The majority’s reasoning is disturbing, and has implications for privacy more broadly. In determining what a ‘reasonable expectation of privacy’ entailed, the majority relied on a definition of privacy from the Oxford English Dictionary. That learned non-legal tome defines privacy as “a state in which one is not observed or disturbed by other people; the state of being free from public attention.” (at para 93). From this, the majority concluded that location was a key component of privacy. They stated: “A person expects privacy in places where the person can exclude others, such as one’s home or office, or a washroom. It is a place where a person feels confident that they are not being observed.” (at para 94) The majority accepted that there might be some situations in which a person has an expectation of privacy in a public setting, but these would be limited. They gave the example of upskirting as one “where a woman in a public place had a reasonable expectation of privacy that no one would look under her skirt” (at para 96). Essentially, the tent of a woman’s skirt is a private place within a public one.

The trial judge had found a reasonable expectation of privacy in the circumstances on the basis that a student would expect that a teacher would not “breach their relationship of trust by surreptitiously recording them without there [sic] consent.” (at para 103). According to the majority, this conflated the reasonable expectation of privacy with the act of surreptitious recording. They stated: “Clearly students expect that a teacher will not secretly observe or record them for a sexual purpose at school. However, that expectation arises from the nature of the required relationship between students and teachers, not from an expectation of privacy.” (at para 105) This approach ignores the fact that the nature of the relationship is part of the context in which the reasonableness of the expectation of privacy must be assessed. The majority flattened the concept of reasonable expectation of privacy to one consideration – location. They stated that “if a person is in a public place, fully clothed and not engaged in toileting or sexual activity, they will normally not be in circumstances that give rise to a reasonable expectation of privacy.” (at para 108)

Justice Huscroft, in dissent is rightly critical of this impoverished understanding of the reasonable expectation of privacy. He began by situating privacy in its contemporary and technological context: “Technological developments challenge our ability to protect privacy: much that was once private because it was inaccessible is now easily accessible and capable of being shared widely.” (at para 116). He observed that “whether a person has a reasonable expectation of privacy is a normative or evaluative question rather than a descriptive or predictive one. It is concerned with identifying a person’s legitimate interests and determining whether they should be given priority over competing interests. To say that a person has a reasonable expectation of privacy in some set of circumstances is to conclude that his or her interest in privacy should be prioritized over other interests.” (at para 117)

Justice Huscroft was critical of the majority’s focus on location as a means of determining reasonable expectations of privacy. He found that the majority’s approach – defining spaces where privacy could reasonably be expected – was both over and under-inclusive. He noted that there are public places in which people have an expectation of privacy, even if that expectation is attenuated. He gave the example of a woman breastfeeding in public. He stated: “Privacy expectations need not be understood in an all-or-nothing fashion. In my view, there is a reasonable expectation that she will not be visually recorded surreptitiously for a sexual purpose. She has a reasonable expectation of privacy at least to this extent.” (at para 125) Justice Huscroft also noted that the majority’s approach was over-inclusive, in that while a person has a reasonable expectation of privacy in their home, it might be diminished if they stood in front of an open window. While location is relevant to the privacy analysis, it should not be determinative.

Justice Huscroft found that the question to be answered in this case was “should high school students expect that their personal and sexual integrity will be protected while they are at school?” (at para 131). He noted that schools were not fully public in the sense that school officials controlled access to the buildings. While the school in question had 24-hour video surveillance, the cameras did not focus on particular students or particular body parts. No access was permitted to the recordings for personal use. The school board had a policy in place that prohibited teachers from making the types of recordings made in this case. All of these factors contributed to the students’ reasonable expectation of privacy. He wrote:

No doubt, students will be seen by other students, school employees and officials while they are at school. But this does not mean that they have no reasonable expectation of privacy. In my view, the students' interest in privacy is entitled to priority over the interests of anyone who would seek to compromise their personal and sexual integrity while they are at school. They have a reasonable expectation of privacy at least to this extent, and that is sufficient to resolve this case. (at para 133)

Justice Huscroft observed that the majority’s approach that requires the reasonable expectation of privacy to be considered outside of the particular context in which persons find themselves would unduly limit the scope of the voyeurism offence.

This case provides an ugly and unfortunate window on what women can expect from the law when it comes to voyeurism and other related offences. In the course of his reasons, the trial judge stated that “[i]t may be that a female student’s mode of attire may attract a debate about appropriate reactions of those who observe such a person leading up to whether there is unwarranted and disrespectful ogling” (Trial decision, at para 46). The issue is not just about public space, it is about the publicness of women’s bodies. The accused was acquitted at trial because of the trial judge’s baffling conclusion that the teacher might have had some motive – other than a sexual one – in making the recordings of female students’ breasts and cleavage. Although the Court of Appeal corrected this error, the majority found that female students at high school do not have a reasonable expectation of privacy when it comes to having their breasts surreptitiously filmed by their teachers (who are not allowed, under school board policies, to engage in such activities). The majority fixates on location as the heart of the reasonable expectation of privacy, eschewing a more nuanced approach that would consider those things that actually inform our expectations of privacy.

 

Published in Privacy

The long-term care context is one where privacy interests of employees can come into conflict with the interests of residents and their families. Recent reported cases of abuse in long-term care homes captured on video camera only serve to highlight the tensions regarding workplace surveillance. A June 2017 decision of the Quebec Court of Appeal, Vigi Santé ltée c. Syndicat québécois des employées et employés de service section locale 298 (FTQ), considers the workplace privacy issues in a context where cameras were installed by the family members of a resident and not by the care facility.

The facts of the case were fairly straightforward. The camera was installed by the family of a resident of a long-term care facility, but not because of any concerns about potential abuse. Two of the resident’s children live abroad and the camera provided them with a means of maintaining contact with their mother. The camera could be used in conjunction with Skype, and one of the resident’s children present in Quebec regularly used Skype to receive updates about his mother from the private personal care worker they also paid to be with their mother for part of the day, six days a week. The camera provided a live feed but did not record images. The operators of the long-term care facility did not have access to the feed. The employees of the facility were informed of the presence of the camera and none objected to it. The privately hired personal care worker was often present when staff provided care, and the court noted that there were no complaints about the presence of this companion. The family never complained about the services provided to the resident; in fact, they indicated that they were very satisfied. The resident had been in two other facilities prior to moving to this one; similar cameras had been used in those facilities.

The employees’ union challenged the installation of the camera, and two questions were submitted to an arbitrator for determination. The first question was whether the employer could permit the family members of a resident to install a camera in the resident’s room for the sole purpose of allowing family members to see the resident. The second was whether the employer could permit family members to install a camera in the room of a resident with the goal of overseeing the activities of employee caregivers. The arbitrator had ruled that, as far as employees were concerned, in both cases the camera was a surveillance camera. He went on to find that the employer had no justification in the circumstances for carrying out surveillance on its employees. Judicial review of this decision was sought, and a judge of the Quebec Superior Court confirmed the decision. It was appealed to the Court of Appeal.

Under the principles of judicial review, an arbitrator’s decision can only be overturned if it is unreasonable. The Court of Appeal split on this issue with the majority finding the decision to have been unreasonable. The majority emphasized that the arbitrator had found that the family’s motivation for installing the camera was not to carry out surveillance on the staff, and also highlighted the fact that none of the staff had complained about the presence of the camera.

Although the majority agreed that the privacy guarantees of the Quebec Charter of Human Rights and Freedoms protected employees against unjustified workplace surveillance by their employer, they found that the camera installed by the family for the purpose of maintaining contact with a loved one did not constitute employee surveillance. Further, it was not carried out by the employer. They noted in particular the fact that the images were not recorded and the feed was not accessible to the employer. The majority criticized the arbitrator for characterizing the family’s decision to install the camera as being motivated by a disproportionate concern (“une inquiétude démesurée”) over their mother’s well-being, because there was no evidence of any mistreatment.

The majority cited jurisprudence to support its view that a camera that captured activities of workers was not necessarily a surveillance camera. It noted several Quebec arbitration cases in which arbitrators determined that cameras installed by employers to provide security or to protect against industrial espionage were permissible, notwithstanding the fact that they also captured the activities of employees. Any surveillance of employees was incidental to a different and legitimate objective of the employer.

The majority went further, noting that in this case the issue was whether an individual (or their family) had a right to install a camera in their own living space. For the majority, it was significant that the care home was the resident’s permanent living space because she had lost her ability to live on her own. The camera allowed her to remain in greater contact with her loved ones, including two children who lived abroad. They considered that the family’s choice in this matter had to be given its due weight, and found that the arbitrator should have ruled, in answer to the first question, that the employer could permit the installation of a camera by family members for the purpose of allowing them to maintain contact with the resident.

The second question related to the rights of family members to install cameras with the goal of carrying out surveillance on caregivers. The majority declined to answer this question because the facts did not provide a sufficient context on which to base a decision. The Court noted that the answer would depend on circumstances, which might include whether there had already been complaints or reported concerns, the nature and extent of notice provided to employees, and so on.

Justice Giroux, in dissent, found that it was reasonable for the arbitrator to have characterized the camera as a surveillance camera. The arbitrator had noted that the camera was placed in such a way as to allow for a continuous view of all care provided by employees to the resident. The image was clear enough to identify employees, and in some cases they could be heard. While there was no recording of the feed, it was possible to create still photographs through screen capture. The arbitrator had also turned his attention to the special nature of the care home, noting that it was a home to residents but at the same time a workplace for the employees. The workplace was governed by a collective agreement, and disputes about working conditions were meant to be resolved by an arbitrator, meaning that courts should exercise deference on review. The arbitrator had found that, by permitting the installation of the camera by the family of the resident, the employer had adopted as its own the family’s reasons for doing so, and was responsible for establishing that the level of surveillance was consistent with the Quebec Charter. The arbitrator had found that the family members had demonstrated a disproportionate level of concern, and that this could not be a basis for permitting workplace surveillance. Justice Giroux concluded that the arbitrator’s decision should have been upheld.


Published in Privacy

In R. v. Orlandis-Habsburgo the Ontario Court of Appeal revisited the Supreme Court of Canada decisions in R. v. Spencer, R. v. Gomboc, and R. v. Plant. The case involved the routine sharing of energy consumption data between an electricity provider and the police. Horizon Utilities Corp. (Horizon) had a practice of regularly reviewing its customers’ energy consumption records, including monthly consumption figures as well as patterns of consumption throughout the day. When Horizon encountered data suggestive of marijuana grow operations, it would send the data to the police. This is what occurred in Orlandis-Habsburgo. The police responded by requesting and obtaining additional information from Horizon. They then conducted observations of the accused’s premises. The police used a combination of data provided by Horizon and their own observation data to obtain a search warrant, which ultimately led to charges against the accused, who were convicted at trial.

The defendants appealed their convictions, arguing that their rights under s. 8 of the Canadian Charter of Rights and Freedoms had been infringed when the police obtained data from Horizon without a warrant. The trial judge had dismissed these arguments, finding that the data were not part of the “biographical core” of the defendants’ personal information, and that they therefore had no reasonable expectation of privacy in them. Further, he ruled that given the constellation of applicable laws and regulations, as well as Horizon’s terms of service, it was reasonable for Horizon to share the data with the police. The Court of Appeal disagreed, finding that the appellants’ Charter rights had been infringed. The decision is interesting because of its careful reading of the rather problematic decision of the Supreme Court of Canada in Gomboc. Nevertheless, although the decision creates important space for privacy rights in the face of ubiquitous data collection and close collaboration between utility companies and the police, the Court of Appeal’s approach is highly contextual and fact-dependent.

A crucial fact in this case is that the police and Horizon had an ongoing relationship when it came to the sharing of customer data. Horizon regularly provided data to the police, sometimes on its own initiative and sometimes at the request of the police. It provided data about suspect residences as well as data about other customers for comparison purposes. Writing for the unanimous court, Justice Doherty noted that until the proceedings in this case commenced, Horizon had never refused a request from the police for information. He found that this established that the police and Horizon were working in tandem; this was important, since it distinguished the situation from one where a company or whistleblower took specific data to the police with concerns that it revealed a crime had been committed.

The Court began its Charter analysis by considering whether the appellants had a reasonable expectation of privacy in the energy consumption data. The earlier Supreme Court of Canada decisions in Plant and Gomboc both dealt with data obtained by police from utility companies without a warrant. In Plant, the Court had found that the data revealed almost nothing about the lifestyle or activities of the accused, leading to the conclusion that there was no reasonable expectation of privacy. In Gomboc, the Court was divided and issued three separate opinions. This led to some dispute as to whether there was a reasonable expectation of privacy in the data. In Orlandis-Habsburgo, the Crown argued that seven out of nine judges in Gomboc had concluded that there was no reasonable expectation of privacy in electricity consumption data. By contrast, the appellants argued that five of the nine judges in Gomboc had found that there was a reasonable expectation of privacy in such data. The trial judge had sided with the Crown, but the Court of Appeal found otherwise. Justice Doherty noted that all of the judges in Gomboc considered the same factors in assessing the reasonable expectation of privacy: “the nature of the information obtained by the police, the place from which the information was obtained, and the relationship between the customer/accused and the service provider.” (at para 58) He found that seven of the judges in Gomboc had decided the reasonable expectation of privacy issue on the basis of the relationship between the accused and the utility company. At the same time, five of the justices had found that the data was of a kind that had the potential to reveal personal activities taking place in the home. 
He noted that: “In coming to that conclusion, the five judges looked beyond the data itself to the reasonable inferences available from the data and what those inferences could say about activities within the home.” (at para 66) He noted that this was the approach taken by the unanimous Supreme Court in R. v. Spencer, a decision handed down after the trial judge had reached his decision in Orlandis-Habsburgo. He also observed that the relationship between the customer and the service provider in Orlandis-Habsburgo was different in significant respects from that in Gomboc, allowing the two cases to be distinguished. In Gomboc, a provincial regulation provided that information from utility companies could be shared with the police unless customers explicitly requested to opt-out of such information sharing. No such regulation existed in this case.

Justice Doherty adopted the four criteria set out in Spencer for assessing the reasonable expectation of privacy. These are: “(1) the subject matter of the alleged search; (2) the claimant's interest in the subject matter; (3) the claimant's subjective expectation of privacy in the subject matter; and (4) whether this subjective expectation of privacy was objectively reasonable, having regard to the totality of the circumstances.” (Spencer, at para 18) On the issue of the subject matter of the search, the Court found that the energy consumption data included “both the raw data and the inferences that can be drawn from that data about the activity in the residence.” (at para 75) Because the data and inferences were about a person’s home, the Court found that this factor favoured a finding of a reasonable expectation of privacy. With respect to the interest of the appellants in the data, the Court found that they had no exclusive rights to these data – the energy company had a right to use the data for a variety of internal purposes. The Court described these data as being “subject to a complicated and interlocking myriad of contractual, legislative and regulatory provisions” (at para 80), which had the effect of significantly qualifying (but not negating) any expectation of privacy. Justice Doherty found that the appellants had a subjective expectation of privacy with respect to any activities carried out in their home, and he also found that this expectation of privacy was objectively reasonable.
In this respect, he noted that although there were different documents in place that related to the extent to which Horizon could share data with the police, “one must bear in mind that none are the product of a negotiated bargain between Horizon and its customers.” (at para 84) The field of energy provision is highly regulated, and the court noted that “[t]he provisions in the documents to which the customers are a party, permitting Horizon to disclose data to the police, cannot be viewed as a ‘consent’ by the customer, amounting to a waiver of any s. 8 claim the customer might have in the information.” (at para 84) That being said, the Court also cautioned against taking any of the terms of the documents to mean that there was a reasonable expectation of privacy. Justice Doherty noted that “The ultimate question is not the scope of disclosure of personal information contemplated by the terms of the documents, but rather what the community should legitimately expect in terms of personal privacy in the circumstances.” (at para 85) He therefore described the terms of these documents as relevant, but not determinative.

The documents at issue included terms imposed on the utility by the Ontario Energy Board. Under these terms, Horizon is barred from using customer information, without the customer’s consent, for purposes other than those for which it was obtained. While there is an exception to the consent requirement where the information is “required to be disclosed. . . for law enforcement purposes”, Justice Doherty noted that in this case the police had, at most, requested disclosure – at no point was the information required to be disclosed. He found that the terms of the licence distinguished this case from Gomboc and supported a finding of a reasonable expectation of privacy in the data.

The Court also looked at the Distribution System Code (DSC), which permits disclosure to police of “possible unauthorized energy use”. However, Justice Doherty noted that this term was not defined, and no information was provided in the document as to when it was appropriate to contact police. He found this provision unhelpful in assessing the reasonable expectation of privacy. The Court found the Conditions of Service to be similarly unhelpful. By contrast, the privacy policy provided that the company would protect its customers’ personal information, and explicitly set out the circumstances in which it might disclose information to third parties. One of these was a provision for disclosure “to persons as permitted or required by Applicable Law”. Those applicable laws included the provincial Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) and the federal Personal Information Protection and Electronic Documents Act (PIPEDA). Justice Doherty looked to the Supreme Court of Canada’s interpretation of PIPEDA in Spencer. He found that disclosure to law enforcement under the PIPEDA exception could only occur with “lawful authority”, and that “[t]he informal information-sharing arrangement between Horizon and the police described in the evidence is inconsistent with both the terms of Horizon’s licence and the disclosure provisions in PIPEDA.” (at para 104) He also found that it did not amount to “lawful authority” for a request for information.

The respondents argued that s. 32(g) of MFIPPA provided a basis for disclosure. This provision permits disclosures to law enforcement agencies without referencing any need for “lawful authority”. However, Justice Doherty noted that, like PIPEDA, MFIPPA has as its primary goal the protection of personal information. He stated: “That purpose cannot be entirely negated by an overly broad and literal reading of the provisions that create exceptions to the confidentiality requirement.” (at para 106) He noted that while s. 32(g) provides an entity with discretion to release information in appropriate circumstances, the exercise of this discretion requires “an independent and informed judgment” (at para 107) in relation to a specific request for information. The provision could not support the kind of informal, ongoing data-sharing relationship that existed between Horizon and the police. Similarly, the court found that the disclosure could not be justified under the exception in s. 7(3)(d)(i) of PIPEDA that allowed a company to disclose information where it had “reasonable grounds to believe that the information relates to . . . a contravention of the laws of Canada”. While Justice Doherty conceded that such disclosures might be possible, in the circumstances, Horizon “did not make any independent decision to disclose information based on its conclusion that reasonable grounds existed to believe that the appellants were engaged in criminal activity.” (at para 110) It simply passed along data that it thought might be of interest to the police.

Although the Court of Appeal concluded that there was a reasonable expectation of privacy in the energy consumption data, and that the search was unreasonable, it ultimately found that the admission of the evidence would not bring the administration of justice into disrepute. As a result, the convictions were upheld. In support of this conclusion, the court noted that the trial judge had reached his decision prior to the Supreme Court of Canada’s decision in Spencer, and that the error in the judge’s approach was only evident after reading Spencer.

