Tuesday, 16 April 2019 06:44
Ontario Budget Bill Will Amend Public Sector Privacy Laws
Schedule 31 and Schedule 41 of Ontario’s new omnibus Budget Bill amend the Freedom of Information and Protection of Privacy Act (FIPPA) and the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) respectively. One change to both statutes will expand the ability of public sector bodies to share personal information with law enforcement without consent. A more extensive set of amendments to FIPPA constitutes another piece of the government’s digital and data strategy, which is further developed in the Simpler, Faster, Better Services Act, another piece of the budget bill discussed in my post here.

FIPPA and MFIPPA set the rules for the collection, use and disclosure of personal information by the public sector. MFIPPA applies specifically to municipalities, and FIPPA to the broader public sector. Both statutes prohibit the disclosure of personal information under the custody or control of a public body unless the disclosure falls under an exception. Currently, both statutes have an exception related to investigations which reads:

(g) if disclosure is to an institution or a law enforcement agency in Canada to aid an investigation undertaken with a view to a law enforcement proceeding or from which a law enforcement proceeding is likely to result;

The Budget Bill will amend this exception by replacing it with:

(g) to an institution or a law enforcement agency in Canada if,

(i) the disclosure is to aid in an investigation undertaken by the institution or the agency with a view to a law enforcement proceeding, or

(ii) there is a reasonable basis to believe that an offence may have been committed and the disclosure is to enable the institution or the agency to determine whether to conduct such an investigation;

Paragraph (g)(i) is essentially the same as the original provision. What is new is paragraph (g)(ii). It broadens the circumstances in which personal information can be shared with law enforcement.
Not only that, it does so in the squishiest of terms. There must be a reasonable basis to believe that an offence may have been committed. This is different from a reasonable basis to believe that an offence has been committed. Not only does it lower the threshold in the case of individuals, it may also open the door to the sharing of personal information for law enforcement fishing expeditions. After all, if enough people file for certain benefits, it might be reasonable to believe that an offence may have been committed (there’s always someone who tries to cheat the system, right?). The exception could enable the sharing of a quantity of personal information to permit the use of analytics to look for anomalies that might suggest the commission of an offence. The presence of this amendment in an omnibus budget bill that will receive very little scrutiny or debate contradicts the government’s own statement, in its announcement of its data strategy consultation, that “Data privacy and protection is paramount.” This is not a privacy-friendly amendment.

The other set of amendments to FIPPA contained in the budget bill is aimed at something labelled “data integration”. This is a process meant to allow government to derive greater value from its stores of data by generating useful data, including statistical data, for government and its departments and agencies. It allows for the intra-governmental sharing of data for preparing statistics for the purposes of resource management or allocation, as well as the planning and evaluation of the delivery of government-funded programs and services, whether they are funded “in whole or in part, directly or indirectly” (s. 49.2(b)).

Because these amendments contemplate the use of personal information, there are measures specifically designed to protect privacy. For example, under s. 49.3, personal information is not to be used for data integration unless other data will not serve the purpose, and no more personal information shall be used than is reasonably necessary to meet the purpose. Public notice of the indirect (i.e. not directly from the individual) collection of personal information must be provided under s. 49.4. Any collection of personal information can only take place after data standards provided for in s. 49.14 have been approved by the Privacy Commissioner (s. 49.5). Once collected, steps must be taken to deidentify the personal information. The amendments include a definition of deidentification, which involves the removal of direct identifiers as well as any information “that could be used, either alone or with other information, to identify an individual based on what is reasonably foreseeable in the circumstances” (s. 49.1). Section 49.8 specifically prohibits anyone from using or attempting to use “information that has been deidentified under this Part, either alone or with other information, to identify an individual”. Provision is made for the disclosure of personal information collected through the data integration scheme in limited circumstances – this includes the unfortunately worded exception discussed above where “there is a reasonable basis to believe that an offence may have been committed” (s. 49.9(c)(ii)).

In terms of transparency, a new s. 49.10 provides for notice to be published on a website setting out information about any collection of personal information by a ministry engaged in data integration. The information provided must include the legal authority for the collection; the type of personal information that may be collected; the information sources; the purpose of any collection, use or disclosure; and the nature of any linkages that will be made.
Contact information must also be provided for someone who can answer any questions about the collection, use or disclosure of the personal information, as well as for the Privacy Commissioner. Data standards developed in relation to data integration must also be published (s. 49.14(2)), and any data integration unit that collects personal information must publish an annual report setting out prescribed information (s. 49.13). Section 49.11 mandates the safe storage and disposal of any personal information, and sets retention limits. It also provides for data breach notification to be made to affected individuals as well as to the Commissioner. The Commissioner has the power, under s. 49.12, to review the practices and procedures of any data integration unit if the Commissioner “has reason to believe that the requirements of this Part are not being complied with”. The Commissioner has the power to make orders regarding the discontinuance or modification of practices or procedures, and can also order the destruction of personal information or require the adoption of a new practice or procedure.

The amendments regarding data integration are clearly designed to facilitate better use of government data for the development and delivery of programs and services and for their evaluation. These are important measures and seem to have received some careful attention in the amendments. Once again, however, these seem to be important pieces of the data strategy for which the government has recently launched a consultation process that seems to be becoming more irrelevant by the day. Further, as part of an omnibus budget bill, these measures will not receive much in the way of discussion or debate. This is particularly unfortunate for two reasons. First, as the furore over Statistics Canada’s foray into using personal information to generate statistical data shows, transparency, public input and good process are important.
Second, the expansion of bases on which personal information shared with government can be passed along to law enforcement merits public scrutiny, debate and discussion. Encroachments on privacy slipped by on the sly should be particularly suspect.
Published in
Privacy
Thursday, 04 April 2019 12:54
Open Banking & Data Ownership
On April 4, 2019 I appeared before the Senate Standing Committee on Banking, Trade and Commerce (BANC), which has been holding hearings on Open Banking following the launch of a public consultation on Open Banking by the federal government. Open banking is an interesting digital innovation initiative with both potential and risks. I wrote earlier about open banking and some of the privacy issues it raises here. I was invited by the BANC Committee to discuss ‘data ownership’ in relation to open banking. The text of my opening remarks to the committee is below. My longer paper on Data Ownership is here.

_______________

Thank you for this invitation and opportunity to meet with you on the very interesting subject of Open Banking, and in particular on data ownership questions in relation to open banking. I think it is important to think about open banking as the tip of a data iceberg. In other words, if Canada moves forward with open banking, this will become a test case for rendering standardized data portable in the hands of consumers, with the goal of providing them with more opportunities and choices while at the same time stimulating innovation.

The question of data ownership is an interesting one, and it has become of growing importance in an economy that is increasingly dependent upon vast quantities of data. However, the legal concept of ‘ownership’ is not a good fit with data. There is no data ownership right per se in Canadian law (or in comparable jurisdictions elsewhere, although in the EU the idea has recently been mooted). Instead, we have a patchwork of laws that protect certain interests in data. I will give you a very brief overview before circling back to data portability and open banking.

The law of confidential information exists to protect interests in information/data that is kept confidential. Individuals or corporations are often said to ‘own’ confidential information.
But the value of this information lies in its confidentiality, and this is what the law protects. Once confidentiality is lost, so is exclusivity – the information is in the public domain. The Supreme Court of Canada also weighed in on the issue of data ownership in 1988 – albeit in the criminal law context. It ruled in R. v. Stewart that information could not be stolen for the purposes of the crime of theft, largely because of its intangible nature. Someone could memorize a confidential list of names without removing the list from the possession of its ‘owner’. The owner would be deprived of nothing but the confidentiality of and control over the information.

It is a basic principle of copyright law that facts are in the public domain. There is good reason for this. Facts are seen as the building blocks of expression, and no one should have a monopoly over them. Copyright protects only the original expression of facts. Under copyright law, it is possible to have protection for a compilation of facts – the original expression will lie in the way in which the facts are selected or arranged. It is only that selection or arrangement that is protected – not the underlying facts. This means that those who create compilations of fact may face some uncertainty as to the existence and scope of any copyright. The Federal Court of Appeal, for example, recently ruled that there was no copyright in the Ontario Real Estate Board’s real estate listing data.

Of course, the growing value of data is driving some interesting arguments – and decisions – in copyright law. A recent Canadian case raises the possibility that facts are not the same as data under copyright law. This issue has also arisen in the US. Some data are arguably ‘authored’, in the sense that they would not exist without efforts to create them. Predictive data generated by algorithms are an example, as are data that require skill, judgment and interpretation to generate.
Not that many years ago, Canada Post advanced the argument that they had copyright in a postal code. In the US, a handful of cases have recognized certain data as being ‘authored’, but even in those cases, copyright protection has been denied on other grounds. According ownership rights over data – and copyright law provides a very extended period of protection – would create significant issues for expression, creation and innovation. The other context in which the concept of data ownership arises is in relation to personal information. Increasingly we hear broad statements about how individuals ‘own’ their personal information. These are not statements grounded in law. There is no legal basis for individuals to be owners of their personal information. Individuals do have interests in their personal information. These interests are defined and protected by privacy and data protection laws (as well as by other laws relating to confidentiality, fiduciary duties, and so on). The GDPR in Europe was a significant expansion/enhancement of these interests, and reform of PIPEDA in Canada – if it ever happens – could similarly enhance the interests that individuals have in their personal data. Before I speak more directly of these interests – and in particular of data portability – I want to just mention why it is that it is difficult to conceive of interests in personal data in terms of ownership. What personal data could you be said to own, and what would it mean? Some personal data is observable in public contexts. Do you own your name and address? Can you prevent someone from observing you at work every day and deciding you are regularly late and have no dress sense? Is that conclusion your personal information or their opinion? Or both? If your parents’ DNA might reveal your own susceptibility to particular diseases, is their DNA your personal information? 
If an online bookstore profiles you as someone who likes to read Young Adult Literature – particularly vampire-themed – is that your personal information or is it the bookstore’s? Or is it both? Data is complex, and there may be multiple interests implicated in the creation, retention and use of various types of data – whether personal or otherwise. Ownership – a right to exclusive possession – is a poor fit in this context. And the determination of ownership on the basis of the ‘personal’ nature of the data would overlook the fact that there may be multiple interests entangled in any single datum.

What data protection laws do is define the nature and scope of a person’s interest in their personal information in particular contexts. In Canada, we have data protection laws that apply with respect to the public sector, the private sector, and the health sector. In all cases, individuals have an interest in their personal information which is accompanied by a number of rights. One of these is consent – individuals generally have a right to consent to the collection, use or disclosure of their personal information. But consent for collection is not required in the public sector context. And PIPEDA has an ever-growing list of exceptions to the requirements for consent to collection, use or disclosure. This shows how the interest is a qualified one. Fair information principles reflected in our data protection laws place a limit on the retention of personal information – when personal information collected by an organization is no longer required for the purpose for which it was collected, the organization’s obligation is to dispose of it securely, not to return it to the individual. The individual has an interest in their personal information, but they do not own it.
And, as data protection laws make clear, the organizations that collect, use and disclose personal information also have an interest in it – and they may also assert some form of ownership rights over their stores of personal information. As I mentioned earlier, the GDPR has raised the bar for data protection world-wide. One of the features of the GDPR is that it greatly enhances the nature and quality of the data subject’s interest in their personal information. The right to erasure, for example, limited though it might be, gives individuals control over personal information that they may have, at one time, shared publicly. The right of data portability – a right that is reflected to some degree in the concept of open banking – is another enhancement of the control exercised by individuals over their personal information.

What portability means in the open banking context is that individuals will have the right to provide access to their personal financial data to a third party of their choice (presumably from an approved list). While technically they can do that now, it is complicated and not without risk. In open banking, standard data formats will make portability simple, and will enhance the ability to bring the data together for analysis and to provide new tools and services. Although individuals will still not own their data, they will have a further degree of control over it. Thus, open banking will enhance the interest that individuals have in their personal financial information. This is not to say that it is without risks or challenges.
Published in
Privacy
Thursday, 07 February 2019 08:09
Ontario Launches Data Strategy Consultation
On February 5, 2019 the Ontario Government launched a Data Strategy Consultation. This comes after a year of public debate and discussion about data governance issues raised by the proposed Quayside smart cities development in Toronto. It also comes at a time when the data-thirsty artificial intelligence industry in Canada is booming – and hoping very much to be able to continue to compete at the international level. Add to the mix the view that greater data sharing between government departments and agencies could make government ‘smarter’, more efficient, and more user-friendly. The context might be summed up in these terms: the public is increasingly concerned about the massive and widespread collection of data by governments and the private sector; at the same time, both governments and the private sector want easier access to more and better data.

Consultation is a good thing – particularly with as much at stake as there is here. This consultation began with a press release that links to a short text about the data strategy, and then a link to a survey which allows the public to provide feedback in the form of answers to specific questions. The survey is open until March 7, 2019. It seems that the government will then create a “Minister’s Task Force on Data” and that this body will be charged with developing a draft data strategy that will be opened for further consultation. The overall timeline seems remarkably short, with the process targeted to wrap up by Fall 2019.

The press release telegraphs the government’s views on what the outcome of this process must address. It notes that 55% of Canada’s big data vendors are located in Ontario, and that the government plans “to make life easier for Ontarians by delivering simpler, faster and better digital services.” The goal is clearly to develop a data strategy that harnesses the power of data for use in both the private and public sectors.
If the Quayside project has taught anyone anything, it is that people do care about their data in the hands of both public and private sector actors. The press release acknowledges this by referencing the need for “ensuring that data privacy and protection is paramount, and that data will be kept safe and secure.” Yet perhaps the Ontario government has not been listening to all of the discussions around Quayside. While the press release and the introduction to the survey talk about privacy and security, neither document addresses the broader concerns that have been raised in the context of Quayside, nor those that are raised in relation to artificial intelligence more generally. There are concerns about bias and discrimination, transparency in algorithmic decision-making, profiling, targeting, and behavioural modification. Seamless sharing of data within government also raises concerns about mass surveillance. There is also a need to consider innovative solutions to data governance and the role the government might play in fostering or supporting these. There is no doubt that the issues underlying this consultation are important ones. It is clear that the government intends to take steps to facilitate intra-governmental sharing of data as well as greater sharing of data between government and the private sector. It is also clear that much of that data will ultimately be about Ontarians. How this will happen, and what rights and values must be protected, are fundamental questions. As is the case at the provincial and federal level across the country, the laws which govern data in Ontario were written for a different era. Not only are access to information and protection of privacy laws out of date, data-driven practices increasingly impact areas such as consumer protection, competition, credit reporting, and human rights. An effective data strategy might need to reach out across these different areas of law and policy. 
Privacy and security – the issues singled out in the government’s documents – are important, but privacy must mean more than the narrow view of protecting identifiable individuals from identity theft. We need robust safeguards against undue surveillance, assurances that our data will not be used to profile or target us or our communities in ways that create or reinforce exclusion or disadvantage; we need to know how privacy and autonomy will be weighed in the balance against the stimulation of the economy and the encouragement of innovation. We also need to consider whether there are uses to which our data should simply not be put. Should some data be required to be stored in Canada, and if so in what circumstances? These and a host of other questions need to be part of the data strategy consultation. Perhaps a broader question might be why we are talking only about a data strategy and not a digital strategy. The approach of the government seems to focus on the narrow question of data as both an input and output – but not on the host of other questions around the digital technologies fueled by data. Such questions might include how governments should go about procuring digital technologies, the place of open source in government, the role and implication of technology standards – to name just a few. With all of these important issues at stake, it is hard not to be disappointed by the form and substance of at least this initial phase of the government's consultation. It is difficult to say what value will be derived from the survey which is the vehicle for initial input. Some of the questions are frankly vapid. Consider question 2:
2. I’m interested in exploring the role of data in:
- creating economic benefits
- increasing public trust and confidence
- better, smarter government
- other
There is no box in which to write in what the “other” might be. And questions 9 to 11 provide sterling examples of leading questions:
9. Currently, the provincial government is unable to share information among ministries, requiring individuals and businesses to submit the same information each time they interact with different parts of government. Do you agree that the government should be able to securely share data among ministries?
- Yes
- No
- I’m not sure
10. Do you believe that allowing government to securely share data among ministries will streamline and improve interactions between citizens and government?
- Yes
- No
- I’m not sure
11. If government made more of its own data available to businesses, this data could help those firms launch new services, products, and jobs for the people of Ontario. For example, government transport data could be used by startups and larger companies to help people find quicker routes home from work. Would you be in favour of the government responsibly sharing more of its own data with businesses, to help them create new jobs, products and services for Ontarians?
- Yes
- No
- I’m not sure
In fairness, there are a few places in the survey where respondents can enter their own answers, including questions about what issues should be put to the task force and what skills and experience members should have. Those interested in data strategy should be sure to provide their input – both now and in the later phases to come.
Published in
Privacy
Tuesday, 22 January 2019 16:56
Canada's Shifting Privacy Landscape

Note: This article was originally published by The Lawyer’s Daily (www.thelawyersdaily.ca), part of LexisNexis Canada Inc.

In early January 2019, Bell Canada caught the media spotlight over its “tailored marketing program”. The program will collect massive amounts of personal information, including “Internet browsing, streaming, TV viewing, location information, wireless and household calling patterns, app usage and the account information”. Bell’s background materials explain that “advertising is a reality” and that customers who opt into the program will see ads that are more relevant to their needs or interests. Bell promises that the information will not be shared with third party advertisers; instead it will enable Bell to offer those advertisers the ability to target ads to finely tuned categories of consumers. Once consumers opt in, their consent is presumed for any new services that they add to their account.

This is not the first time Bell has sought to collect vast amounts of data for targeted advertising purposes. In 2015, it terminated its short-lived and controversial “Relevant Ads” program after an investigation initiated by the Privacy Commissioner of Canada found that the “opt out” consent model chosen by Bell was inappropriate given the nature, volume and sensitivity of the information collected. Nevertheless, the Commissioner’s findings acknowledged that “Bell’s objective of maximizing advertising revenue while improving the online experience of customers was a legitimate business objective.” Bell’s new tailored marketing program is based on “opt in” consent, meaning that consumers must choose to participate and are not automatically enrolled. This change and the OPC’s apparent acceptance of the legitimacy of targeted advertising programs in 2015 suggest that Bell may have brought its scheme within the parameters of PIPEDA.
Yet media coverage of the new tailored ads program generated public pushback, suggesting that the privacy ground has shifted since 2015. The rise of big data analytics and the stunning recent growth of artificial intelligence have sharply changed the commercial value of data, its potential uses, and the risks it may pose to individuals and communities. After the Cambridge Analytica scandal, there is also much greater awareness of the harms that can flow from consumer profiling and targeting. While conventional privacy risks of massive personal data collection remain (including the risk of data breaches, and enhanced surveillance), there are new risks that impact not just privacy but consumer choice, autonomy, and equality. Data misuse may also have broader impacts than just on individuals; such impacts may include group-based discrimination, and the kind of societal manipulation and disruption evidenced by the Cambridge Analytica scandal. It is not surprising, then, that both the goals and potential harms of targeted advertising may need rethinking, along with the nature and scope of data on which they rely.

The growth of digital and online services has also led to individuals effectively losing control over their personal information. There are too many privacy policies, they are too long and often obscure, products and services are needed on the fly and with little time to reflect, and most policies are “take-it-or-leave-it”. A growing number of voices are suggesting that consumers should have more control over their personal information, including the ability to benefit from its growing commercial value. They argue that companies that offer paid services (such as Bell) should offer rebates in exchange for the collection or use of personal data that goes beyond what is needed for basic service provision.
No doubt, such advocates would be dismayed by Bell’s quid pro quo for its collection of massive amounts of detailed and often sensitive personal information: “more relevant ads”. Yet money-for-data schemes raise troubling issues, including the possibility that they could make privacy something that only the well-heeled can afford.

Another approach has been to call for reform of the sadly outdated Personal Information Protection and Electronic Documents Act. Proposals include giving the Privacy Commissioner enhanced enforcement powers, and creating ‘no go zones’ for certain types of information collection or uses. There is also interest in creating new rights such as the right to erasure, data portability, and rights to explanations of automated processing. PIPEDA reform, however, remains a mirage shimmering on the legislative horizon.

Meanwhile, the Privacy Commissioner has been working hard to squeeze the most out of PIPEDA. Among other measures, he has released new Guidelines for Obtaining Meaningful Consent, which took effect on January 1, 2019. These guidelines include a list of “must dos” and “should dos” to guide companies in obtaining adequate consent. While Bell checks off many of the ‘must do’ boxes with its new program, the Guidelines indicate that “risks of harm and other consequences” of data collection must be made clear to consumers. These risks – which are not detailed in the FAQs related to the program – obviously include the risk of data breach. The collected data may also be of interest to law enforcement, and presumably it would be handed over to police with a warrant. A more complex risk relates to the fact that internet, phone and viewing services are often shared within a household (families or roommates), and targeted ads based on viewing/surfing/location could result in the disclosure of sensitive personal information to other members of the household.
Massive data collection, profiling and targeting clearly raise issues that go well beyond simple debates over opt-in or opt-out consent. The privacy landscape is changing – both in terms of risks and responses. Those engaged in data collection would be well advised to be attentive to these changes.
Published in
Privacy
Friday, 04 January 2019 10:46
Court Decision Touches on the Uncertain Fate of Personal Information in Bankruptcy Proceedings
In Netlink Computer Inc. (Re), the British Columbia Supreme Court dismissed an application for leave to sue a trustee in bankruptcy for an alleged improper disposal of assets of a bankrupt company that contained the personal information of the company’s customers. The issues at the heart of the application first reached public attention in September 2018, when a security expert described in a blog post how he noticed that servers from the defunct company were listed for sale on Craigslist. Posing as an interested buyer, he examined the computers and found that their unwiped hard drives contained what he reported as significant amounts of sensitive customer data, including credit card information and photographs of customer identification documents. Following the blog post, the RCMP and the BC Privacy Commissioner both launched investigations.

Kipling Warner, who had been a customer of the defunct company Netlink, filed lawsuits against Netlink, the trustee in bankruptcy which had disposed of Netlink’s assets, the auction company Able Solutions, which had sold the assets, and Netlink’s landlord. All of the lawsuits include claims of breach of statutory obligations under the Personal Information Protection and Electronic Documents Act, breach of B.C.’s Privacy Act, and breach of B.C.’s Personal Information Protection Act. The plan was to have the lawsuits certified as class action proceedings. The action against Netlink was stayed due to the bankruptcy. The B.C. Supreme Court decision deals only with the action against the trustee, as leave of the court must be obtained in order to sue a trustee in bankruptcy.

As Master Harper explained in his reasons for decision, the threshold for granting leave to sue a trustee in bankruptcy is not high. The evidence presented in the claim must advance a prima facie case.
Leave to proceed will be denied if the proposed action is considered frivolous or vexatious, since such a lawsuit would “interfere with the due administration of the bankrupt’s estate by the trustee” (at para 9). Essentially, the court must balance the competing interests of the party suing the trustee and the interest in the efficient and timely wrapping up of the bankrupt’s estate. The decision to dismiss the application in this case was based on a number of factors. Master Harper was not impressed by the fact that the multiple lawsuits brought against different actors all alleged the same grounds. He described this as a “scattergun approach” that suggested a weak evidentiary foundation. The application was supported by two affidavits: one from Mr. Warner, which he described as being based on inadmissible ‘double hearsay’, and one from the blogger, Mr. Doering. While Master Harper found that the Doering affidavit contained first-hand evidence from Doering’s investigation into the servers sold on Craigslist, he noted that Doering himself had not been convinced by the seller’s statements about how he came to be in possession of the servers. The Master noted that this did not provide a basis for finding that it was the trustee in bankruptcy who was responsible. The Master also noted that although an RCMP investigation had been launched at the time of the blog post, it had since concluded with no charges being laid. The Master’s conclusion was that there was no evidence to support a finding that any possible privacy breach “took place under the Trustee’s ‘supervision and control’.” (at para 58) Although the application was dismissed, the case does highlight some important concerns about the handling of personal information in bankruptcy proceedings. 
Not only can customer databases be sold as assets in bankruptcy proceedings; Mr. Doering’s blog post also raised the spectre of computer servers and computer hard drives being disposed of without properly being wiped of the personal data that they contain. Although he dismissed the application to file suit against the Trustee, Master Harper did express some concern about the Trustee’s lack of engagement with some of the issues raised by Mr. Warner. He noted that no evidence was provided by the Trustee “as to how, or if, the Trustee seeks to protect the privacy of customers when a bankrupt’s assets (including customer information) are sold in the bankruptcy process.” (at para 44) This is an important issue, but it is one on which there is relatively little information or discussion. A 2009 blog post from Quebec flags some of the concerns raised about privacy in bankruptcy proceedings; a more recent post suggests that while larger firms are more sophisticated in how they deal with personal information assets, the data in the hands of small and medium-sized firms that experience bankruptcy may be more vulnerable.
Published in
Privacy
Monday, 17 December 2018 06:43
Whose Data Is It? A Key Question for the Quayside Development
Digital and data governance is challenging at the best of times. It has been particularly challenging in the context of Sidewalk Labs’ proposed Quayside development for a number of reasons. One of these is (at least from my point of view) an ongoing lack of clarity about who will ‘own’ or have custody or control over all of the data collected in the so-called smart city. The answer to this question is a fundamentally important piece of the data governance puzzle. In Canada, personal data protection is a bit of a legislative patchwork. In Ontario, the collection, use or disclosure of personal information by the private sector in the course of commercial activity is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA). However, the collection, use and disclosure of personal data by municipalities and their agencies is governed by the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA), while the collection, use and disclosure of personal data by the province is subject to the Freedom of Information and Protection of Privacy Act (FIPPA). The latter two statutes – MFIPPA and FIPPA – contain other data governance requirements for public sector data. These relate to transparency, and include rules around access to information. The City of Toronto also has information management policies and protocols, including its Open Data Policy. The documentation prepared for the December 13, 2018 Digital Strategy Advisory Panel (DSAP) meeting includes a slide that sets out implementation requirements for the Quayside development plan in relation to data and digital governance. A key requirement is: “Compliance with or exceedance of all applicable laws, regulations, policy documents and contractual obligations” (page 95). This is fine in principle, but it is not enough on its own to say that the Quayside project must “comply with all applicable laws”. 
At some point, it is necessary to identify what those applicable laws are. This has yet to be done. And the answer to the question of which laws apply in the context of privacy, transparency and data governance depends upon who ultimately is considered to ‘own’ or have ‘custody or control’ of the data. So – whose data is it? It is troubling that this remains unclear even at this stage in the discussions. The fact that Sidewalk Labs has been asked to propose a data governance scheme suggests that Sidewalk and Waterfront may be operating under the assumption that the data collected in the smart city development will be private sector data. There are indications buried in presentations and documentation that also suggest that Sidewalk Labs considers that it will ‘own’ the data. There is a great deal of talk in meetings and in documents about PIPEDA, which also indicates an assumption shared by the parties that the data is private sector data. But what is the basis for this assumption? Governments can contract with a private sector company for data collection, data processing or data stewardship – but the private sector company can still be considered to act as an agent of the government, with the data being legally under the custody or control of the government and subject to public sector privacy and freedom of information laws. The presence of a private sector actor does not necessarily make the data private sector data. If the data is private sector data, then PIPEDA will apply, and there will be no applicable access to information regime. PIPEDA also has different rules regarding consent to collection than are found in MFIPPA. If the data is considered ultimately to be municipal data, then it will be subject to MFIPPA’s rules regarding access and privacy, and it will be governed by the City of Toronto’s information management policies. These are very different regimes, and so the question of which one applies is quite fundamental. 
It is time for there to be a clear and forthright answer to this question.
Published in
Privacy
Monday, 27 August 2018 06:54
Judge rebuffs tax authority's "fishing expedition" in utility company's databases
A recent Federal Court decision highlights the risks to privacy that could flow from unrestrained access by government to data in the hands of private sector companies. It also demonstrates the importance of judicial oversight in ensuring transparency and the protection of privacy. The Income Tax Act (ITA) gives the Minister of National Revenue (MNR) the power to seek information held by third parties where it is relevant to the administration of the income tax regime. However, where the information sought is about unnamed persons, the law requires judicial oversight. A judge of the Federal Court must review and approve the information “requirement”. Just such a matter arose in Canada (Minister of National Revenue) v. Hydro-Québec. The MNR sought information from Hydro-Québec, the province’s electrical utility, about a large number of its business customers. Only a few classes of customers, such as heavy industries that consumed very large amounts of electricity, were excluded. Hydro itself did not object to the request and was prepared to fulfil it if ordered to do so by the Federal Court. The request was considered by Justice Roy, who noted that because the information was about unnamed and therefore unrepresented persons, it was “up to the Court to consider their interests.” (at para 5) Under s. 231.2(3) of the ITA, before ordering the disclosure of information about unnamed persons, a judge must be satisfied that: (a) the person or group is ascertainable; and (b) the requirement is made to verify compliance by the person or persons with any duty or obligation under this Act. The information sought from Hydro in digital format included customer names, business numbers, full billing addresses, addresses of each place where electricity is consumed, telephone numbers associated with the account, billing start dates and, if applicable, end dates, and any late payment notices sent to the customer. 
Justice Roy noted that no information had been provided to the court to indicate whether the MNR had any suspicions about the tax compliance of business customers of Hydro-Québec. Nor was there much detail about what the MNR planned to do with the information. The documents provided by the MNR, as summarized by the Court, stated that the MNR was “looking to identify those who seem to be carrying on a business but failed to file all the required income tax returns.” (at para 14) However, Justice Roy noted that there were clearly also plans to share the information with other groups at the Canada Revenue Agency (CRA). These groups would use the information to determine “whether the individuals and companies complied with their obligations under the ITA and the ETA”. (at para 14) Justice Roy was sympathetic to the need of government to have powerful means of enforcing tax laws that depend upon self-reporting of income. However, he found that what the MNR was attempting to do under s. 231.2 went too far. He ruled that the words used in that provision had to be interpreted in light of “the right of everyone to be left alone by the state”. (at para 28) He observed that it is clear from the wording of the Act that “Parliament wanted to limit the scope of the Minister’s powers, extensive as they are.” (at para 68) Justice Roy carefully reviewed past jurisprudence interpreting s. 231.2(3). He noted that the section has always received a strict interpretation by judges. In past cases where orders had been issued, the groups of unnamed persons about whom information was sought were clearly ascertainable, and the information sought was “directly related to these taxpayers’ tax status because it is financial in nature.” (at para 63) In the present case, he found that the group was not ascertainable, and the information sought “has nothing to do with tax-status.” (at para 63) In his view, the aim of the request was to determine the identity of business customers of Hydro-Québec. 
The information was not sought in relation to a good faith audit with a proper factual basis. Because it was a fishing expedition meant to determine who might suitably be audited, the group of individuals identified by Hydro-Québec could not be considered “ascertainable”, as was required by the law. Justice Roy noted that no information was provided to demonstrate what “business customer” meant. He observed that “the Minister would render the concept of “ascertainable group” meaningless if, in the context of the ITA, she may claim that any group is an ascertainable group.” (at para 78) He opined that giving such broad meaning to “ascertainable” could be an abuse that would lead to violations of privacy by the state. Justice Roy also found that the second condition of s. 231.2(3) was not met. Section 231.2(3)(b) required that the information be sought in order “to verify compliance by the person or persons in the group with any duty or obligation under this Act.” He observed that the MNR was seeking an interpretation of this provision that would amount to: “Any information the Minister may consider directly or indirectly useful”. (at para 80) Justice Roy favoured a much more restrictive interpretation, limiting it to information that could “shed light on compliance with the Act.” (at para 80) He found that “the knowledge of who has a business account with Hydro-Québec does not meet the requirement of a more direct connection between the information and documents and compliance with the Act.” (at para 80) The MNR had argued that if the two conditions of s. 231.2(3) were met, then a judge was required to issue the authorization. Because Justice Roy found the two conditions were not met, the argument was moot. Nevertheless, he noted that even if he had found the conditions to be met, he would still have had the discretion to deny the authorization if to grant it would harm the public interest. 
In this case, there would be a considerable invasion of privacy “given the number of people indiscriminately included in the requirement for which authorization of the Court is being sought.” (at para 88) He also found that the fact that digital data was sought increased the general risk of harm. He observed that “the applicant chose not to restrict the use she could make of the large quantity of information she received” (at para 91) and that it was clearly planned that the information would be shared within the CRA. Justice Roy concluded that even if he erred in his interpretation of the criteria in s. 231.2(3), and these criteria had to be given a broad meaning, he would still not have granted the authorization on the basis that “judicial intervention is required to prevent such an invasion of the privacy of many people in Quebec.” (at para 96) Such intervention would particularly be required where “the fishing expedition is of unprecedented magnitude and the information being sought is far from serving to verify compliance with the Act.” (at para 96) This is a strong decision which clearly protects the public interest. It serves to highlight the privacy risks in an era where both private and public sectors amass vast quantities of personal information in digital form. Although the ITA provides a framework to ensure judicial oversight in order to limit potential abuses, there are still far too many other contexts where information flows freely and where there may be insufficient oversight, transparency or accountability.
Published in
Privacy
Wednesday, 25 July 2018 12:29
Social media profiles and PIPEDA's "Publicly Available Information" Exception to Consent
A recent Finding from the Office of the Privacy Commissioner of Canada contains a consideration of the meaning of “publicly available information”, particularly as it relates to social media profiles. This issue is particularly significant given a recent recommendation by the ETHI committee in its Report on PIPEDA reform. PIPEDA currently contains a very narrowly framed exception to the requirement of consent for “publicly available information”. ETHI had recommended amending the definition to make it “technologically neutral”. As I argued here, such a change would make it open season for the collection, use and disclosure of social media profiles of Canadians. The Finding, issued on June 12, 2018, came after multiple complaints were filed by Canadians about the practices of a New Zealand-based social media company, Profile Technology Ltd (PTL). The company had obtained Facebook user profile data from 2007 and 2008 under an agreement with Facebook. While the company’s plan may originally have been to create a powerful search engine for Facebook, in 2011 it launched its own social media platform. It used the Facebook data to populate the platform with profiles. Individuals whose profiles were created on the site had the option of ‘claiming’ them. PTL also provided two avenues for individuals who wished to delete the profiles. If an email address had been part of the original data obtained from Facebook and was associated with the PTL profile, a user could log in using that email address and delete the account. If no email address was associated with the profile, the company required individuals to set up a helpdesk ticket and to provide copies of official photo identification. A number of the complainants to the OPC indicated that they were unwilling to share their photo IDs with a company that had already collected, used and disclosed their personal information without their consent. 
The complainants’ concerns were not simply that their personal information had been taken and used to populate a new social media platform without their consent. They also felt harmed by the fact that the data used by PTL was from 2007-2008, and did not reflect any changes or choices they had since made. One complaint received by the OPC related to the fact that PTL had reproduced a group that had been created on Facebook, but that had since been deleted from Facebook. Within this group, allegations had been made about the complainant that he/she considered defamatory and bullying. The complainant objected to the fact that the group persisted on PTL. The PTL platform did not permit changes to public groups at the behest of single individuals on the basis that it treated the group description “as part of the profile of every person who has joined that group, therefore modifying the group would be like modifying all of those people’s profiles and we cannot modify their profiles without their consent.” (at para 55) It should be noted that although the data was initially obtained by PTL from Facebook under licence from Facebook, Facebook’s position was that PTL had used the data in violation of the licence terms. Facebook had commenced proceedings against PTL in 2013 which resulted in a settlement agreement. There was some back and forth over whether the terms of the agreement had been met, but no information was available regarding the ultimate resolution. The Finding addresses a number of interesting issues. These include the jurisdiction of the OPC to consider this complaint about a New Zealand based company, the sufficiency of consent, and data retention limits. This post focuses only on the issue of whether social media profiles are “publicly available information” within the meaning of PIPEDA. 
PTL argued that it was entitled to benefit from the “publicly available information” exception to the requirement for consent for collection and use of personal information because the Facebook profiles of the complainants were “publicly available information”. The OPC disagreed. It noted that the exception for “publicly available information”, found in ss. 7(1)(d) and 7(2)(c.1) of PIPEDA, is defined by regulation. The applicable provision is s. 1(e) of the Regulations Specifying Publicly Available Information, which requires that “the personal information must appear in a publication, the publication must be available to the public, and the personal information has to have been provided by the individual.”(at para 87) The OPC rejected PTL’s argument that “publication” included public Facebook profiles. In its view, the interpretation of “publicly available information” must be “in light of the scheme of the Act, its objects, and the intention of the legislature.” (at para 89) It opined that neither a Facebook profile nor a ‘group’ was a publication. It noted that the regulation makes it clear that “publicly available information” must receive a restrictive interpretation, and reflects “a recognition that information that may be in the public domain is still worthy of privacy protection.” (at para 90) The narrow interpretation of this exception to consent is consistent with the fact that PIPEDA has been found to be quasi-constitutional legislation. In finding that the Facebook profile information was not publicly available information, the OPC considered that the profiles at issue “were created at a time when Facebook was relatively new and its policies were in flux.” (at para 92) Thus it would be difficult to determine that the intention of the individuals who created profiles at that time was to share them broadly and publicly. Further, at the time the profiles were created, they were indexable by search engines by default. 
In an earlier Finding, the OPC had determined that this default setting “would not have been consistent with users’ reasonable expectations and was not fully explained to users” (at para 92). In addition, the OPC noted that Facebook profiles were dynamic, and that their ‘owners’ could update or change them at will. In such circumstances, “treating a Facebook profile as a publication would be counter to the intention of the Act, undermining the control users otherwise maintain over their information at the source.” (at para 93) This is an interesting point, as it suggests that the dynamic nature of a person’s online profile prevents it from being considered a publication – it is more like an extension of a user’s personality or self-expression. The OPC also noted that even though the profile information was public, to qualify for the exception it had to be contributed by the individual. This is not always the case with profile information – in some cases, for example, profiles will include photographs that contain the personal information of third parties. This Finding, which is not a decision, and not binding on anyone, shows how the OPC interprets the “publicly available information” exception in its home statute. A few things are interesting to note:
· The OPC finds that social media profiles (in this case from Facebook) are different from “publications” in the sense that they are dynamic and reflect an individual’s changing self-expression.
· Allowing the capture and re-use, without consent, of self-expression from a particular point in time robs the individual not only of control of their personal information but also of control over how they present themselves to the public. This too makes profile data different from other forms of “publicly available information” such as telephone or business directory information, or information published in newspapers or magazines.
· The OPC’s discussion of Facebook’s problematic privacy practices at the time the profiles were created muddies the discussion of “publicly available information”. A finding that Facebook had appropriate rules of consent should not change the fact that social media profiles should not be considered “publicly available information” for the purposes of the exception.
It is also worth noting that a complaint against PTL to the New Zealand Office of the Privacy Commissioner proceeded on the assumption that PTL did not require consent because the information was publicly available. In fact, the New Zealand Commissioner ruled that no breach had taken place. Given the ETHI Report’s recommendation, it is important to keep in mind that the definition of “publicly available information” could be modified (although the government’s response to the ETHI report indicates some reservations about the recommendation to change the definition of publicly available information). Because the definition is found in a regulation, a modification would not require legislative amendment. As is clear from the ETHI report, there are a number of industries and organizations that would love to be able to harvest and use social media platform personal information without the need to obtain consent. Vigilance is required to ensure that these regulations are not altered in a way that dramatically undermines privacy protection.
Published in
Privacy
Monday, 09 July 2018 06:59
PIPEDA reform should include a comprehensive rewrite
The pressure is on for Canada to amend its Personal Information Protection and Electronic Documents Act. The legislation, by any measure, is sorely out of date and not up to the task of protecting privacy in the big data era. We know this well enough – the House of Commons ETHI Committee recently issued a report calling for reform, and the government, in its response, has acknowledged the need for changes to the law. The current and past privacy Commissioners have also repeatedly called for reform, as have privacy experts. There are many deficiencies with the law – one very significant one is the lack of serious measures to enforce privacy obligations. In this regard, a recent private member’s bill proposes amendments that would give the Commissioner much more substantial powers of enforcement. Other deficiencies can be measured against the EU’s General Data Protection Regulation (GDPR). If Canada cannot meet the levels of protection offered by the GDPR, personal data flows from the EU to Canada could be substantially disrupted. Among other things, the GDPR addresses issues such as the right to be forgotten, the right to an explanation of how automated decisions are reached, data portability rights, and many other measures specifically designed to address the privacy challenges of the big data era. There is no doubt that these issues will be the subject of much discussion and may well feature in any proposals to reform PIPEDA that will be tabled in Parliament, perhaps as early as this autumn. The goal of this post is not to engage with these specific issues of reform, as important as they are; rather, it is to tackle another very basic problem with PIPEDA and to argue that it too should be addressed in any legislative reform. Simply put, PIPEDA is a dog’s-breakfast statute that is difficult to read and understand. It needs a top-to-bottom rewriting according to the best principles of plain-language drafting. 
PIPEDA’s drafting has been the subject of commentary by judges of the Federal Court who have the task of interpreting it. For example, in Miglialo v. Royal Bank of Canada, Justice Roy described PIPEDA as “a rather peculiar piece of legislation”, and “not an easily accessible statute”. The Federal Court of Appeal in Telus v. Englander observed that PIPEDA was a “compromise as to form” and that “The Court is sometimes left with little, if any guidance at all”. In Johnson v. Bell Canada, Justice Zinn observed: “While Part I of the Act is drafted in the usual manner of legislation, Schedule 1, which was borrowed from the CSA Standard, is notably not drafted following any legislative convention.” In Fahmy v. Royal Bank of Canada, Justice Roy noted that it was “hardly surprising” “[t]hat a party would misunderstand the scope of the Act.” To understand why PIPEDA is such a mess requires some history. PIPEDA was passed by Parliament in 2000. Its enactment followed closely on the heels of the EU’s Data Protection Directive, which, like the GDPR, threatened to disrupt data flows to countries that did not meet minimum standards of private sector data protection. Canada needed private sector data protection legislation and it needed it fast. It was not clear that the federal government really had jurisdiction over private sector data protection, but it was felt that the rapid action needed did not leave time to develop cooperative approaches with the provinces. The private sector did not want such legislation. As a compromise, the government decided to use the CSA Model Code – a voluntary privacy code developed with multi-stakeholder input – as the normative heart of the statute. There had been enough buy-in with the Model Code that the government felt that it would avoid excessive pushback from the private sector. The Code, therefore, originally drafted to provide voluntary guidance, was turned into law. The prime minister at the time, the Hon. 
Jean Chretien, did not want Parliament’s agenda overburdened with new bills, so the data protection bill was grafted onto another bill addressing the completely different issue of electronic documents (hence the long, unwieldy name that gives rise to the PIPEDA acronym). The result is a legislative Frankenstein. Keep in mind that this is a law aimed at protecting individual privacy. It is a kind of consumer-protection statute that should be user-friendly, but it is not. Most applicants to the Federal Court under PIPEDA are self-represented, and they clearly struggle with the legislation. The sad irony is that if a consumer wants to complain to the Privacy Commissioner about a company’s over-long, horribly convoluted, impossible to understand, non-transparent privacy policy, he or she will have to wade through a statute that is like a performance-art parody of that same privacy policy. Of course, the problem is not just one for ordinary consumers. Lawyers and even judges (as evidenced above) find PIPEDA to be impenetrable. By way of illustration, if you are concerned about your privacy rights and want to know what they are, you will not find them in the statute itself. Instead, the normative provisions are in the CSA Model Code, which is appended as Schedule I of the Act. Part I of the Act contains some definitions, a few general provisions, and a whole raft of exceptions to the principle of consent. Section 6.1 tells you what consent means “for the purposes of clause 4.3 of Schedule 1”, but you will have to wait until you get to the schedule to get more details on consent. On your way to the Schedule you might get tangled up in Part II of the Act which is about electronic documents, and thus thoroughly irrelevant. Because the Model Code was just that – a model code – it was drafted in a more conversational style, and includes notes that provide examples and illustrations. For the purposes of the statute, some of these notes were considered acceptable – others not. 
Hence, you will find the following statement in s. 2(2) of PIPEDA: “In this Part, a reference to clause 4.3 or 4.9 of Schedule 1 does not include a reference to the note that accompanies that clause.” So put a yellow sticky tab on clauses 4.3 and 4.9 to remind you not to consider those notes as part of the law (even though they are in the Schedule). Then there is this: s. 5(2) of PIPEDA tells us: “The word should, when used in Schedule 1, indicates a recommendation and does not impose an obligation.” So use those sticky notes again. Or cross out “should” each of the fourteen times you find it in Schedule 1, and replace it with “may”. PIPEDA also provides in ss. 7(4) and 7(5) that certain actions are permissible despite what is said in clause 4.5 of Schedule 1. Similar revisionism is found in s. 7.4. While clause 4.9 of Schedule 1 talks about requests for access to personal information made by individuals, section 8(1) in Part 1 of the Act tells us those requests have to be made in writing, and s. 8 goes on to provide further details on the right of access. Section 9 qualifies the right of access with “Despite clause 4.9 of Schedule 1….”. You can begin to see how PIPEDA may have contributed significantly to the sales of sticky notes. If an individual files a complaint and is not satisfied with the Commissioner’s report of findings, he or she has a right to take the matter to Federal Court if their issue fits within s. 14, which reads:
14 (1) A complainant may, after receiving the Commissioner’s report or being notified under subsection 12.2(3) that the investigation of the complaint has been discontinued, apply to the Court for a hearing in respect of any matter in respect of which the complaint was made, or that is referred to in the Commissioner’s report, and that is referred to in clause 4.1.3, 4.2, 4.3.3, 4.4, 4.6, 4.7 or 4.8 of Schedule 1, in clause 4.3, 4.5 or 4.9 of that Schedule as modified or clarified by Division 1 or 1.1, in subsection 5(3) or 8(6) or (7), in section 10 or in Division 1.1. [My emphasis]
Enough said. There are a number of very important substantive privacy issues brought about by the big data era. We are inevitably going to see PIPEDA reform in the relatively near future, as a means of not only addressing these issues but of keeping us on the right side of the GDPR. As we move towards major PIPEDA reform, however, the government should seriously consider a crisp rewrite of the legislation. The maturity of Canada’s data protection regime should be made manifest in a statute that no longer needs to lean on the crutch of a model code for its legitimacy. Quite apart from the substance of such a document, it should:
· Set out its basic data protection principles in the body of the statute, near the front of the statute, and in a manner that is clear, readable and accessible to a lay public.

· Be a free-standing statute that deals with data protection and does not deal with unrelated extraneous matters (such as electronic documents).
It is not a big ask. British Columbia and Alberta managed to do it when they created their own substantially similar data protection statutes. Canadians deserve good privacy legislation, and they deserve to have it drafted in a manner that is clear and accessible. Rewriting PIPEDA (and hence renaming it) should be part of the coming legislative reform.
Published in
Privacy
Monday, 11 June 2018 10:52
Update on privacy obligations and Canadian political parties
The issue of the application of privacy/data protection laws to political parties in Canada is not new – Colin Bennett and Robin Bayley wrote a report on this issue for the Office of the Privacy Commissioner of Canada in 2012. It gained new momentum in the wake of the Cambridge Analytica scandal, which brought home to the public in a fairly dramatic way the extent to which personal information might be used not just to profile and target individuals, but to sway their opinions in order to influence the outcome of elections. In the fallout from Cambridge Analytica there have been a couple of recent developments in Canada around the application of privacy laws to political parties.

First, the federal government included some remarkably tepid provisions in Bill C-76 on Elections Act reform. These provisions, which I critique here, require parties to adopt and post a privacy policy, but otherwise contain no normative requirements. In other words, they do not hold political parties to any particular rules or norms regarding their collection, use or disclosure of personal information. There is also no provision for independent oversight. The only complaint that can be made – to the Commissioner of Elections – is about the failure to adopt and post a privacy policy. The federal government has expressed surprise at the negative reaction these proposed amendments have received and has indicated a willingness to do something more, but that something has not yet materialized. Meanwhile, it is being reported that the Bill, even as it stands, is not likely to clear the Senate before the summer recess, putting in doubt the ability of any amendments to be in place and implemented in time for the next election.

Second, on June 6, 2018, the Quebec government introduced Bill no 188 in the National Assembly.
If passed, this Bill would give the Quebec Director General of Elections the duty to examine and evaluate the provincial political parties’ practices regarding the collection, use and disclosure of personal information. The Director General must also assess their information security practices. If the Bill is passed into law, he will be required to report his findings to the National Assembly no later than October 1, 2019, and may make any recommendations in that report that he considers appropriate in the circumstances. The Bill also modifies the laws applicable to municipal and school board elections so that the Director General can be directed by the National Assembly to conduct a similar assessment and report back. While this Bill would not make any changes to current practices in the short term, it is clearly aimed at gathering information with a view to informing any future legislative reform that might be deemed necessary.