Teresa Scassa - Blog

[Note: This is the third in my series of posts on the new Bill C-27, which will reform private sector data protection law in Canada and add a new Artificial Intelligence and Data Act. The previous two posts addressed consent and de-identification/anonymization.]

In 2018, a furore erupted over media reports that Statistics Canada (StatCan) sought to collect the financial data of half a million Canadians from Canadian banks to generate statistical data. Reports also revealed that it had already collected a substantial volume of personal financial data from credit agencies. The revelations led to complaints to the Privacy Commissioner, who carried out an investigation and issued an interim and a final report. One outcome was that StatCan worked with the Office of the Privacy Commissioner of Canada to develop a new approach to the collection of such data. Much more recently, there were expressions of public outrage when media reported that the Public Health Agency of Canada (PHAC) had acquired de-identified mobility data about Canadians from Telus in order to inform its response to the COVID-19 pandemic. This led to hearings before the ETHI Standing Committee of the House of Commons and resulted in a report with a series of recommendations.

Both of these instances involved attempts by government institutions or agencies to make use of existing private sector data to enhance their analyses or decision-making. Good policy is built on good data; we should support and encourage the responsible use of data by government in its decision-making. At the same time, however, there is clearly a deep vein of public distrust in government – particularly when it comes to personal data – that cannot be ignored. Addressing this distrust requires both transparency and strong protection for privacy.

Bill C-27, introduced in Parliament in June 2022, proposes a new Consumer Privacy Protection Act to replace the aging Personal Information Protection and Electronic Documents Act (PIPEDA). As part of the reform, this private sector data protection bill contains provisions that are tailored to address the need of government – as well as the commercial data industry – to access personal data in the hands of the private sector.

Two provisions in C-27 are particularly relevant here: sections 35 and 39. Section 35 deals specifically with the sharing of private sector data for the purposes of statistics and research; s. 7(3)(f) of PIPEDA contains a similar exception. Section 39, which deals with the use of data for “socially beneficial purposes”, is entirely new. Both s. 35 and s. 39 were in the predecessor to C-27, Bill C-11. Only section 35 has been changed since C-11 – a small change that significantly broadens its scope.

Section 35 of Bill C-27 provides:

35 An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the disclosure is made for statistical purposes or for study or research purposes and those purposes cannot be achieved without disclosing the information;

(b) it is impracticable to obtain consent; and

(c) the organization informs the Commissioner of the disclosure before the information is disclosed.

This provision would enable the kind of data sharing by the private sector that was involved in the StatCan example mentioned above, and that was previously enabled by s. 7(3)(f) of PIPEDA. As is currently the case under PIPEDA, s. 35 would allow for the sharing of personal information without an individual’s knowledge or consent. It is important to note that there is no requirement that the personal information be de-identified or anonymized in any way (see my earlier post on de-identification and anonymization here). The remainder of s. 35 imposes the only limitations on such sharing. One of these relates to purpose. The sharing must be for “statistical purposes” (but note that StatCan is not the only organization that engages in statistical activities, and such sharing is not limited to StatCan). It can also be for “study or research purposes”. Bill C-11, like PIPEDA, had referred to “scholarly study or research purposes”. The removal of ‘scholarly’ substantially enlarges the scope of this provision (for example, market research and voter profile research would no doubt count). There is a further qualifier – the statistical, study, or research purposes have to be ones that “cannot be achieved without disclosing the information”. However, they do not have to be ‘socially beneficial’ (although there is an overarching provision in s. 5 that requires that the purposes for collecting, using or disclosing personal information be ones that a ‘reasonable person would consider appropriate in the circumstances’). Section 35(b) (as is the case under PIPEDA’s s. 7(3)(f)) also requires that it be impracticable to obtain consent. This is not really much of a barrier. If you want to use the data of half a million individuals, for example, it is really not practical to seek their consent. Finally, the organization must inform the Commissioner of the disclosure prior to it taking place. This provides a thin film of transparency. Another nod and a wink to transparency is found in s. 62(2)(b), which requires organizations to provide a ‘general account’ of how they apply “the exceptions to the requirement to obtain an individual’s consent under this Act”.

Quebec’s Loi 25 also addresses the use of personal information in the hands of the private sector for statistical and research purposes without individual consent. Unlike Bill C-27, it contains more substantive guardrails:

21. A person carrying on an enterprise may communicate personal information without the consent of the persons concerned to a person or body wishing to use the information for study or research purposes or for the production of statistics.

The information may be communicated if a privacy impact assessment concludes that

(1) the objective of the study or research or of the production of statistics can be achieved only if the information is communicated in a form allowing the persons concerned to be identified;

(2) it is unreasonable to require the person or body to obtain the consent of the persons concerned;

(3) the objective of the study or research or of the production of statistics outweighs, with regard to the public interest, the impact of communicating and using the information on the privacy of the persons concerned;

(4) the personal information is used in such a manner as to ensure confidentiality; and

(5) only the necessary information is communicated.

The requirement of a privacy impact assessment (PIA) in Loi 25 is important, as is the condition that this assessment consider the goals of the research or statistical activity in relation to the public interest and to the impact on individuals. Loi 25 also contains important limitations on how much information is shared. Bill C-27 addresses none of these issues. At the very least, as is the case under Quebec law, there should be a requirement to conduct a PIA with similar considerations – and to share it with the Privacy Commissioner. Since this is data sharing without knowledge or consent, there could even be a requirement that the PIAs be made publicly available, with appropriate redactions if necessary.

Some might object that there is no need to incorporate these safeguards in the new private sector data protection law since those entities (such as StatCan) who receive the data have their own secure policies and practices in place to protect data. However, under s. 35 there is no restriction on who may receive data for statistical, study or research purposes, and no reason to assume that they have appropriate safeguards in place. If they do, then the PIA can reflect this.

Section 39 addresses the sharing of de-identified personal information for socially beneficial purposes. Presumably, this would be the provision under which, in the future, mobility data might be shared with an agency such as PHAC. Under s. 39:

39 (1) An organization may disclose an individual’s personal information without their knowledge or consent if

(a) the personal information is de-identified before the disclosure is made;

(b) the disclosure is made to

(i) a government institution or part of a government institution in Canada,

(ii) a health care institution, post-secondary educational institution or public library in Canada,

(iii) any organization that is mandated, under a federal or provincial law or by contract with a government institution or part of a government institution in Canada, to carry out a socially beneficial purpose, or

(iv) any other prescribed entity; and

(c) the disclosure is made for a socially beneficial purpose.

(2) For the purpose of this section, socially beneficial purpose means a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

This provision requires that shared information be de-identified, although, as noted in my earlier post, de-identification in Bill C-27 no longer means what it did in C-11. The data shared may have only direct identifiers removed, leaving individuals easily identifiable. The disclosure must be for socially beneficial purposes, and it must be to a specified or prescribed entity. I commented on the identical provision in C-11 here, so I will not repeat those earlier concerns in detail. They remain unaddressed in Bill C-27. The most significant gap is the lack of a requirement for a data governance agreement between the parties, based upon the kinds of considerations that would be relevant in a privacy impact assessment.

Where the sharing is to be with a federal government institution, the Privacy Act should provide additional protection. However, the Privacy Act is itself an antediluvian statute that has long been in need of reform. It is worth noting that while the doors to data sharing are opened in Bill C-27, many of the necessary safeguards – at least where government is concerned – are left for another statute in the hands of another department, one that lies who-knows-where in the government’s legislative agenda (although rumours are that we might see a Bill this fall [Warning: holding your breath could be harmful to your health.]). In its report on the sharing of mobility data with PHAC, ETHI calls for much greater transparency about data use on the part of the Government of Canada, and also calls for enhanced consultation with the Privacy Commissioner prior to engaging in this form of data collection. Apart from the fact that these pieces will not be in place – if at all – until the Privacy Act is reformed, the exceptions in sections 35 and 39 of C-27 apply to organizations and institutions outside the federal government, and thus can involve institutions and entities not subject to the Privacy Act. Guardrails should be included in C-27 (as they are, for example, in Loi 25); yet, they are absent.

As noted earlier, there are sound reasons to facilitate the use of personal data to aid in data-driven decision-making that serves the public interest. However, any such use must protect individual privacy. Beyond this, there is also a collective privacy dimension to the sharing of even anonymized human-derived data. This should also not be ignored. It requires greater transparency and public engagement, along with appropriate oversight by the Privacy Commissioner. Bill C-27 facilitates use without adequately protecting privacy – collective or individual. Given the already evident lack of trust in government, this seems either tone-deaf or deeply cynical.


This is the second post in a series on Bill C-27, a bill introduced in Parliament in June 2022 to reform Canada's private sector data protection law. The first post, on consent provisions, is found here.

In a data-driven economy, data protection laws are essential to protect privacy. In Canada, the proposed Consumer Privacy Protection Act in Bill C-27 will, if passed, replace the aging Personal Information Protection and Electronic Documents Act (PIPEDA) to govern the collection, use and disclosure of personal information by private sector organizations. Personal information is defined in Bill C-27 (as it was in PIPEDA) as “information about an identifiable individual”. The concept of identifiability of individuals from information has always been an important threshold issue for the application of the law. According to established case law, if an individual can be identified directly or indirectly from data, alone or in combination with other available data, then those data are personal information. Direct identification comes from the presence of unique identifiers that point to specific individuals (for example, a name or a social insurance number). Indirect identifiers are data that, if combined with other available data, can lead to the identification of individuals. To give a simple example, a postal code on its own is not a direct identifier of any particular individual, but in a data set with other data elements such as age and gender, a postal code can lead to the identification of a specific individual. In the context of that larger data set, the postal code can constitute personal information.
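To make the identifiability threshold concrete, here is a minimal, hypothetical sketch (all names, values and field labels are invented for illustration) of how quasi-identifiers such as postal code, age and gender can be joined against an auxiliary data source to re-identify records from which the direct identifier has been stripped:

```python
# Toy illustration with invented data: removing a direct identifier (the name)
# does not prevent re-identification when quasi-identifiers remain.

health_records = [  # "de-identified": the direct identifier (name) has been stripped
    {"postal_code": "K1A 0B1", "age": 42, "gender": "F", "diagnosis": "asthma"},
    {"postal_code": "T5K 2P7", "age": 67, "gender": "M", "diagnosis": "diabetes"},
]

voter_list = [  # auxiliary, publicly available data with names attached
    {"name": "A. Tremblay", "postal_code": "K1A 0B1", "age": 42, "gender": "F"},
    {"name": "B. Singh", "postal_code": "T5K 2P7", "age": 67, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postal_code", "age", "gender")


def reidentify(records, auxiliary):
    """Link 'de-identified' records back to named individuals by matching
    on the combination of quasi-identifiers."""
    matches = []
    for rec in records:
        key = tuple(rec[k] for k in QUASI_IDENTIFIERS)
        for aux in auxiliary:
            if tuple(aux[k] for k in QUASI_IDENTIFIERS) == key:
                matches.append((aux["name"], rec["diagnosis"]))
    return matches


print(reidentify(health_records, voter_list))
# [('A. Tremblay', 'asthma'), ('B. Singh', 'diabetes')]
```

In a data set of this kind, the postal code, age and gender are each innocuous on their own; it is the combination, in context, that makes them personal information.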

As the desire to access and use more data has grown in the private (and public) sector, the concepts of de-identification and anonymization have become increasingly important in dealing with personal data that have already been collected by organizations. The removal of both direct and indirect identifiers from personal data can protect privacy in significant ways. PIPEDA did not define ‘de-identify’, nor did it create particular rules around the use or disclosure of de-identified information. Bill C-11, the predecessor to C-27, addressed de-identified personal information, and contained the following definition:

de-identify means to modify personal information — or create information from personal information — by using technical processes to ensure that the information does not identify an individual or could not be used in reasonably foreseeable circumstances, alone or in combination with other information, to identify an individual

This definition was quite inclusive (information created from personal information, for example, would include synthetic data). Bill C-11 set a relative standard for de-identification – in other words, it accepted that de-identification was sufficient if the information could not be used to identify individuals “in reasonably foreseeable circumstances”. This was reinforced by s. 74 which required organizations that de-identified personal information to use measures that were proportionate to the sensitivity of the information and the way in which the information was to be used. De-identification did not have to be perfect – but it had to be sufficient for the context.

Bill C-11’s definition of de-identification was criticized by private sector organizations that wanted de-identified data to fall outside the scope of the Act. In other words, they sought either an exemption from the application of the law for de-identified personal information, or a separate category of “anonymized” data that would be exempt from the law. According to this view, if data cannot be linked to an identifiable individual, then they are not personal data and should not be subject to data protection law. For their part, privacy advocates were concerned about the very real re-identification risks, particularly in a context in which there is a near endless supply of data and vast computing power through which re-identification can take place. These concerns are supported by research (see also here and here). The former federal Privacy Commissioner recommended that it be made explicit that the legislation would apply to de-identified data.

The changes in Bill C-27 reflect the power of the industry lobby on this issue. Bill C-27 creates separate definitions for anonymized and de-identified data. These are:

anonymize means to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.

[. . .]

de-identify means to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains. [my emphasis]

Organizations will therefore be pleased that there is now a separate category of “anonymized” data, although such data must be irreversibly and permanently modified to ensure that individuals are not identifiable. This is harder than it sounds: even with synthetic data, for example, some minimal risk of re-identification remains. An important concern, therefore, is whether the government is actually serious about this absolute standard, whether it will water it down by amendment before the bill is enacted, or whether it will let interpretation and argument around ‘generally accepted best practices’ soften it up. To ensure the integrity of this provision, the law should enable the Privacy Commissioner to play a clear role in determining what counts as anonymization.

Significantly, under Bill C-27, information that is ‘anonymized’ would be out of scope of the statute. This is made clear in a new s. 6(5) which provides that “this Act does not apply in respect of personal information that has been anonymized”. The argument to support this is that placing data that are truly anonymized out of scope of the legislation creates an incentive for industry to anonymize data, and anonymization (if irreversible and permanent) is highly privacy protective. Of course, similar incentives can be present if more tailored exceptions are created for anonymized data without it falling ‘out of scope’ of the law.

Emerging and evolving concepts of collective privacy take the view that there should be appropriate governance of the use of human-derived data, even if it has been anonymized. Another argument for keeping anonymized data in scope relates to the importance of oversight, given re-identification risks. Placing anonymized data outside the scope of data protection law is contrary to the recent recommendations of the ETHI Standing Committee of the House of Commons following its hearings into the use of de-identified private sector mobility data by the Public Health Agency of Canada. ETHI recommended that the federal laws be amended “to render these laws applicable to the collection, use, and disclosure of de-identified and aggregated data”. Aggregated data are generally considered to be data that have been anonymized. The trust issues referenced by ETHI when it comes to the use of de-identified data reinforce the growing importance of notions of collective privacy. It might therefore make sense to keep anonymized data within the scope of the legislation (with appropriate exceptions to maintain incentives for anonymization), leaving room for governance of anonymization.

Bill C-27 also introduces a new definition of “de-identify”, which refers to modifying data so that individuals cannot be directly identified. Direct identification has come to mean identification through specific identifiers such as names, or assigned numbers. The new definition of ‘de-identify’ in C-27 suggests that simply removing direct identifiers will suffice to de-identify personal data (a form of what, in the GDPR, is referred to as pseudonymization). Thus, according to this definition, as long as direct identifiers are removed from a data set, an organization can use data without knowledge or consent in certain circumstances, even though specific individuals might still be identifiable from those data. While it will be argued that these circumstances are limited, the exception for sharing for ‘socially beneficial purposes’ is disturbingly broad given this weak definition (more to come on this in a future blog post). In addition, the government can add new exceptions to the list by regulation.
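For illustration only, here is a minimal sketch of what de-identification in this narrow, pseudonymization-like sense could look like (the secret key, record layout and field names are hypothetical): the direct identifier is replaced with a keyed token, but the organization retains the key and the remaining quasi-identifiers stay in the record, so individuals may still be identifiable:

```python
# A sketch of pseudonymization in the GDPR sense (assumed example, not drawn
# from the Bill): the direct identifier is replaced with a keyed token.

import hashlib
import hmac

SECRET_KEY = b"retained-by-the-organization"  # hypothetical key kept by the organization


def pseudonymize(record):
    """Replace the direct identifier with a keyed token; quasi-identifiers remain."""
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()[:12]
    out = dict(record)
    del out["name"]        # direct identifier removed
    out["token"] = token   # replaced with a pseudonym derived from the retained key
    return out             # age, postal code, etc. are left untouched


record = {"name": "C. Martin", "age": 35, "postal_code": "H2X 1Y4"}
print(pseudonymize(record))
# Anyone holding SECRET_KEY can regenerate the token for a known name and
# re-link the record; removing the direct identifier alone is not anonymization.
```

This is the kind of transformation that the C-27 definition appears to capture: it reduces, but does not eliminate, identifiability.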

The reference in the definition of ‘de-identify’ only to direct identification is meant to be read alongside s. 74 of Bill C-27, which provides:

74 An organization that de-identifies personal information must ensure that any technical and administrative measures applied to the information are proportionate to the purpose for which the information is de-identified and the sensitivity of the personal information.

Section 74 remains unchanged from Bill C-11, where it made more sense, since that bill defined de-identification in terms of both direct and indirect identifiers, using a relative standard. In the context of the new definition of ‘de-identify’, it is jarring, since de-identification according to the new definition requires only the removal of direct identifiers. What this perhaps means is that although the definition of de-identify requires only the removal of direct identifiers, actual de-identification might mean something else. This is not how definitions are supposed to work.

In adopting these new definitions, the federal government sought to align its terminology with that used in Quebec’s Loi 25 that reformed its public and private sector data protection laws. The Quebec law provides, in a new s. 23, that:

[. . .]

For the purposes of this Act, information concerning a natural person is anonymized if it is, at all times, reasonably foreseeable in the circumstances that it irreversibly no longer allows the person to be identified directly or indirectly.

Information anonymized under this Act must be anonymized according to generally accepted best practices and according to the criteria and terms determined by regulation.

Loi 25 also provides that data is de-identified (as opposed to anonymized) “if it no longer allows the person concerned to be directly identified”. At first glance, it seems that Bill C-27 has adopted similar definitions – but there are differences. First, the definition of anonymization in Loi 25 uses a relative standard (not an absolute one as in C-27). It also makes specific reference not just to generally accepted best practices, but to criteria and terms to be set out in regulation, whereas in setting standards for anonymization, C-27 refers only to “generally accepted best practices”. [Note that in its recommendations following its hearings into the use of de-identified private sector mobility data by the Public Health Agency of Canada, the ETHI Committee of Parliament recommended that federal data protection laws should include “a standard for de-identification of data or the ability for the Privacy Commissioner to certify a code of practice in this regard.”]

Second, and most importantly, in the Quebec law, anonymized data does not fall outside the scope of the legislation – instead, a relative standard is used to provide some flexibility while still protecting privacy. Anonymized data are still subject to governance under the law, even though the scope of that governance is limited. Further, under the Quebec law, recognizing that the definition of de-identification is closer to pseudonymization, the uses of de-identified data are more restricted than they are in Bill C-27.

Further, in an eye-glazing bit of drafting, s. 2(3) of Bill C-27 provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

This is a way of saying that de-identified personal information remains within the scope of the Act except where it does not. Yet, data that has only direct identifiers stripped from it should always be considered personal information, since the reidentification risk, as noted above, could be very high. What s. 2(3) does is allow de-identified data to be treated as anonymized (out of scope) in some circumstances. For example, s. 21 allows organizations to use ‘de-identified’ personal information for internal research purposes without knowledge or consent. The reference in s. 2(3) amplifies this by providing that such information is not considered personal information. As a result, presumably, other provisions in Bill C-27 would not apply. This might include data breach notification requirements – yet if information is only pseudonymized and there is a breach, it is not clear why such provisions should not apply. Pseudonymization might provide some protection to those affected by a breach, although it is also possible that the key was part of the breach, or that individuals remain re-identifiable in the data. The regulator should have jurisdiction.

Subsection 22(1) allows for the use and even the disclosure of de-identified personal information between parties to a prospective business transaction. In this context, the de-identified information is not considered personal information (according to s. 2(3)) and so the only safeguards are those set out in s. 22(1) itself. Bizarrely, s. 22(1) makes reference to the sensitivity of the information – requiring safeguards appropriate to its sensitivity, even though it is apparently not considered personal information. De-identified (not anonymized) personal information can also be shared without knowledge or consent for socially beneficial purposes under s. 39(1). (I have a blog post coming on this provision, so I will say no more about it here, other than to note that given the definition of ‘de-identify’, such sharing seems rash and the safeguards provided are inadequate).

Section 55 provides for a right of erasure of personal information; since information stripped of direct identifiers is not personal information for the purposes of section 55 (according to s. 2(3)), this constitutes an important limitation on the right of erasure. If data are only pseudonymized, and if the organization retains the key, then why is there no right of erasure? Section 56 addresses the accuracy of personal information. Personal information de-identified according to the definition in C-27 would also be exempted from this requirement.

In adopting the definitions of ‘anonymize’ and ‘de-identify’, the federal government meets a number of public policy objectives. It enhances the ability of organizations to make use of data. It also better aligns the federal law with Quebec’s law (at least at the definitional level). The definitions may also create scope for other privacy protective technologies such as pseudonymization (which is what the definition of de-identify in C-27 probably really refers to) or different types of encryption. But the approach it has adopted creates the potential for confusion, for risks to privacy, and for swathes of human-derived data to fall ‘outside the scope’ of data protection law. The government view may be that, once you stir all of Bill C-27’s provisions into the pot, and add a healthy dose of “trust us”, the definition of “de-identify” and its exceptions are not as problematic as they appear at first glance. Yet, this seems like a peculiar way to draft legislation. The definition should say what it is supposed to say, rather than have its defects mitigated by a smattering of other provisions in the law and by faith in the goodness of others. In any event, the exceptions still lean towards facilitating data use rather than protecting privacy.

In a nutshell, C-27 has downgraded the definition of de-identification from C-11. It has completely excluded anonymized data from the scope of the Act, but has provided little or no guidance beyond “generally accepted best practices” to address anonymization. If an organization claims that its data are anonymized and therefore outside the scope of the legislation, it will be an uphill battle to get past the threshold issue of anonymization in order to have a complaint considered under what would be the new law. The organization can simply dig in and challenge the jurisdiction of the Commissioner to investigate the complaint.

All personal data, whether anonymized or ‘de-identified’ should remain within the scope of the legislation. Specific exceptions can be provided where necessary. Exceptions in the legislation for the uses of de-identified information without knowledge or consent must be carefully constrained and reinforced with safeguards. Further, the regulator should play a role in establishing standards for anonymization and de-identification. This may involve consultation and collaboration with standards-setting bodies, but references in the legislation must be to more than just “generally accepted best practices”.


Note: this is the first in a series of blog posts on Bill C-27, also known as An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act.

Bill C-27 is a revised version of the former Bill C-11 which died on the order paper just prior to the last federal election in 2021. The former Privacy Commissioner called Bill C-11 ‘a step backwards’ for privacy, and issued a series of recommendations for its reform. At the same time, industry was also critical of the Bill, arguing that it risked making the use of data for innovation too burdensome.

Bill C-27 takes steps to address the concerns of both privacy advocates and those from industry with a series of revisions, although there is much that is not changed from Bill C-11. Further, it adds an entirely new statute – the Artificial Intelligence and Data Act (AIDA) – meant to govern some forms of artificial intelligence. This series of posts will assess a number of the changes found in Bill C-27. It will also consider the AIDA.

_________________________________

The federal government has made it clear that it considers consent to be a cornerstone of Canadian data protection law. They have done so in the Digital Charter, in Bill C-11 (the one about privacy), and in the recent reincarnation of data protection reform legislation in Bill C-27. On the one hand, consent is an important means by which individuals can exercise control over their personal information; on the other hand, it is widely recognized that the consent burden has become far too high for individuals who are confronted with long, complex and often impenetrable privacy policies at every turn. At the same time, organizations that see new and emerging uses for already-collected data seek to be relieved of the burden of obtaining fresh consents. The challenge in privacy law reform has therefore been to make consent meaningful, while at the same time reducing the consent burden and enabling greater use of data by private and public sector entities. Bill C-11 received considerable criticism for how it dealt with consent (see, for example, my post here, and the former Privacy Commissioner’s recommendations to improve consent in C-11 here). Consent is back, front and centre in Bill C-27, although with some important changes.

Section 15 of Bill C-27 reaffirms that consent is the default rule for collection, use or disclosure of personal information, although the statute creates a long list of exceptions to this general rule. One criticism of Bill C-11 was that it removed the definition of consent in s. 6.1 of PIPEDA, which provided that consent “is only valid if it is reasonable to expect that an individual to whom the organization’s activities are directed would understand the nature, purpose and consequences of the collection, use or disclosure of the personal information to which they are consenting.” Instead, Bill C-11 simply relied upon a list of information that must be provided to individuals prior to consent. Bill C-27’s compromise is found in the addition of a new s. 15(4) which requires that the information provided to individuals to obtain their consent must be “in plain language that an individual to whom the organization’s activities are directed would reasonably be expected to understand.” This has the added virtue of ensuring, for example, that privacy policies for products or services directed at youth or children must take into account the sophistication of their audience. The added language is not as exigent as s. 6.1 (for example, s. 6.1 requires an understanding of the nature, purpose and consequences of the collection, use and disclosure, while s. 15(4) requires only an understanding of the language used), so it is still a downgrading of consent from the existing law. It is, nevertheless, an improvement over Bill C-11.

A modified s. 15(5) and a new s. 15(6) also muddy the consent waters. Subsection 15(5) provides that consent must be express unless it is appropriate to imply consent. The exception to this general rule is the new subsection 15(6) which provides:

(6) It is not appropriate to rely on an individual’s implied consent if their personal information is collected or used for an activity described in subsection 18(2) or (3).

Subsections 18(2) and (3) list business activities for which personal data may be collected or used without an individual’s knowledge or consent. At first glance, it is unclear why it is necessary to provide that implied consent is inappropriate in such circumstances, since no consent is needed at all. However, because s. 18(1) sets out certain conditions for collection without knowledge or consent, it is likely that the goal of s. 15(6) is to ensure that no organization circumvents the limited guardrails in s. 18(1) by relying instead on implied consent. The potential breadth of s. 18(3) (discussed below), combined with s. 2(3), makes it difficult to distinguish between the two, in which case the cautious organization will comply with s. 18(3) rather than rely on implied consent in any event.

The list of business activities for which no knowledge or consent is required for the collection or use of personal information is pared down from that in Bill C-11. The list in C-11 was controversial, as it included some activities which were so broadly stated that they would have created gaping holes in any consent requirement (see my blog post on consent in C-11 here). The worst of these have been removed. This is a positive development, although the provision creates a backdoor through which other exceptions can be added by regulation. Further, Bill C-27 has added language to s. 12(1) to clarify that the requirement that the collection, use or disclosure of personal information must be “only in a manner and for purposes that a reasonable person would consider appropriate in the circumstances” applies “whether or not consent is required under this Act.”

[Note that although the exceptions in s. 18 are to knowledge as well as consent, s. 62(2)(b) of Bill C-27 will require that an organization provide plain language information about how it makes use of personal information, and how it relies upon exceptions to consent “including a description of any activities referred to in subsection 18(3) in which it has a legitimate interest”.]

Bill C-27 does, however, contain an entirely new exception permitting the collection or use of personal data without knowledge or consent. This is found in s. 18(3):

18 (3) An organization may collect or use an individual’s personal information without their knowledge or consent if the collection or use is made for the purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use and

(a) a reasonable person would expect the collection or use for such an activity; and

(b) the personal information is not collected or used for the purpose of influencing the individual’s behaviour or decisions.

So as not to leave this as open-ended as it seems at first glance, a new s. 18(4) sets conditions precedent for the collection or use of personal information for ‘legitimate purposes’:

(4) Prior to collecting or using personal information under subsection (3), the organization must

(a) identify any potential adverse effect on the individual that is likely to result from the collection or use;

(b) identify and take reasonable measures to reduce the likelihood that the effects will occur or to mitigate or eliminate them; and

(c) comply with any prescribed requirements.

Finally, a new s. 18(5) requires the organization to keep a record of its assessment under s. 18(4) and it must be prepared to provide a copy of this assessment to the Commissioner at the Commissioner’s request.

It is clear that industry had the ear of the Minister when it comes to the addition of ss. 18(3). A ‘legitimate interest’ exception was sought in order to enable the use of personal data without consent in a broader range of circumstances. Such an exception is found in the EU’s General Data Protection Regulation (GDPR). Here is how it is worded in the GDPR:

6(1) Processing shall be lawful only if and to the extent that at least one of the following applies:

[. . . ]

(f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.

Under the GDPR, an organization that relies upon legitimate interests instead of consent must take into account, among other things:

6(4) [. . . ]

(a) any link between the purposes for which the personal data have been collected and the purposes of the intended further processing;

(b) the context in which the personal data have been collected, in particular regarding the relationship between data subjects and the controller;

(c) the nature of the personal data, in particular whether special categories of personal data are processed, pursuant to Article 9, or whether personal data related to criminal convictions and offences are processed, pursuant to Article 10;

(d) the possible consequences of the intended further processing for data subjects;

(e) the existence of appropriate safeguards, which may include encryption or pseudonymisation.

Bill C-27’s ‘legitimate interests’ exception is different in important respects from that in the GDPR. Although Bill C-27 gives a nod to the importance of privacy as a human right in a new preamble, the human rights dimensions of privacy are not particularly evident in the body of the Bill. The ‘legitimate interests’ exception is available unless there is an “adverse effect on the individual” that is not outweighed by the organization’s legitimate interest (as opposed to the ‘interests or fundamental freedoms of the individual’ under the GDPR). Presumably it will be the organization that does this initial calculation. One of the problems in data protection law has been quantifying adverse effects on individuals. Data breaches, for example, are shocking and distressing to those impacted, but it is often difficult to show actual damages flowing from the breach, and moral damages have been considerably restricted by courts in many cases. Some courts have even found that the ordinary stress and inconvenience of a data breach are not compensable harm, since such breaches have become a routine part of life. If ‘adverse effects’ on individuals are reduced to quantifiable effects, the ‘legitimate interests’ exception will be far too broad.

This is not to say that the ‘legitimate interests’ provision in Bill C-27 is incapable of facilitating data use while at the same time protecting individuals. There is clearly an attempt to incorporate some checks and balances, such as reasonable expectations and a requirement to identify and mitigate any adverse effects. But what C-27 does is take something that, in the GDPR, was meant to be a limited exception to consent and make it potentially a more mainstream basis for the use of personal data without knowledge or consent. It is able to do this because rather than reinforce the centrality and importance of privacy rights, it places privacy on an uneasy par with commercial interests in using personal data. The focus on ‘adverse effects’ runs the risk of equating privacy harm with quantifiable harm, thus trivializing the human and social value of privacy.

On March 30, 2022, Alberta introduced Bill 13, the Financial Innovation Act. The Bill aims to create a regulatory sandbox for innovators in the growing financial technology (fintech) sector. This is a sector in which there is already considerable innovation and development – with more to come as Canada moves towards open banking. (Canada just appointed a new open banking lead on March 22, 2022). In addition to open banking, we are seeing a proliferation of cryptocurrencies, growing interest in central bank digital currencies, and platform-based digital currencies.

The concept of a regulatory sandbox is gaining traction in different sectors. Some forms of innovation in the new digital and data-driven economy run up against regulatory frameworks designed for more conventional forms of technological development. The existing regulatory system becomes a barrier to innovation – not because the innovation is necessarily harmful or undesirable, but simply because it does not fit easily within the conventional framework. A regulatory sandbox is meant to give innovators some regulatory flexibility to develop their products or services, while at the same time allowing regulators to experiment with tailoring regulation to the emerging technological environment.

Some examples of regulatory sandboxes in Canada include one developed by the Canadian Securities Administrators largely for the emerging fintech sector (the CSA Regulatory Sandbox), a Health Canada regulatory sandbox for advanced therapeutic products, and the Law Society of Ontario’s legal tech regulatory sandbox. These are sandboxes developed by regulatory bodies that provide flexibility within their existing regulatory frameworks. What is different about Alberta’s Bill 13 is that it legislates a broader regulatory sandbox. The Bill provides for qualified participants to receive exemptions from rules within multiple existing regulatory frameworks, including rules under the Loan & Trust Corporation Act and the Credit Union Act (among others – see s. 8 of the Bill) – as well as provincial privacy legislation.

Access to and use of personal data will be necessary for fintech apps, and existing privacy legislation can create challenges in this context. Certainly, for open banking to work in Canada, the federal government’s Personal Information Protection and Electronic Documents Act will need to be amended. Bill C-11, which died on the order paper in late 2021, contained an amendment that would have allowed for the creation of sector-specific data mobility frameworks via regulation. An amendment of this kind, for example, would have facilitated open banking. With such an approach, privacy protection is not abandoned; rather, it is customized.

Alberta’s Bill 13 appears to be designed to provide some form of customization in order to protect privacy while facilitating innovation. Section 5 of the Bill provides that when a company seeks an exemption from provisions of the Personal Information Protection Act (PIPA), this application for exemption must be reviewed by Alberta’s Information and Privacy Commissioner. The Commissioner is empowered to require the company to provide it with all necessary information to assess the request. The Commissioner may then approve or deny the exemption outright, or approve it subject to terms and conditions. The Commissioner may also withdraw any previously granted approval. The role of the IPC is thus firmly embedded in the legislation. Section 8, which empowers the Minister to grant a certificate of acceptance to a sandbox participant, provides that the Minister may grant an exemption to any provision of PIPA only with the prior written approval of the Commissioner and only on terms and conditions jointly agreed to by the Minister and the Commissioner. Similarly, the Minister’s power to add, amend or revoke an exemption to PIPA in s. 10(4) of the Act can only be exercised in conjunction with the Information and Privacy Commissioner. The Commissioner retains the power to withdraw a written approval (s. 10(5)) and doing so will require the Minister to promptly revoke the exemption.

Bill 13 also provides for transparency with respect to regulatory sandbox exemptions via requirements to publish information about sandbox participants, exemptions, terms and conditions imposed on them, expiry dates, and any amendments, revocations or cancellations of certificates of acceptance.

Given the federal-provincial division of powers, the scope of Bill 13 is somewhat limited, as it cannot provide exemptions to federal regulatory requirements. While Credit Unions are under provincial jurisdiction, banks are federally regulated, and the federal private sector data protection law – PIPEDA – also applies to interprovincial flows of data. Nevertheless, s. 19 of the Bill provides for reciprocal agreements between Alberta and “other governments that have a regulatory sandbox framework, or agencies of those other governments”. There is room here for collaboration and co-operation.

Bill 13 is clearly designed to attract fintech startups to Alberta by providing a more supple regulatory environment in which to operate. This is an interesting bill, and one to watch as it moves through the legislature in Alberta. Not only is it a model for a legislated regulatory sandbox, but its approach to addressing privacy issues is also worth some examination.

On February 28, 2022, the Ontario government introduced Bill 88, titled: An Act to enact the Digital Platform Workers’ Rights Act, 2022 and to amend various Acts. The Bill is now at the second reading stage.

Most of the attention received by the bill has been directed towards provisions that establish new rights for digital platform workers. The focus of this post is on a set of amendments relating to electronic monitoring of employees.

Bill 88 will amend the Employment Standards Act, 2000 to require employers with 25 or more employees to put in place written policies regarding employee monitoring. The policies must specify whether the employer monitors employees electronically, how and in what circumstances it does so, and for what purposes. Policies must include the date that they were prepared along with any dates of amendment. Regulations may also specify additional information to be contained in the policies. Employers will also have to provide – within set time limits – copies of the policy to each employee, as well as copies of any policies that have been revised or updated. There are policy record-keeping requirements as well.

The term “electronic monitoring” is not defined in the Bill, and there may be issues regarding its scope. Certainly, it would seem likely that audio and video surveillance, as well as key-stroke monitoring and other forms of digital surveillance would be captured by the concept. Less obvious to some employers might be things such as access cards that allow employees to enter and access certain areas of the workplace. Such cards track employee movements, and thus may also count as electronic monitoring. Beyond this, the bill provides significant scope for changes to obligations via regulation – the government may exempt employees from the requirement to have policies for certain forms of electronic monitoring in specified circumstances. Regulations may also prohibit some forms of electronic monitoring.

Given the extent to which employees are increasingly subject to electronic monitoring in the workplace – including in work-from-home contexts – these new provisions are welcome. They will provide employees with a right to know how and when they are being digitally monitored and for what purposes. However, the rights do not go much beyond this. Employees can only complain if they do not receive a copy of their employer’s policy within the specified timelines; the bill states that “a person may not file a complaint alleging a contravention of any other provision of this section or have such a complaint investigated” (s. 41.1.1(6)). Further, the bill places no limits on what employers may do with the information gathered. Section 41.1.1(7) provides: “nothing in this section affects or limits an employer’s ability to use information obtained through electronic monitoring of its employees”.

In 2021, the Ontario government floated the idea of enacting its own private sector data protection law. Such a law would have most likely included provisions protecting employee workplace privacy. Indeed, the province’s White Paper proposed the following:

An organization may collect, use or disclose personal information about an employee if the information is collected, used or disclosed solely for the purposes of,

(a) establishing, managing or terminating an employment or volunteer-work relationship between the organization and the individual; or

(b) managing a post-employment or post-volunteer-work relationship between the organization and the individual.

Although such a provision gives significant room for employers to collect data about their employees, including through electronic means, there is at least a purpose limitation that is absent from the Bill 88 amendments. Including employee personal information under a general data protection law would also have brought with it other protections contained within such legislation, including the right to complain of any perceived breach. All employees – not just those in work forces of 25 or more employees – would have some rights with respect to data collected through electronic surveillance; such information would have to be collected, used or disclosed solely for the specified workplace-related purposes. Such an obligation would also be measurable against the general reasonableness requirement in privacy legislation.

The amendments to the Employment Standards Act, 2000 to address electronic surveillance of employees are better than nothing at all. Yet they do not go nearly as far as privacy legislation would in protecting employees’ privacy rights and in providing them with some recourse if they feel that employment surveillance goes beyond what is reasonably required in the employment context. With a provincial election looming, it is highly unlikely that we will see a private sector data protection law introduced in the near future. One might also wonder whether the current government has lost its appetite entirely for such a move. In its submissions on the province’s White Paper, for example, the Ontario Chamber of Commerce chastised the province for considering the introduction of privacy legislation that would impose an additional burden on businesses at a time when they were seeking to recover from the effects of the pandemic. The Chamber advocated instead for reform of the federal government’s private sector data protection law, which would build on the existing law and provide some level of national harmonization. Yet there are places where the federal law does not and cannot reach – and employment outside of federal sectors is one of them. Privacy protections for workers in Ontario must be grounded in provincial law; the proposed changes to the Employment Standards Act, 2000 fall far short of what a basic privacy law would provide.

On December 7, 2021, the privacy commissioners of Quebec, British Columbia and Alberta issued orders against the US-based company Clearview AI, following its refusal to voluntarily comply with the findings in the joint investigation report they issued along with the federal privacy commissioner on February 3, 2021.

Clearview AI gained worldwide attention in early 2020 when a New York Times article revealed that its services had been offered to law enforcement agencies for use in a largely non-transparent manner in many countries around the world. Clearview AI’s technology also has the potential for many different applications including in the private sector. It built its massive database of over 10 billion images by scraping photographs from publicly accessible websites across the Internet, and deriving biometric identifiers from the images. Users of its services upload a photograph of a person. The service then analyzes that image and compares it with the stored biometric identifiers. Where there is a match, the user is provided with all matching images and their metadata, including links to the sources of each image.
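As a purely conceptual sketch (not a description of Clearview’s actual system; the embedding values, URLs and threshold below are invented), a matching service of this general kind can be thought of as a similarity search over stored biometric vectors, each linked back to the metadata of its source image:

```python
# Conceptual sketch only: nearest-match search over face embeddings.
# The vectors, URLs and threshold are hypothetical placeholders.

import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Database built by scraping: each entry pairs a biometric vector derived from
# a photo with metadata, including a link back to the source page.
database = [
    {"embedding": [0.91, 0.10, 0.40], "source_url": "https://example.com/profile/123"},
    {"embedding": [0.12, 0.88, 0.47], "source_url": "https://example.com/news/photo-456"},
]


def search(probe_embedding, threshold=0.95):
    """Return source metadata for every stored image whose vector matches the probe."""
    return [entry["source_url"] for entry in database
            if cosine_similarity(probe_embedding, entry["embedding"]) >= threshold]


# A user uploads a photo; in a real system, a face-embedding model would
# convert it into a vector like the probe below.
probe = [0.90, 0.11, 0.41]
print(search(probe))  # ['https://example.com/profile/123']
```

The privacy implications flow from the database itself: once biometric vectors and source links are stored at scale, any uploaded photo can be traced back to wherever the matching images were originally posted.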

Clearview AI has been the target of investigation by data protection authorities around the world. France’s Commission Nationale de l'Informatique et des Libertés has found that Clearview AI breached the General Data Protection Regulation (GDPR). Australia and the UK conducted a joint investigation which similarly found the company to be in violation of their respective data protection laws. The UK commissioner has since issued a provisional view, stating its intent to levy a substantial fine. Legal proceedings are currently underway in Illinois, a state which has adopted biometric privacy legislation. Canada’s joint investigation report issued by the federal, Quebec, B.C. and Alberta commissioners found that Clearview AI had breached the federal Personal Information Protection and Electronic Documents Act, as well as the private sector data protection laws of each of the named provinces.

The Canadian joint investigation set out a series of recommendations for Clearview AI. Specifically, it recommended that Clearview AI cease offering its facial recognition services in Canada, “cease the collection, use and disclosure of images and biometric facial arrays collected from individuals in Canada”, and delete any such data in its possession. Clearview AI responded by saying that it had temporarily ceased providing its services in Canada, and that it was willing to continue to do so for a further 18 months. It also indicated that if it offered services in Canada again, it would require its clients to adopt a policy regarding facial recognition technology, and it would offer an audit trail of searches.

On the second and third recommendations, Clearview AI responded that it was simply not possible to determine which photos in its database were of individuals in Canada. It also reiterated its view that images found on the Internet are publicly available and free for use in this manner. It concluded that it had “already gone beyond its obligations”, and that while it was “willing to make some accommodations and met some of the requests of the Privacy Commissioners, it cannot commit itself to anything that is impossible and or [sic] required by law.” (Letter reproduced at para 3 of Order P21-08).

In this post I consider three main issues that flow from the orders issued by the provincial commissioners. The first relates to the cross-border reach of Canadian law. The second relates to enforcement (or lack thereof) in the Canadian context, particularly as compared with what is available in other jurisdictions such as the UK and the EU. The third issue relates to the interest shown by the commissioners in a compromise volunteered by Clearview AI in the ongoing Illinois litigation – and what this might mean for Canadians’ privacy.

1. Jurisdiction

Clearview AI maintains that Canadian laws do not apply to it. It argues that it is a US-based company with no physical presence in Canada. Although it initially provided its services to Canadian law enforcement agencies (see this CBC article for details of the use of Clearview by Toronto Police Services), it had since ceased to do so – thus, it no longer had clients in Canada. It scraped its data from platform companies such as Facebook and Instagram, and while many Canadians have accounts with such companies, Clearview’s scraping activities involved access to data hosted on platforms outside of Canada. It therefore argued not only that it did not operate in Canada, but also that it had no ‘real and substantial’ connection to Canada.

The BC Commissioner did not directly address this issue. In his Order, he found a hook for jurisdiction by referring to the personal data as having been “collected from individuals in British Columbia without their consent”, although it is clear there was no direct collection. He also noted Clearview’s active contemplation of resuming its services in Canada. Alberta’s Commissioner made only a brief reference to jurisdiction, simply stating that “Provincial privacy legislation applies to any private sector organization that collects, uses and discloses information of individuals within that province” (at para 12). The Quebec Commissioner, by contrast, gave a thorough discussion of the jurisdictional issues. In the first place, she noted that some of the images came from public Quebec sources (e.g., newspaper websites). She also observed that nothing indicates that images scraped from Quebec sources have been removed from the database; they therefore continue to be used and disclosed by the company.

Commissioner Poitras cited the Federal Court decision in Lawson for the principle that PIPEDA could apply to a US-based company that collected personal information from Canadian sources – so long as there is a real and substantial connection to Canada. She found a connection to Quebec in the free accounts offered to, and used by, Quebec law enforcement officials. She noted that the RCMP, which operates in Quebec, had also been a paying client of Clearview’s. When Clearview AI was used by clients in Quebec, those clients uploaded photographs to the service in the search for a match. This also constituted a collection of personal information by Clearview AI in Quebec.

Commissioner Poitras found that the location of Clearview’s business and its servers is not a determinative jurisdictional factor for a company that offers its services online around the world, and that collects personal data from the Internet globally. She found that Clearview AI’s database was at the core of its services, and part of that database was made up of data from Quebec and about Quebeckers. Clearview had offered its service in Quebec, and its activities had a real impact on the privacy of Quebeckers. Commissioner Poitras noted that millions of images of Quebeckers were appropriated by Clearview without the consent of the individuals in the images; these images were used to build a global biometric facial recognition database. She found that it was particularly important not to create a situation where individuals are denied recourse under quasi-constitutional laws such as data protection laws. These elements in combination, in her view, would suffice to create a real and substantial connection.

Commissioner Poitras did not accept that Clearview’s suspension of Canadian activities changed the situation. She noted that information that had been collected in Quebec remained in the database, which continued to be used by the company. She stated that a company could not appropriate the personal information of a substantial number of Quebeckers, commercialise this information, and then avoid the application of the law by saying they no longer offered services in Quebec.

The jurisdictional questions are both important and thorny. This case is different from cases such as Lawson and Globe24h.com, where the connections with Canada were more straightforward. In Lawson, there was clear evidence that the company offered its services to clients in Canada. It also directly obtained some of its data about Canadians from Canadian sources. In Globe24h.com, there was likewise evidence that Canadians were being charged by the Romanian company to have their personal data removed from the database. In addition, the data came from Canadian court decisions that were scraped from websites located in Canada. In Clearview AI, while some of the scraped data may have been hosted on servers located in Canada, most were scraped from offshore social media platform servers. If Clearview AI stopped offering its services in Canada and stopped scraping data from servers located in Canada, what recourse would Canadians have? The Quebec Commissioner attempts to address this question, but her reasons are based on factual connections that might not be present in the future, or in cases involving other data-scraping respondents. What is needed is a theory of real and substantial connection that specifically addresses the scraping of data from third-party websites, contrary to those websites’ terms of use and to the legal expectations of their users, and that can anchor the jurisdiction of Canadian law even when the scraper has no other connection to Canada.

Canada is not alone in facing these jurisdictional issues – Australia’s orders to Clearview AI are currently under appeal, and the jurisdiction of the Australian Commissioner to make such orders will be one of the issues on appeal. A jurisdictional case – one that is convincing not just to privacy commissioners but to the foreign courts that may one day have to determine whether to enforce Canadian decisions – needs to be made.

 

2. Enforcement

At the time the facts of the Clearview AI investigation arose, all four commissioners had limited enforcement powers. The three provincial commissioners could issue orders requiring an organization to change its practices. The federal commissioner has no order-making powers, but can apply to Federal Court to ask that court to issue orders. The relative impotence of the commissioners is illustrated by Clearview’s hubristic response, cited above, in which it claims to have already “gone beyond its obligations”. Clearly, it considers that nothing the commissioners had to say on the matter amounted to an obligation.

The Canadian situation can be contrasted with that in the EU, where commissioners’ orders requiring organizations to change their non-compliant practices are now reinforced by the power to levy significant administrative monetary penalties (AMPs). The same situation exists in the UK. There, the data commissioner has just issued a preliminary enforcement notice and a proposed fine of £17M against Clearview AI. As noted earlier, the enforcement situation is beginning to change in Canada – Quebec’s newly amended legislation permits the levying of substantial AMPs. When some version of Bill C-11 is reintroduced in Parliament in 2022, it will likely also contain the power to levy AMPs. BC and Alberta may eventually follow suit. When this happens, the challenge will be first, to harmonize enforcement approaches across those jurisdictions; and second, to ensure that these penalties can meaningfully be enforced against offshore companies such as Clearview AI.

On the enforcement issue, it is perhaps also worth noting that the orders issued by the three Commissioners in this case are all slightly different. The Quebec Commissioner orders Clearview AI to cease collecting images of Quebeckers without consent, and to cease using these images to create biometric identifiers. She also orders the destruction, within 90 days of receipt of the order, of all of the images collected without the consent of Quebeckers, as well as the destruction of the biometric identifiers. Alberta’s Commissioner orders that Clearview cease offering its services to clients in Alberta, cease the collection and use of images and biometrics collected from individuals in Alberta, and delete the same from its databases. BC’s order prohibits the offering of Clearview AI’s services, using data collected from British Columbians without their consent, to clients in British Columbia. He also orders that Clearview AI use “best efforts” to cease its collection, use and disclosure of images and biometric identifiers of British Columbians without their consent, as well as to use the same “best efforts” to delete images and biometric identifiers collected without consent.

It is to these “best efforts” that I next turn.

 

3. The Illinois Compromise

All three Commissioners make reference to a compromise offered by Clearview AI in the course of ongoing litigation in Illinois under Illinois’ Biometric Information Privacy Act. By referring to “best efforts” in his Order, the BC Commissioner seems to be suggesting that something along these lines would be an acceptable compromise in his jurisdiction.

In its response to the Canadian commissioners, Clearview AI raised the issue that it cannot easily know which photographs in its database are of residents of particular provinces, particularly since these are scraped from the Internet as a whole – and often from social media platforms hosted outside Canada.

Yet Clearview AI has indicated that it has changed some of its business practices to avoid infringing Illinois law. This includes “cancelling all accounts belonging to any entity based in Illinois” (para 12, BC Order). It also includes blocking from any searches all images in the Clearview database that are geolocated in Illinois. In the future, it also offers to create a “geofence” around Illinois. This means that it “will not collect facial vectors from any scraped images that contain metadata associating them with Illinois” (para 12 BC Order). It will also “not collect facial vectors from images stored on servers that are displaying Illinois IP addresses or websites with URLs containing keywords such as “Chicago” or “Illinois”.” Clearview apparently offers to create an “opt-out” mechanism whereby people can ask to have their photos excluded from the database. Finally, it will require its clients to not upload photos of Illinois residents. If such a photo is uploaded, and it contains Illinois-related metadata, no search will be performed.
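
To see how narrow this kind of metadata-based filtering is, consider a minimal sketch (in Python) of the sort of exclusion rule described above. This is purely illustrative – the field names, keywords and logic are my own assumptions, not Clearview AI’s actual system:

ILLINOIS_KEYWORDS = {"illinois", "chicago"}

def should_exclude_from_database(image):
    """Return True if a scraped image would be excluded under a geofence-style rule."""
    # Rule 1: exclude images whose embedded metadata geolocates them in Illinois.
    if image.get("gps_state", "").lower() == "illinois":
        return True
    # Rule 2: exclude images scraped from URLs containing Illinois-related keywords.
    source_url = image.get("source_url", "").lower()
    if any(keyword in source_url for keyword in ILLINOIS_KEYWORDS):
        return True
    # Anything without such metadata passes through - which is why photos with
    # geolocation stripped, or taken elsewhere, cannot benefit from the compromise.
    return False

# Example: a photo of an Illinois resident posted without GPS metadata is NOT excluded.
photo = {"source_url": "https://social.example.com/user123/photo.jpg", "gps_state": ""}
print(should_exclude_from_database(photo))  # prints: False

As the sketch makes plain, a rule of this kind keys on metadata about the photo and its source, not on who is actually in the image.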

The central problem with accepting the ‘Illinois compromise’ is that it allows a service built on illegally scraped data to continue operating with only a reduced privacy impact. Ironically, it also requires individuals who wish to benefit from this compromise to provide more personal data in their online postings. Many people actually suppress geolocation information from their photographs to protect their privacy, yet the ‘Illinois compromise’ can only exclude photos that contain geolocation data. Even with geolocation turned on, it would not exclude the vacation pics of any BC residents taken outside of BC (for example). Further, limiting the scraping of images from Illinois-based sites will not prevent the photos of Illinois-based individuals from being included within the database a) if they are already in there, and b) if the images are posted on social media platforms hosted elsewhere.

Clearview AI is a business built upon data collection practices that are illegal in a large number of countries outside the US. The BC Commissioner is clearly of the opinion that a compromise solution is the best that can be hoped for, and he may be right in the circumstances. Yet it is a bitter pill to think that such flouting of privacy laws will ultimately be rewarded, as Clearview gets to keep and commercialize its facial recognition database. Accepting such a compromise could limit the harms of the improper exploitation of personal data, but it does not stop the exploitation of that data in all circumstances. And even this unhappy compromise may be out of reach for Canadians given the rather toothless nature of our current laws – and the jurisdictional challenges discussed earlier.

If anything, this situation cries out for global and harmonized solutions. Notably, it requires the US to do much more to bring its wild-west approach to personal data exploitation in line with the approaches of its allies and trading partners. It will also require better cooperation on enforcement across borders. It may also call for social media giants to take more responsibility when it comes to companies that flout their terms and conditions to scrape their sites for personal data. The Clearview AI situation highlights these issues – as well as the dramatic impacts that data misuse may have on privacy as personal data continues to be exploited for use in powerful AI technologies.

Published in Privacy

 

The Federal Court has issued its decision in a reference case brought by the Privacy Commissioner of Canada regarding the interpretation of his jurisdiction under the Personal Information Protection and Electronic Documents Act (PIPEDA). The reference relates to a complaint against Google about its search engine, and implicating the so-called ‘right to be forgotten’. Essentially, the complainant in that case seeks an order requiring Google to de-index certain web pages that show up in searches for his name and that contain outdated and inaccurate sensitive information. Google’s response to the complaint was to challenge the jurisdiction of the Commissioner to investigate. It argued that its search engine functions were not a ‘commercial activity’ within the meaning of PIPEDA and that PIPEDA therefore did not apply. It also argued that its search engine was a journalistic or literary function which is excluded from the application of PIPEDA under s. 4(2)(c). The Canadian Broadcasting Corporation (CBC) and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) both intervened.

Associate Chief Justice Gagné ruled that the Commissioner has jurisdiction to deal with the complaint. In this sense, this ruling simply enables the Commissioner to continue with his investigation of the complaint and to issue his Report of Findings – something that could no doubt generate fresh fodder for the courts, since a finding that Google should de-index certain search results would raise interesting freedom of expression issues. Justice Gagné’s decision, however, focuses on whether the Commissioner has jurisdiction to proceed. Her ruling addresses 1) the commercial character of Google’s search engine activity; 2) whether Google’s activities are journalistic in nature; and 3) the relevance of the quasi-constitutional status of PIPEDA. I will consider each of these in turn.

1) The Commercial Character of Google’s Search Engine

Largely for division of powers reasons, PIPEDA applies only to the collection, use or disclosure of personal information in the course of “commercial activity”. Thus, if an organization can demonstrate that it was not engaged in commercial activity, it can escape the application of the law.

Justice Gagné found that Google collected, used and disclosed information in offering its search engine functions. The issue, therefore, was whether it engaged in these practices “in the course of commercial activity”. Justice Gagné noted that Google is one of the most profitable companies in existence, and that most of its profits came from advertising revenues. Although Google receives revenues when a user clicks on an ad that appears in search results, Google argued that not all search results generate ads – this depends on whether other companies have paid to have the particular search terms trigger their ads. In the case of a search for an ordinary user’s name, it is highly unlikely that the search will trigger ads in the results. However, Justice Gagné noted that advertisers can also target ads to individual users of Google’s search engine based on data that Google has collected about that individual from their online activities. According to Justice Gagné, “even if Google provides free services to the content providers and the user of the search engine, it has a flagrant commercial interest in connecting these two players.” (at para 57) She found that search engine users trade their personal data in exchange for the search results that are displayed when they conduct a search. Their data is, in turn, used in Google’s profit-generating activities. She refused to ‘dissect’ Google’s activities into those that are free to users and those that are commercial, stating that the “activities are intertwined, they depend on one another, and they are all necessary components of that business model.” (at para 59) She also noted that “unless it is forced to do so, Google has no commercial interest in de-indexing or de-listing information from its search engine.” (at para 59)

2) Is Google’s Search Engine Function Journalistic in Nature?

PIPEDA does not apply to activities that are exclusively for journalistic purposes. This is no doubt to ensure that PIPEDA does not unduly interfere with the freedom of the press. Google argued that its search engine allowed users to find relevant information, and that in providing these services it was engaged in journalistic purposes.

Justice Gagné observed that depending upon the person, a search by name can reveal a broad range of information from multiple and diverse sources. In this way, Google facilitates access to information, but, in her view, it does not perform a journalistic function. She noted: “Google has no control over the content of search results, the search results themselves express no opinion, and Google does not create the content of the search results.” (at para 82) She adopted the test set out in an earlier decision in A.T. v. Globe24h.com, whereby an activity qualifies as journalism if “its purpose is to (1) inform the community on issues the community values, (2) it involves an element of original production, and (3) it involves a ‘self-conscious discipline calculated to provide an accurate and fair description of facts, opinion and debate at play within a situation’.” (at para 83) Applying the test to Google’s activities, she noted that Google did more than just inform a community about matters of interest, and that it did not create or produce content. She observed as well that “there is no effort on the part of Google to determine the fairness or the accuracy of the search results.” (at para 85) She concluded that the search engine functions were not journalistic activity – or that, if they were, they were not exclusively so. As a result, the journalistic purposes exception did not exempt Google from the application of PIPEDA.

3) The Relevance of the Quasi-Constitutional Status of PIPEDA

The Supreme Court of Canada has ruled that both public and private sector data protection laws in Canada have quasi-constitutional status. What this means in practical terms is less clear. Certainly it means that they are recognized as laws that protect rights and/or values that are of fundamental importance to a society. For example, in Lavigne, the Supreme Court of Canada stated that the federal Privacy Act served as “a reminder of the extent to which the protection of privacy is necessary to the preservation of a free and democratic society” (at para 25). In United Food and Commercial Workers, the Supreme Court of Canada found that Alberta’s private sector data protection law also had quasi-constitutional status and stated: “The ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity and privacy. These are fundamental values that lie at the heart of a democracy.” (at para 19)

What this means in practical terms is increasingly important as questions are raised about the approach to take to private sector data protection laws in their upcoming reforms. For example, the Privacy Commissioner of Canada has criticized Bill C-11 (a bill to reform PIPEDA) for not adopting a human rights-based approach to privacy – one that is explicitly grounded in human rights values. By contrast, Ontario, in its White Paper proposing a possible private sector data protection law, indicates that it will adopt a human rights-based approach. One issue at the federal level might be the extent to which the quasi-constitutional nature of a federal data protection law does the work of a human rights-based approach when it comes to shaping interpretation of the statute. The decision in this reference case suggests that the answer is ‘no’. In fact, the Attorney-General of Canada specifically intervened on this point, arguing that “[t]he quasi-constitutional nature of PIPEDA does not transform or alter the proper approach to statutory interpretation” (at para 30). Justice Gagné agreed. The proper approach is set out in this quote from Driedger in Lavigne (at para 25): “the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.”

In this case, the relevant words of the Act – “commercial activity” and “journalistic purposes” – were interpreted by the Court in accordance with ordinary interpretive principles. I do not suggest that these interpretations are wrong or problematic. I do find it interesting, though, that this decision makes it clear that an implicit human rights-based approach is far less effective than making such an approach explicit through actual wording in the legislation. This is a point that may be relevant as we move forward with the PIPEDA reform process.

Next Steps

Google may, of course, appeal this decision to the Federal Court of Appeal. If it does not, the next step will be for the Commissioner to investigate the complaint and to issue his Report of Findings. The Commissioner has no order-making powers under PIPEDA. If an order is required to compel Google to de-index any sites, this will proceed via a hearing de novo in Federal Court. We are still, therefore, a long way from a right to be forgotten in Canada.

Published in Privacy

 

In June 2021, Ontario issued a White Paper that sets out some proposals, including suggested wording, for a new private sector data protection law for the province. This is part of its overall digital and data strategy. Input on the White Paper is sought by August 3, 2021.

It makes sense to compare the Ontario proposal to the federal government’s Bill C-11 (which will not make it through Parliament in the present sitting, and which may get some necessary attention over the summer) because, if C-11 passes, any Ontario law would have to be found to be substantially similar to it. The Ontario proposal has clearly been drafted with Bill C-11 in mind. That said, the idea is not to simply copy Bill C-11. The White Paper shows areas where Bill C-11 may be largely copied, but other places where Ontario plans to modify it, add something new, or go in a different direction. Of course, feedback is sought on the contents of the White Paper, and a bill, if and when it is introduced in the Legislature, may look different from what is currently proposed – depending on what feedback the government receives.

I have prepared a Table that compares the Ontario proposal with Bill C-11, with some added commentary. The Table can be found here, with the caveat that the commentary is preliminary – and was generated quite quickly.

Please be sure to respond to the consultation by the August 3 deadline!

Published in Privacy

 

A joint ruling from the federal Privacy Commissioner and his provincial counterparts in Quebec, B.C., and Alberta has found that U.S.-based company Clearview AI breached Canadian data protection laws when it scraped photographs from social media websites to create the database it used to support its facial recognition technology. According to the report, the database contained the biometric data of “a vast number of individuals in Canada, including children.” Investigations of complaints under public sector data protection laws about police use of Clearview AI’s services are still ongoing.

The Commissioners’ findings are unequivocal. The information collected by Clearview AI is sensitive biometric data. Express consent was required for its collection and use, and Clearview AI did not obtain consent. The company’s argument that consent was not required because the information was publicly available was firmly rejected. The Commissioners described Clearview AI’s actions as constituting “the mass identification and surveillance of individuals by a private entity in the course of commercial activity.” (at para 72) In defending itself, Clearview AI put forward arguments that were clearly at odds with Canadian law. It also resisted the jurisdiction of the Canadian Commissioners, notwithstanding the fact that it collected the personal data of Canadians and offered its commercial services to Canadian law enforcement agencies. Clearview AI did not accept the Commissioners’ findings, and “has not committed to following” the recommendations.

At the time of this report, Bill C-11, a bill to reform Canada’s current data protection law, is before Parliament. The goal of this post is to consider what difference Bill C-11 might make to the outcome of complaints like this one should it be passed into law. I consider both the substantive provisions of the bill and its new enforcement regime.

Consent

Like the current Personal Information Protection and Electronic Documents Act (PIPEDA), consent is a core requirement of Bill C-11. To collect, use or disclose personal information, an organization must either obtain valid consent, or its activities must fall into one of the exceptions to consent. In the Clearview AI case, there was no consent, and the disputed PIPEDA exception to the consent requirement was the one for ‘publicly available personal information’. While this exception seems broad on its face, to qualify, the information must fall within the parameters set out in the Regulations Specifying Publicly Available Personal Information. These regulations focus on certain categories of publicly available information – such as registry information (land titles, for example), court registries and decisions, published telephone directory information, and public business information listings. In most cases, the regulations provide that the use of the information must also relate directly to the purposes for which it was made public. The regulations also contain an exception for “personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.” The interpretation of this provision was central to Clearview AI’s defense of its practices. It argued that social media postings were “personal information that appears in a publication.” The Commissioners adopted a narrow interpretation consistent with this being an exception in quasi-constitutional legislation. They distinguished between the types of publications mentioned in the exception and uncurated, dynamic social-media sites. The Commissioners noted that unlike newspapers or magazines, individuals retain a degree of control over the content of their social media sites. They also observed that to find that all information on the internet falls within the publicly available information exception “would create an extremely broad exemption that undermines the control users may otherwise maintain over their information at the source.” (at para 65) Finally, the Commissioners observed that the exception applied to information provided by the data subject, but that photographs were scraped by Clearview AI regardless of whether they were posted by the data subject or by someone else.

Would the result be any different under Bill C-11? In section 51, Bill C-11 replicates the “publicly available information exception” for collection, use or disclosure of personal information. Like PIPEDA, it also leaves the definition of this term to regulations. However, Canadians should be aware that there has been considerable pressure to expand the regulations so that personal information shared on social media sites is exempted from the consent requirement. For example, in past hearings into PIPEDA reform, the House of Commons ETHI Committee at one point appeared swayed by industry arguments that PIPEDA should be amended to include websites and social media within this exception. Bill C-11 does not resolve this issue; but if passed, it might well be on the table in the drafting of regulations. If nothing else, the Clearview AI case provides a stark illustration of just how important this issue is to the privacy of Canadians.

However, data scrapers may be able to look elsewhere in Bill C-11 for an exception to consent. Bill C-11 contains new exceptions to consent for “business operations” which I have criticized here. One of these exceptions would almost certainly be relied upon by a company in Clearview AI’s position if the bill were passed. The exceptions allow for the collection and use of personal information without an individual’s knowledge or consent if, among other things, it is for “an activity in the course of which obtaining the individual’s consent would be impracticable because the organization does not have a direct relationship with the individual.” (18(2)(e)). A company that scrapes data from social media sites to create a facial recognition database would find it impracticable to get consent because it has no direct relationship with any of the affected individuals. The exception seems to fit.

That said, s. 18(1) does set some general guardrails. The one that seems relevant in this case is that the exceptions to consent are only available where “a reasonable person would expect such a collection or use for that activity”. Hopefully, collection of images from social media websites to fuel facial recognition technology would not be something that a reasonable person would expect; certainly, the Commissioners would not find it to be so. In addition, section 12 of Bill C-11 requires that information be collected or used “only for purposes that a reasonable person would consider appropriate in the circumstances” (a requirement carried over from PIPEDA, s. 5(3)). In their findings, the Commissioners ruled that the collection and use of images by Clearview AI was for a purpose that a reasonable person would find inappropriate. The same conclusion could be reached under Bill C-11.

There is reason to be cautiously optimistic, then, that Bill C-11 would lead to the same result on a similar set of facts: the conclusion that the wholesale scraping of personal data from social media sites to build a facial recognition database without consent is not permitted. However, the scope of the exception in s. 18(2)(e) is still a matter of concern. The more exceptions that an organization pushing the boundaries feels it can wriggle into, the more likely it is to engage in privacy-compromising activities. In addition, there may be a range of different uses for scraped data, and “what a reasonable person would expect” is a rather squishy buffer between privacy and wholesale data exploitation.

Enforcement

Bill C-11 is meant to substantially increase enforcement options when it comes to privacy. Strong enforcement is particularly important in cases where organizations are not interested in accepting the guidance of regulators. This is certainly the case with Clearview AI, which expressly rejected the Commissioners’ findings. Would Bill C-11 strengthen the regulator’s hand?

The Report of Findings in this case reflects the growing trend of having the federal and provincial commissioners that oversee private sector data protection laws jointly investigate complaints involving issues that affect individuals across Canada. This cooperation is important as it ensures consistent interpretation of what is meant to be substantially similar legislation across jurisdictions. Nothing in Bill C-11 would prevent the federal Commissioner from continuing to engage in this cross-jurisdictional collaboration – in fact, subsection 116(2) expressly encourages it.

Some will point to the Commissioner’s new order-making powers as another way to strengthen his enforcement hand. Under Bill C-11, the Commissioner would be able to direct an organization to take measures to comply with the legislation or to cease activities that are in contravention of the legislation (s. 92(2)). This is a good thing. However, these orders are subject to appeal to the new Personal Information Protection and Data Tribunal (the Tribunal). By contrast, orders of the Commissioners of BC and Alberta are final, subject only to judicial review.

In addition, it is not just the orders of the Commissioner that are appealable under C-11, but also his findings. This raises questions about how the new structure under Bill C-11 might affect cooperative inquiries like the one in this case. Conclusions shared with other Commissioners can be appealed by respondents to the Tribunal, which owes no deference to the Commissioner on questions of law. As I and others have already noted, the composition of the Tribunal is somewhat concerning; Bill C-11 would require only a minimum of one member of the tribunal to have expertise in privacy law. While it is true that proceedings before the Federal Court were de novo, and thus the Commissioner was afforded no formal deference in that context either, access to Federal Court was more limited than the wide-open appeals route to the Tribunal. The Bill C-11 structure really seems to shift the authority to interpret and apply the law away from the Commissioner and to the mysterious and not necessarily expert Tribunal.

Bill C-11 also has a much-touted new power to issue substantial fines for breach of the legislation. Interestingly, however, this does not seem to be the kind of case in which a fine would be available. Fines, provided for under s. 93(1) of Bill C-11, are available only with respect to the breach of certain obligations listed in that provision. Playing fast and loose with the requirement to obtain consent is not one of them. This is interesting, given the supposedly central place consent plays within the Bill. Further thought might need to be given to the list of ‘fine-able contraventions’.

Overall, then, although C-11 could lead to a very similar result on similar facts, the path to that result may be less certain. It is also not clear that there is anything in the enforcement provisions of the legislation that will add heft to the Commissioner’s findings. In practical terms, the decisions that matter will be those of the Tribunal, and it remains to be seen how well this Tribunal will serve Canadians.

Published in Privacy

 

This post is the third in a series that considers the extent to which the Digital Charter Implementation Act (Bill C-11) by overhauling Canada’s federal private sector data protection law, implements the principles contained in the government’s Digital Charter. This post addresses the fourth principle of the Charter: Transparency, Portability and Interoperability, which provides that “Canadians will have clear and manageable access to their personal data and should be free to share or transfer it without undue burden.”

Europe’s General Data Protection Regulation (GDPR) introduced the concept of data portability (data mobility) as part of an overall data protection framework. The essence of the data portability right in article 20 of the GDPR is:

(1) The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided [...]

In this version, the data flows from one controller to another via the data subject. There is no requirement for data to be in a standard, interoperable format – it need only be in a common, machine-readable format.

Data portability is not a traditional part of data protection; it largely serves consumer protection and competition law interests. Nevertheless, it is linked to data protection through the concept of individual control over personal information. For example, consider an individual who subscribes to a streaming service for audiovisual entertainment. The service provider acquires considerable data about that individual and their viewing preferences over a period of time. If a new company enters the market, it might offer a better price, but the consumer may be put off by the lack of accurate or helpful recommendations or special offers/promotions tailored to their tastes. The difference in the service offered lies in the fact that the incumbent has much more data about the consumer. A data mobility right, in theory, allows an individual to port their data to the new entrant. This creates a more level playing field, serving the individual’s interest and the broader public interest in competition.
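
As a concrete (and entirely hypothetical) illustration of what a GDPR-style export might look like for the streaming example, here is a short Python sketch. The field names are invented; the point is simply that JSON satisfies ‘structured, commonly used and machine-readable’ without guaranteeing that two services share a schema:

import json

# Hypothetical data the subscriber has provided to (or generated with) the incumbent service.
provided_data = {
    "email": "subscriber@example.com",
    "display_name": "J. Doe",
    "watch_history": ["Documentary A", "Series B", "Film C"],
}

# A JSON export is structured, commonly used and machine-readable, but nothing
# obliges a new entrant to use the same field names or structure, so the receiving
# service may still have to map these fields onto its own data model.
print(json.dumps(provided_data, indent=2))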

The fourth pillar of the Digital Charter clearly recognizes the idea of control that underlies data mobility, suggesting that individuals should be free to share or transfer their data “without undue burden.” Bill C-11 contains a data mobility provision that is meant to implement this pillar of the Charter. However, this provision is considerably different from what is found in the GDPR.

One of the challenges with the GDPR’s data portability right is that not all data will be seamlessly interoperable from one service provider to another. This could greatly limit the usefulness of the data portability right. It could also impose a significant burden on SMEs who might face demands for the production and transfer of data that they are not sufficiently resourced to meet. It might also place individuals’ privacy at greater risk, potentially spreading their data to multiple companies, some of which might be ill-equipped to provide the appropriate privacy protection.

These concerns may explain why Bill C-11 takes a relatively cautious approach to data mobility. Section 72 of the Consumer Privacy Protection Act portion of Bill C-11 provides:

72 Subject to the regulations, on the request of an individual, an organization must as soon as feasible disclose the personal information that it has collected from the individual to an organization designated by the individual, if both organizations are subject to a data mobility framework provided under the regulations. [My emphasis]

It is important to note that in this version of mobility, data flows from one organization to another rather than through the individual, as is the case under the GDPR. The highlighted portion of s. 72 makes it clear that data mobility will not be a universal right. It will be available only where a data mobility framework is in place. Such frameworks will be provided for in regulations. Section 120 of Bill C-11 states:

120 The Governor in Council may make regulations respecting the disclosure of personal information under section 72, including regulations

(a) respecting data mobility frameworks that provide for

(i) safeguards that must be put in place by organizations to enable the secure disclosure of personal information under section 72 and the collection of that information, and

(ii) parameters for the technical means for ensuring interoperability in respect of the disclosure and collection of that information;

(b) specifying organizations that are subject to a data mobility framework; and

(c) providing for exceptions to the requirement to disclose personal information under that section, including exceptions related to the protection of proprietary or confidential commercial information.

The regulations will provide for frameworks that impose security safeguards on participating organizations and ensure data interoperability. Paragraph 120(b) also suggests that not all organizations within a sector will automatically be entitled to participate in a mobility framework; they may have to qualify by demonstrating that they meet certain security and technical requirements. A final (and interesting) limitation on the mobility framework relates to exceptions to disclosure where information that might otherwise be considered personal information is also proprietary or confidential commercial information. This gets at the distinction between raw and derived data – data collected directly from individuals might be subject to the mobility framework, but profiles or analytics based on that data might not – even if they pertain to the individual.
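
To illustrate the raw/derived distinction in the abstract (all field names below are hypothetical), a s. 72 disclosure might reach the first half of a record like this but not the second, if the regulations carve out proprietary analytics under paragraph 120(c):

# Hypothetical record held by an organization participating in a mobility framework.
customer_record = {
    # Collected directly from the individual - the natural target of a s. 72 request.
    "collected": {
        "name": "J. Doe",
        "postal_code": "K1A 0A6",
        "transactions": ["2021-04-02 grocery -85.10", "2021-04-05 payroll +2000.00"],
    },
    # Derived by the organization's own analytics - potentially excepted as proprietary
    # or confidential commercial information, even though it is about the individual.
    "derived": {
        "credit_risk_score": 0.82,
        "churn_likelihood": "high",
    },
}

def mobility_disclosure(record):
    """Return only the information collected from the individual, mirroring the narrower
    wording of s. 72 ('personal information that it has collected from the individual')."""
    return record["collected"]

print(mobility_disclosure(customer_record))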

It is reasonable to expect that open banking (now renamed ‘consumer-directed finance’) will be the first experiment with data mobility. The federal Department of Finance released a report on open banking in January 2020, and has since been engaged in a second round of consultations. Consumer-directed finance is intended to address the burgeoning fintech industry, which offers many new and attractive digital financial management services to consumers but which relies on access to consumer financial data. Currently (and alarmingly) this need for data is met by fintechs asking individuals to share account passwords so that they can regularly scrape financial data from multiple sources (accounts, credit cards, etc.) in order to offer their services. A regulated framework for data mobility is seen as much more secure, since safeguards can be built into the system, and participants can be vetted to ensure they meet security and privacy standards. Data interoperability between all participants will also enhance the quality of the services provided.
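
The security difference between today’s screen scraping and a regulated mobility framework can be sketched in a few lines. Again, this is a hypothetical illustration – the token model and scope names are my assumptions, not any actual open-banking specification:

from dataclasses import dataclass

# Screen-scraping model: the fintech holds the customer's real banking password,
# giving it unlimited, unscoped, non-expiring access to the account.
@dataclass
class SharedCredentials:
    username: str
    password: str

# Framework model: access flows through a scoped, time-limited, revocable consent token.
@dataclass
class ConsentToken:
    scopes: tuple         # e.g. ("read:transactions",) - nothing outside the scope is reachable
    expires_in_days: int  # consent lapses automatically
    revocable: bool       # the customer can withdraw consent at the bank

def fetch_transactions(token: ConsentToken):
    if "read:transactions" not in token.scopes:
        raise PermissionError("consent does not cover transaction data")
    # In a real framework the bank would verify the token and return only what the
    # consent covers; here we simply simulate that boundary with sample data.
    return ["2021-05-01 coffee -4.50", "2021-05-02 payroll +2000.00"]

token = ConsentToken(scopes=("read:transactions",), expires_in_days=90, revocable=True)
print(fetch_transactions(token))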

If financial services is the first area for development of data mobility in Canada, what other areas for data mobility might Canadians expect? The answer is: not many. The kind of scheme contemplated for open banking has already required a considerable investment of time and energy, and it is not yet ready to launch. Of course, financial data is among the most sensitive of personal data; other schemes might be simpler to design and create. But they will still take a great deal of time. One sector where some form of data mobility might eventually be contemplated is telecommunications. (Note that Australia’s comparable “consumer data right” is being rolled out first with open banking and will be followed by initiatives in the telecommunications and energy sectors).

Data mobility in the CPPA will also be limited by the stringency of its framework model. It is no accident that banking and telecommunications fall within federal jurisdiction. The regulations contemplated by s. 120 go beyond simple data protection and impact how companies do business. The federal government will face serious challenges if it attempts to create data mobility frameworks within sectors or industries under provincial jurisdiction. Leadership on this front will have to come from the provinces. Those with their own private sector data protection laws could choose to address data mobility on their own terms. Quebec has already done this in Bill 64, which would amend its private sector data protection law to provide:

112 [. . .] Unless doing so raises serious practical difficulties, computerized personal information collected from the applicant must, at his request, be communicated to him in a structured, commonly used technological format. The information must also be communicated, at the applicant’s request, to any person or body authorized by law to collect such information.

It remains to be seen what Alberta and British Columbia might decide to do – along with Ontario, if in fact it decides to proceed with its own private sector data protection law. As a result, while there might be a couple of important experiments with data mobility under the CPPA, the data mobility right within that framework is likely to remain relatively constrained.

Published in Privacy