27/08/2020
  • Data privacy statutes around the world are no longer fit for purpose.
  • Adoption of recommendations of the ACCC’s Digital Platforms Inquiry and other current proposals for revision of the Privacy Act 1988 will not fix this problem: the proposals do not envisage moving decisively away from notice and choice as the foundation for data privacy regulation.
  • The often misdescribed ‘gold standard’ of GDPR is not the solution.
  • We need to go back to basics and ask ‘what harms should privacy law address?’, or, as Prof Julie Cohen put it, ‘what is privacy for?’. We then need to redraft our statutes to (at least) protect the rights and interests of individual humans to go about their lives without excessive intrusion upon reasonable expectations of seclusion.

Almost all data privacy statutes around the world are no longer reasonably fit for purpose.

The long list of unfit statutes includes:

  • the Privacy Act 1988 (Cth) and its State and Territory counterparts,
  • the Workplace Surveillance Act 2005 (NSW) and the Workplace Privacy Act 2011 (ACT), and
  • the Surveillance Devices Act 2004 (Cth), the Surveillance Devices Act 2007 (NSW) and all other State and Territory counterparts dealing with surveillance devices and listening devices.

Twenty years into the 21st century, the design specification for 21st century data privacy laws is finally becoming clear.

The problem that needs to be addressed can be simply stated: data privacy statutes are intended to protect human dignity, but instead focus on data, not humans. Humans have an interest (and should have a legal right) in and to reasonable seclusion. Data is completely indifferent as to whether it relates to humans or machines; data has no concept of human frailties and needs. So humans need to make decisions about how and when data about humans is collected and used. Humans within entities that collect and use data about other humans should have regard to the frailties and reasonable expectations of affected humans. I avoid the term ‘data subject’, which is not appropriately respectful of humans.

However, views as to what is a reasonable intrusion into seclusion vary widely, culture by culture, and often within cultures. And some entities, and humans making decisions within those entities, simply do not care, or allow business incentives or self-interest to overwhelm fundamental decency.

For those entities and humans, we need:

  • transparency (daylight is the best antiseptic),
  • accountability of each data handling entity and of specific decision-makers within each entity,
  • appropriate incentives and sanctions, and
  • demonstrated enforcement and consequences.

If the incentives of entities and of the humans within them are not properly aligned, or humans within entities are not individually accountable, we should expect many entities to be undisciplined and unprincipled. Not necessarily or deliberately bad, but undisciplined, leading to bad, sometimes quite unacceptable, outcomes. Australians need look no further than the last five years of reports of Royal Commissions – Indigenous youth in custody, institutional abuse of minors, banks, nursing homes. With the power of entities to visit poor outcomes upon vulnerable people comes a responsibility of greater transparency and accountability.

Most data privacy laws are intended to empower individuals by informing them how data about them may be being collected and used, and thereby enable them to exercise a choice. This foundation of 20th century data privacy regulation was (and remains) variously called ‘notice and consent’, ‘notice and choice’, ‘individual choice’ or ‘privacy self-management’.

The mechanism to give effect to this foundational theory is a requirement that each regulated entity:

  • makes available a privacy policy that explains generally how the entity deals with personal data,
  • provides to an affected individual a more specific and targeted privacy notice at or near the point or time of collection of particular categories of personal data, and
  • seeks consent in relation to collection and uses of certain narrower categories of more ‘sensitive’ personal data.

Critiques of this mechanism focus upon the ‘illusion of consent’, as described by Profs Paul Ohm, Fred Cate and other privacy scholars, or the more recent restatement (by Prof Dan Solove and others) of this illusion as ‘the privacy self-management problem’.

In brief, these criticisms revolve around the problem of expecting affected individuals to properly understand and make a choice about whether to accept an act or practice that affects their privacy, particularly when there is often no practical ability for each of us to say no, or even ‘no to that, but it might be OK if you did it this other way…[insert here]’.

Critiques of ‘notice and choice’ generally suggest that this framework needs to be supplemented, or replaced, by an additional requirement of demonstrated organisational accountability of the entity that is collecting, handling or disclosing personal information about the affected individual, or instituting surveillance of a human (whether identifiable or not).

Many data privacy statutes are deficient in bridging the chasm between ensuring:

  • that a fair description of the purpose and extent of a proposed data collection, use, disclosure or surveillance activity is created and provided to the affected individual, and
  • that the data collection, use, disclosure or surveillance activity is necessary and proportionate to achieve a reasonable outcome, with reasonableness judged by consideration of:
      • the degree of risk and extent of impact upon legitimate expectations of privacy,
      • whether an individual suffers a harm arising from the act or practice, and
      • societal interests (such as the health and safety of other individuals) and the interests of the regulated entity that wants to collect, use or disclose data in a disclosed and properly risk-evaluated way.

Does this sound difficult? It really should not. The much-lauded GDPR of the European Union broadly requires just this. Business in Europe has not ground to a halt. Post-Brexit, free-trade Britain has not shown any interest in ditching or loosening its inheritance of GDPR requirements. Now North America is moving towards enactment of similar laws.

Of course, Australia is different. The situation is worse.

Australian data privacy and surveillance statutes generally do not tie particular statutory requirements back to any stated right of privacy, standard of reasonableness or fairness, or any test as to the necessity or proportionality of a relevant, privacy affecting, act or practice.

Let us go to what should be the basics.

Human dignity requires us to be able to go about our private lives without unreasonable or unknown intrusions into what we do, why we do it, with whom and where. This includes a right to go about in public (including online) without unreasonable intrusions upon our ability to be our private selves in public (including online). Not an absolute right, and not a right to prevail over other rights (such as a right to health, safety and security), but a basic right.

The operation of this right becomes contentious when it bumps up against other rights and interests. Privacy is not easy to define, and is often overlooked because it is not readily defined, but this does not make it any less important to legally recognise and protect data privacy. As the NSW Law Reform Commission stated over a decade ago when recommending a new statutory cause of action for serious invasion of privacy, “[t]o suggest that it is impossible to protect privacy generally in the manner proposed in our Bill because the concept cannot be precisely defined is to succumb to what Lord Reid once described as ‘the perennial fallacy that because something cannot be cut and dried or lightly weighed or measured therefore it does not exist’”. That Commission’s recommendation disappeared without trace, like similar recommendations from all other law reform bodies that have looked at the question since.

The perennial reappearance of calls for a new private cause of action for serious invasion of privacy is due to the manifest need for control over (and self-help empowerment of citizens in relation to) pervasive civil surveillance and manifestly excessive collections of personal information.

The corresponding disappearance has been largely due to a combination of a lack of interest among parliamentarians and the sustained opposition of a few powerful media interests, whose news outlets mischievously present a private right of action for privacy invasion as an existential threat to freedom of journalism, while simultaneously (but elsewhere) asserting that global digital platforms unfairly profit from privileged access to information about the attributes and preferences of individuals.

It is time to confront the shibboleths shrouding a private action for serious invasion of privacy. We still cannot all agree on how to define democracy after more than two thousand years, but we share a broad consensus as to what it is not, and that consensus enables us to entrench democracy in law and make democracy more or less work. Even our literalist High Court of Australia found an implied freedom of political communication (well) hidden under the text of our Australian Constitution. If we do not start taking data privacy seriously as a human right, the slippery slope to a dystopian surveillance state is largely unimpeded by red flags and checkpoints. And after all, the proposals for a private right of action for privacy invasion are framed as responding only to invasions that are serious, deliberate or reckless, and that cannot be justified in the public interest. These proposals offer no free lunch for plaintiffs’ lawyers.

In many advanced democracies, human dignity is embedded in the law as an enforceable human right. That is not the case across Australia. No Australian Parliament has enacted a baseline human rights statute against which privacy impacting acts and practices of Australian entities must be considered. Some Australian States and Territories have charters of human rights which reference privacy as a right that is relevant in consideration and interpretation of rights-affecting statutes. However, relevant Australian data privacy statutes generally do not create sufficient scope for such ‘charter rights’ to be likely to significantly affect a court’s interpretation of the relevant provisions in the data privacy statute.

New readers of the Privacy Act 1988 are often surprised that this statute does not define “privacy” or the circumstances in which an act or practice is to be taken to cause privacy harm to an individual. The Overview in Schedule 1 - Australian Privacy Principles states that Part 1 of the APPs (APP 1 and APP 2) “sets out principles that require APP entities to consider the privacy of personal information, including ensuring that APP entities manage personal information in an open and transparent way”. However, the APPs do not state how APP entities should determine the circumstances in which the rights or interests of individuals in and to privacy are affected, or how to evaluate the nature or extent of harm to those rights or interests for the purpose of applying the APPs. Most operative provisions in the Privacy Act 1988 use privacy as an adjective (occasionally an adverb) in a description of something else: privacy policy, Privacy Act, Australian Privacy Principle, privacy authorities and so on. A reader can carefully read all 330 pages of the Privacy Act 1988 and still not know:

  • what privacy is,
  • the circumstances in which an act or practice is to be taken to cause privacy harm to an individual, or
  • when a prospective privacy harm should be considered a serious harm and subjected to a careful privacy impact assessment.

Now, you may object to my reading of the Privacy Act 1988 and direct me to section 2A, which sets out the objects of the Act as including:

  • “to promote the protection of the privacy of individuals”,
  • “to promote responsible and transparent handling of personal information”, and
  • “to implement Australia’s international obligation in relation to privacy”.

I endorse grand and pious objectives, but (doffin’ m’cap to Norman Lindsay) the proof of the puddin’ is in the eatin’. On a plain reading of the APPs, it is reasonable to ask whether section 2A has any relevance at all to their interpretation. A court applying Australian principles of statutory interpretation to the APPs might not look at section 2A at all.

I suggest that our approach to data privacy laws needs a reboot.

We need to go back to the question: what is privacy for? Whether protected as a human right or not, I suggest that most Australian citizens who actually stop to think about such matters would contend that a right of reasonable seclusion for each human is as fundamental an aspect of the rule of law in our advanced democracy as is the principle of equal treatment before the law. Of course, that right of human seclusion often gives way to other personal and societal rights, such as a right of protection of life and limb of an affected human and of others in our society. However, such protection should prevail only to the extent that its intrusion upon our right of seclusion is reasonably justified as proportionate and necessary. To take an obvious example: facial recognition cameras recognising everyone on every street might solve street crime and reduce terrorism. However, most Australian citizens would consider universal surveillance, at least when coupled with automated facial recognition, to be a patently unreasonable intrusion upon our reasonable expectation of seclusion.

Data privacy statutes should be designed as a sensible tool to protect human expectations of seclusion, whether that expectation is recognised as a human right or not. Data about what we do, why we do it, with whom and where is increasingly captured, made useful, correlated, compared and shared by technology. Data is therefore an increasingly powerful means to capture and tell others, including global businesses and governments, all of those things. Pervasive collection of data about what we do online and offline (in the physical world) increases the capabilities of businesses and governments to unreasonably intrude into our private lives, often without our knowledge. By enthusiastically adopting smartphones, ‘personal wellness’ devices and other Internet of Things (IoT) devices, we allow powerful technologies into our intimate lives. These technologies open the flow of data about us and bring us daily benefits, including personalisation, carrying around less stuff and ‘ask me once’, as well as the much less visible risk of being singled out to our detriment.

Data privacy regulation is the principal tool that we use to limit less visible intrusions into our right of seclusion and being singled out to our detriment. Sensibly enough, 20th century data privacy regulation focusses upon creating transparency, so that we know (if we choose to know) what is going on with data about us. The now outdated regulatory theory is that we can control this flow of data by turning it off at its source, being the relevant human ‘data subject’: each of us exercising our personal choice as to whether to deal with an entity that does not agree with our individual view of our reasonable expectation of seclusion.

20th century data privacy statutes are non-judgemental as to our diverse individual views of our expectations of seclusion. The statutes generally do not care whether we are unduly sensitive, or just plain silly, about our expectation of reasonable seclusion. By requiring regulated entities to publish clear and reasonably prominent notice as to the flows of personal data about us, including notice as to flows in which we are not actively the direct source, we are given a choice to deal, or not to deal, with each entity seeking our data.

For example, if we know about flows of our credit card data between banks and direct marketers who might like to know what we buy and where, we have a choice as to whether to deal, or not to deal, with the bank, the credit card issuer and the direct marketer – or to call them out for public opprobrium for not being respectful of our expectations of reasonable seclusion (labelled as a right or interest in and to data privacy). And for limited categories of personal data that most people would regard as more sensitive, the data privacy statute requires that we each be provided with an ability to consent, or to withhold consent. But regardless of whether we are required to be presented with an ‘I agree’ option or not, the regulatory theory is that we should always know, through provision of prior notice and therefore after being fully informed (if we choose to read the notice and so to be informed), when and how personal information about us is captured, used and shared by any regulated entity, whether we deal directly with them or not.

Of course, 20th century data privacy regulation was and is not only about notice and choice: data privacy statutes also have rules that confine what a regulated entity may elect to do in collecting, handling and sharing personal data. The Australian Privacy Principles in the Privacy Act 1988 (Cth) are reasonably typical of such rules in various statutes around the world. Accordingly, the ways in which regulated entities may give effect to notice and choice are constrained by rules. However, those rules tend to be quite permissive, so regulated entities can build in substantial latitude for themselves through giving notice to affected individuals as to what those entities may elect to do with personal data.

It may seem odd that now, in mid-2020, I am discussing 20th century data privacy regulation. That regulation started to seriously fray around the year 2000, upon widespread adoption of the internet. It then came under increasing strain, but it only ceased to be reasonably fit for purpose upon broad take-up of smartphones, social media and other digital platform services.

Notwithstanding these developments, many countries continue to adopt variants of 20th century data privacy statutes, with the inherent defects of the notice and choice framework papered over or circumvented in various ways. The incorrectly characterised ‘gold standard’ General Data Protection Regulation of the European Union adds sensible innovations such as a ‘legitimate interest’ ground for processing (which reduces the ‘of course, you didn’t need to spell that out, now, where’s the beef?’ element of lengthy privacy disclosures), but it is still a 20th century data privacy statute. The California Consumer Privacy Act seeks to significantly limit the ability of organisations to sell personal data, to make it much easier for individuals to opt out of uses of personal data about them, and to protect data about households as well as persons. However, it too is built upon the shaky foundation of the notice and choice framework. Indeed, almost all recent data privacy statutes double down on the level of information that must be provided to individuals to facilitate individual choice, as though this will fix (and not exacerbate) the problem, notwithstanding the body of evidence accumulating since about the year 2000 that notice and choice is failing.

Notwithstanding acknowledgements of this failure, some current regulatory proposals, such as the recommendations of the Australian Competition and Consumer Commission in its 2019 Digital Platforms Inquiry Report, propose to bolt on unfairness as a regulatory concept. Such proposals apply consumer protection reasoning as to whether disclosures through notices are ‘fair’ and not misleading. But a consumer protection lens is too narrow. Why should I need to be a consumer of a product or service to be protected? Isn’t it enough if there is data about me that is used to effect an outcome on me that is differentiated from outcomes for others and particularly harmful to me? What is unreasonable about my expectation not to be so harmed through use of data about me to effect an individuated harm upon me (regardless of whether I am identifiable or not)? Where is the legal requirement for serious consideration of the scope and other fundamental attributes of a human right of seclusion, or of the nature and degree of harms caused by digital data and algorithm enabled intrusions upon that right? Adopting a consumer protection lens, as the ACCC advocates, will lead to more pain for both regulated entities and affected individuals. Moreover, over-emphasis upon the extent and quality of notice and disclosure that must be provided to individuals to facilitate their individual choice exacerbates the problems inherent in the notice and choice framework, rather than aiding in the design of solutions to alleviate them.

The notice and choice framework is failing under the increasing combined weight of:

  • Perceptions by users that they must use some services (particularly search and social network platform services, but potentially also including many other services, such as job listing platforms), and therefore that they do not really have a choice as to whether they use a service or not.
  • Service providers not offering a range of data choices to individuals, and/or gaming how and where choices are presented, and/or not giving effect to data minimisation. Only some service providers today offer each individual a range of privacy settings that are initially set to be highly protective, or specific, plain English explanations of the choices the individual has or of why the individual might wish to ‘shift the slider’ to less protective privacy settings. Data minimisation is today not mandated.
  • Smartphones are, and many other IoT devices are becoming, an extension of our bodies and an open window into our everyday lives. Much of the valuable data now being used is the ‘data exhaust’ of individuals, created in the course of their use of other products or services. We are too busy navigating around the obstacles ahead to look back and analyse our data exhaust. Individuals cannot be expected to fully comprehend, let alone evaluate, collections, uses and secondary disclosures and further uses of data exhaust.
  • The volume of information that users must read and evaluate in order to exercise all the ‘choices’ that they are ‘requested’ to make is simply unmanageable for even the most diligent and time rich humans.
  • The complexity of ways in which information is collected and shared overwhelms many individuals, even if they are willing to engage in trying to understand what is going on.
  • Many data custodians failing to take appropriate steps to protect personal information from misuse or overuse by other participants in multi-party data ecosystems that can use the data about individuals in ways that they should not.
  • The fact that it is now possible to treat each individual differently without knowing who that individual is, by using pseudonymised individual-level data, algorithmic inference and attribute matching (see the illustrative sketch after this list).
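
To illustrate that last point, here is a minimal, purely illustrative Python sketch (the device identifiers, attribute names and pricing rule are all hypothetical) of how an entity can single a person out and treat them differently using only a pseudonym and matched attributes, without ever handling a name or other direct identifier:

  # Purely illustrative: device IDs, attribute names and the pricing rule are hypothetical.
  import hashlib

  def pseudonymise(device_id: str) -> str:
      # Replace a raw device identifier with a stable pseudonym.
      return hashlib.sha256(device_id.encode("utf-8")).hexdigest()[:16]

  # 'Data exhaust' keyed only by pseudonym (no name, email or street address).
  observed_attributes = {
      pseudonymise("device-1234"): {"late_night_browsing": True, "postcode_band": "lower_income"},
      pseudonymise("device-5678"): {"late_night_browsing": False, "postcode_band": "higher_income"},
  }

  def quoted_price(pseudonym: str, base_price: float) -> float:
      # Attribute matching: quote a higher price to profiles inferred to be less price-sensitive.
      attrs = observed_attributes.get(pseudonym, {})
      surcharge = 0.15 if attrs.get("postcode_band") == "higher_income" else 0.0
      return round(base_price * (1 + surcharge), 2)

  for device in ("device-1234", "device-5678"):
      print(quoted_price(pseudonymise(device), base_price=100.0))  # prints 100.0, then 115.0

No step in this sketch identifies the human behind either device, yet the outcome for each human is individuated: exactly the kind of act or practice that an identity-centred notice and choice statute struggles to reach.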

Data privacy and consumer protection regulators are responding to shortcomings of the privacy self-management framework by advocating new measures of ‘organisational responsibility’ or ‘organisational accountability’.

This focus goes beyond data privacy and information security by design and default, to include new expectations that data custodians themselves address (assess, avoid and mitigate) residual risks of unfair or unreasonable impacts upon individuals.

These impacts upon individuals, now often the outcomes of automated inference engines (data analytics enabled by AI/ML), are enabled by good, or bad, use by humans of data inputs.

Adverse impacts in particular cases include:

  • uses and sharing of personal data about individuals in ways that those individuals cannot reasonably expect, or which are an affront to reasonable expectations of human dignity, and
  • such significant harm to an individual that particular measures to assess, measure and mitigate that harm should be mandatory, or, if this is not reasonably practicable, the relevant act or practice (use of outputs to effect a particular outcome) should be per se prohibited (i.e. a ‘no-go zone’).

An example of such a ‘no-go zone’ is proposed clause 16(5) of the Personal Data Protection Bill 2019 of India, which is currently before the Lok Sabha:

“The guardian data fiduciary shall be barred from profiling, tracking or behaviourally monitoring of, or targeted advertising directed at, children and undertaking any other processing of personal data that can cause significant harm to the child.”

In summary, since at least the year 2000 the notice and choice foundation has become progressively less fit for purpose for 21st century data protection laws.

The now relevant question is how to address this problem.  

What should be the design specification for 21st century data privacy laws?

Features might include:

  • A data controlling entity must undertake an impact assessment which looks at unfairness and unreasonableness of impacts upon individuals, and harms to individuals and society, beyond compliance with privacy principles.
  • A data controlling entity must take active steps to monitor and control how other entities in a data ecosystem use data.
  • No-go zones - a data controlling entity must not do certain specified things at all, and other specified things unless there is a high level of transparency through enhanced notice and then specific, express, unambiguous consent of the individual to that particular act or practice.
  • A close focus upon risk assessment, mitigation and management of residual risk, including requirements for good and reliable project risk management, review and monitoring of processes and practices, and mechanisms for determining the fitness for purpose and use of data outputs that are used to effect individuated outcomes upon individuals.

Consideration and mitigation of a broader range of risks and harms to individuals requires responsible entities to move beyond privacy impact assessments (PIAs) as they are understood today. This move needs to occur now, at a time when the processes and practices for conducting today’s much narrower PIAs are themselves still becoming broadly understood.

Broader principles of fairness, equity, accountability and transparency in uses and applications of data about individuals – and not just personal data about these individuals – must become an essential feature of new, 21st century, data privacy laws.

These principles will not be consistently and reliably translated into practice without a really close focus upon good practice: what good practice looks like, how it is given effect and made transparent, what methodologies and tools will help entities understand and address the broad range of risks that need to be addressed, and how to balance incentives for good behaviour against sanctions for unacceptable behaviour.
