18/08/2020

Last week we summarised a paper from the EU Parliament arguing that AI and the EU’s privacy law (the GDPR) could be reconciled – albeit with a big dose of flexibility. This week, Peter Leonard argues that despite 20 years of widespread adoption of the internet, our existing privacy laws are congenitally incapable of keeping pace with rapidly evolving and increasingly sophisticated data collection, use and processing.

A ‘you know it when you see it’ approach is not good enough

Peter argues that a big part of the problem is that there is no clearly defined sense of what constitutes a privacy harm to individuals. Privacy laws also get the basic equation wrong: they are intended to protect human dignity, but instead focus on data, not humans:

“Data is completely indifferent as to whether it relates to humans or machines. Data has no concept of human frailties and needs. So humans need to make decisions about how and when data about humans is collected and used. Humans within entities that collect and use data about other humans should have a concept of frailties and reasonable expectations of affected humans.”

Peter argues that the cornerstone of privacy protection should be a right of reasonable seclusion, and that intrusions upon this right need to be justified as proportionate and necessary. For example, we accept facial recognition when it helps detect street crime, but most Australians would consider widespread automated facial recognition an unreasonable intrusion and unjustified surveillance.

Where are we today?

Peter argues that our data privacy statutes are based on an outdated 20th century ‘privacy self-management model’: data controllers provide notice to individuals about the use and processing of their information, and individuals choose to opt in or opt out.

This model no longer works in our data-driven technology world because:

  • Sophisticated algorithms are able to derive and create personalised profiles using pseudonymised data, inference and data training without individuals necessarily being aware that such processing is occurring;
  • Privacy regulation is predicated on the ‘illusion of choice’ - some consumers believe they must use search and social network platforms, so do not have a real ‘choice’ to opt in or opt out;
  • Many service providers offer a limited range of privacy settings;
  • Smartphones and other devices are becoming further ingrained in our daily lives and people rarely set aside time to analyse their data exhaust and how their data is used, collected and processed; and
  • There is too much information for consumers to digest so they skip over privacy disclosures instead of reading and fully understanding them to make an informed choice. Many data statutes propose offering more information to individuals to help with their choice but this likely exacerbates the information overload problem.

What are other countries doing?

More recent privacy legislation around the world has shifted from placing responsibility on users to make choices to placing responsibility and accountability on organisations. Companies can no longer satisfy regulators by simply adopting information security by design and default; they need to go further and assess and mitigate unreasonable risks and intrusions upon consumers. For example, the GDPR adds a “legitimate interests” ground for data processing and the California Consumer Privacy Act limits the ability of companies to sell personal data.

Where do we need to go?

The ACCC Digital Platforms Inquiry Report proposed introducing unfairness as a standard: i.e. whether disclosures through notices are fair and not misleading. But this views privacy through a narrow consumer protection lens, and just swaps one ill-defined standard for another.

Peter says a more thoroughgoing rethink is needed to modernise our privacy laws for the 21st century, including:

  • Creating “no-go zones” (per se prohibitions), e.g. India’s Personal Data Protection Bill 2019 prohibits a guardian data fiduciary from tracking, profiling or behaviourally monitoring children, targeting advertising at children, or processing personal data in ways that cause significant harm to children;
  • Requiring data controllers to undertake an impact assessment analysing unfair and unreasonable impacts on individuals;
  • Requiring organisations to focus on risk assessment, mitigation and management to identify whether the data collected and processed is fit for purpose and whether it may cause harm to individuals.

Even so, there’s no neat definition of privacy which will prove stable as technology continues to rapidly accelerate. Ending on an optimistic note, Peter points out that democracy is not a universally defined concept but we broadly agree on what democracy is not. Similarly, while we may not all agree on what constitutes privacy and privacy harms, we can surely find broad consensus on what data controllers should not do.


Read more: Data Privacy in a Data and Algorithm Enabled World

""