The use of facial recognition technology has been in the spotlight recently, following news that some Australian retailers have been using it to capture the biometric data of customers in their stores. The news has re-sparked debate about whether Australia’s existing laws are adequate to regulate facial recognition technologies and AI systems.
Privacy Act and the collection of biometric data
One of the few existing regulations in Australia governing the use of AI systems is the Privacy Act 1988 (Cth) (Privacy Act), which applies where an AI system uses personal information. Relevantly for facial recognition systems, the Privacy Act requires regulated entities to meet a higher standard of conduct when collecting biometric data that is to be used for the purpose of automated biometric verification or biometric identification, as such data is considered ‘sensitive information’ under the Act. Entities regulated by the Privacy Act cannot collect such biometric information from individuals unless:
- the individual has consented to the collection of the information (consent can be implied, but the Office of the Australian Information Commissioner (OAIC) expects that consent be informed, voluntary, current and specific and be given by a person who has capacity); and
- the information is reasonably necessary for one or more of the entity’s functions or activities (or another specific exception applies).
Does Australia need law reform?
To date, Australia has adopted a governance or principles-based approach to regulating AI. That is, while there are specific voluntary AI frameworks, such as the AI Ethics Principles, we do not have legislation specifically drafted to regulate the use of AI technology. Instead, we have taken the approach that existing legal regimes that are technology neutral and principles based, for example those governing discrimination or the use of personal information and surveillance (among others), will impact and govern the use of AI. These existing regimes apply the principle that AI is merely a new process or technology that must comply with that existing framework.
However, many have criticised the existing legal regimes themselves as inadequate. The contrary view is that technology-neutral, principles-based legislation is the right approach, that regulators already have many of the tools in their kit bag, and that it is simply a matter of regulators using those tools more proactively. An example of this is the OAIC taking regulatory action in respect of the collection of biometric information by facial recognition systems on the basis of assertions of deficient consent.
The timing of this news coincides with fines handed down in other jurisdictions in relation to facial recognition practices. This month Clearview AI was fined 20 million euros by the Greek data protection authority for applying biometric monitoring techniques via facial recognition technology to individuals in breach of the GDPR, while in May the UK data protection authority imposed a 7.5 million pound fine (which Clearview AI is appealing), and in March the Italian data protection authority imposed a 20 million euro fine. In Australia, while the OAIC found in November 2021 that Clearview AI’s practices breached the Privacy Act and ordered Clearview AI to cease collecting facial images and biometric templates of individuals in Australia, no regulatory fine has yet been imposed. In any event, the maximum fine currently available under the Privacy Act, AUD$2.22 million, is substantially lower than those allowable in other jurisdictions for similar practices.
Notwithstanding that, calls to strengthen the approach to AI regulation in Australia continue. For example, proposed reforms to the Privacy Act include requirements that consent to the collection and use of personal information be unambiguously indicated (that is, not implied),1 in line with requirements in other jurisdictions such as the GDPR, and that the use of biometric or genetic data, including through the use of facial recognition software, be a restricted practice.2 There have also been proposals to substantially increase the penalties for breaches of the Privacy Act. Further, organisations such as the Australian Human Rights Commission have recommended the introduction of AI-specific legislation and a moratorium on the use of AI facial recognition technology until such legislation is in place.3 We have written about Australia’s approach to the regulation of AI, with a particular focus on the application of AI systems to the legal industry, in the International Bar Association’s publication on this topic.
Meanwhile, other jurisdictions around the world are seeking to introduce specific laws to regulate AI systems, with Canada becoming the latest jurisdiction to propose legislation regulating the design, development and use of AI systems. Canada’s proposed new laws follow the EU’s introduction of its proposed Draft AI Act in 2021 (EU Draft AI Act). While the EU Draft AI Act proposes to prohibit the use of ‘real-time’ biometric identification systems other than in certain law enforcement use cases, no similar outright prohibition exists under Canada’s proposed new laws. Rather, like the EU Draft AI Act, the Canadian laws propose a risk-based approach to regulating the design, development and use of AI. The new Canadian laws are significant and broad-reaching, particularly when compared to Australia’s own soft-law, principles-based approach, and, like the EU Draft AI Act, will apply to Australian designers, developers and suppliers where their AI systems are used in those jurisdictions.
The extra-territorial approach of these proposed acts highlights the importance of considering a global approach to the regulation of AI, while also reiterating that Australia cannot afford to be left behind if we want to support and encourage the safe development and adoption of AI in Australia.
A risk-based approach to AI regulation is not only sensible but necessary if we are to strike the right balance between fostering innovation and protecting against harmful consequences. However, regulation is only part of the puzzle: many of the harms associated with AI are inadvertent and unintentional, and new regulations alone will not cure this.
So it is equally important that the government, industry and regulators alike work together to develop practical frameworks and initiatives to enable the economy to adopt and use AI in a safe and responsible way. There is a very large education and capability component here that all of us involved in the AI sector need to lean into so we can collectively harness the opportunities provided by AI.
Authors: Simon Burns, Jen Bradley, Sophie Bogard, and Amelia Harvey
1. Attorney General’s Department, Privacy Act Review: Discussion Paper (October 2021), proposal 9.1.