04/10/2019

CGI used to be reserved for the likes of Hollywood. It required intensive computing power, expensive software, highly trained experts and deep pockets. Yet thanks to advances in AI technology, everyone can now be a VFX artist, for better or for worse.

This has given rise to ‘deepfakes’: computer-generated video (or audio) in which people appear to be doing things that they never did. The technology, which uses artificial neural networks to detect and adapt patterns in facial data, can generate content from as little as one training image. Think your favourite ‘face-swap’ app, on steroids.

The scary part? The result is often so convincing that the human eye cannot tell it from the real thing, and with the software becoming increasingly accessible, anyone with a laptop and an internet connection can do it, in near real time.
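For the technically curious, the toy Python sketch below illustrates the basic ‘face-swap’ idea in its crudest form, using the open-source OpenCV library and two hypothetical input files, source.jpg and target.jpg. Real deepfake tools train neural networks to synthesise the replacement face; this classical crop, resize and blend pipeline is only a simplified stand-in for what those networks learn to do end to end.

    import cv2
    import numpy as np

    # Load OpenCV's bundled Haar cascade for frontal face detection.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def largest_face(image):
        """Return the bounding box (x, y, w, h) of the largest detected face."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            raise ValueError("no face detected")
        return max(faces, key=lambda f: f[2] * f[3])

    # Hypothetical inputs: the face in source.jpg is pasted over
    # the face in target.jpg.
    source = cv2.imread("source.jpg")
    target = cv2.imread("target.jpg")

    sx, sy, sw, sh = largest_face(source)
    tx, ty, tw, th = largest_face(target)

    # Crop the source face and resize it to the target face's footprint.
    patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

    # Poisson ("seamless") blending hides the seam between patch and scene,
    # a crude, hand-built version of what a generative model learns to do.
    mask = np.full(patch.shape, 255, dtype=patch.dtype)
    centre = (tx + tw // 2, ty + th // 2)
    swapped = cv2.seamlessClone(patch, target, mask, centre, cv2.NORMAL_CLONE)

    cv2.imwrite("swapped.jpg", swapped)

The difference between this cut-and-paste approach and a true deepfake is that a trained network generates new frames of the face, matching expression, lighting and movement, which is what makes the results so hard to spot.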

So what?

In the age of ‘fake news’, the implications of deepfakes for democratic norms and the public interest are not difficult to conceive. Presented with a world in which even seeing is no longer believing, many rightly fear a further erosion of trust in both the media and our elected representatives.

Last year comedian Jordan Peele appeared in a deepfake of President Barack Obama in what starts as a convincing presidential address and ends as a PSA on the imperative of staying vigilant in the era of information warfare.

The deepfake phenomenon also adds an interesting dimension to the debate around potential ‘Truth in Political Advertising’ legislation in Australia, a recurring theme in public debate in recent years, where political advertising often favours negative campaigning over positive.

So what can our current laws do to protect us from the dark side of deepfakes?

Defamation

Whilst deepfakes present an entirely new way for reputations to be damaged, there is no reason why our current defamation laws would not respond to a deepfake that damages a person’s reputation. The test for defamation broadly consists of three elements:

  1. Publication – the material must be published to at least one third party;
  2. Identification – the material must identify (directly or indirectly) the allegedly defamed person; and
  3. Defamation – the material must be defamatory to the ordinary, reasonable person.

Interestingly, because a deepfake by its nature depicts something that never actually happened, the well-known “truth” defence to defamation will almost always be unavailable.

Additionally, some material is inherently defamatory in nature, and showing a person in a particular light can expose a publisher or creator to liability. For example, in the pre-internet world, the publication of a photo of a well-known rugby league player, in the nude, was held to be defamatory (see Ettingshausen v Australian Consolidated Press Ltd (1991) 23 NSWLR 443). In this kind of context, the more “real” a deepfake appears to be, the greater the likelihood that the material will be defamatory.

Copyright

On the flipside of defamation are the protections afforded to the creative works being altered (as opposed to the individual whose likeness is being inserted). Where someone alters footage that is an ‘original work’ under the Copyright Act 1968 (Cth), then to the extent any “substantial” part of the work is reproduced, this would constitute infringement.

A prime example is the Chinese deepfake app ‘Zao’, which exploded in popularity earlier this year. The app, intended as a novelty, grafts your face directly onto an actor’s in a famous movie scene: Leonardo DiCaprio’s in Titanic, for example.

The notable difference, however, is who can actually seek recourse. A defamation action may be brought by the individual who is the subject of the deepfake material (assuming the material is defamatory), whereas any copyright infringement claim would need to be brought by the owner of the copyright in the creative work.

Fraud

Someone impersonating someone they’re not? Deepfakes may also be caught by the fraud provisions in the Crimes Act 1900 (NSW), which apply where a person “by any deception, dishonestly obtains property belonging to another, or obtains any financial advantage or causes any financial disadvantage.” Admittedly, the test is fairly narrow: it only applies to deception resulting in financial loss, and would not extend to deepfakes created simply to shame and embarrass their victims.

That being said, we are already seeing applications of the technology that would be caught by existing laws. The Wall Street Journal reported earlier this year on AI-based software that was used to impersonate the voice of a company’s CEO and coax a fraudulent transfer of €220,000.

Privacy

Unsurprisingly, the overwhelming popularity and accessibility of a service such as Zao has raised a number of privacy concerns, given the process inherently involves uploading a person’s image and mapping it for biometric data.

In Australia, this information (where “reasonably identifiable”) would be considered ‘sensitive information’ under the Privacy Act 1988 (Cth) and would subject the party collecting it to a number of stringent conditions, including obtaining the individual’s consent.

This consent requirement may theoretically protect an individual from the indiscriminate use of their image, but in a global context where anyone can upload an image of anyone to a company based offshore and not subject to the Privacy Act 1988 (Cth), it likely does little in practice.

Misleading and Deceptive Conduct

And last but certainly not least is the Australian Consumer Law (ACL), set out in Schedule 2 of the Competition and Consumer Act 2010 (Cth). Whilst in Australia we do not have a ‘right to publicity’ or other ‘image right’ that allows a person to control the use of their image, the ACL does specifically prohibit a person, in trade or commerce, from making a:

  • “misleading representation that purports to be a testimonial by any person relating to goods or services” (s29(1)(e)); or
  • “false or misleading representation that goods or services have sponsorship, approval…” (s29(1)(g)).

These provisions may protect against the non-consensual use of a person’s image in a commercial context. Depending on the nature of the deepfake, an action for misleading and deceptive conduct may also be available.

Is it enough?

Unsurprisingly, the technology used to create deepfakes is far outpacing the development of technology to detect it.

The patchwork of existing laws does go some way to defending against the potential abuse of deepfake technology. As with many issues in the online space, one of the biggest hurdles that potential plaintiffs will face is identifying wrongdoers and being able to take enforceable action against them.

Written by Alexander Ryan and Andrew Hii
