13/02/2024

The recent incident involving fake explicit images of singer Taylor Swift has brought the challenge of ‘deepfakes’ once again into the mainstream.

The incident, along with a second, lesser-known “deepfake” debacle following Swift’s recent win at the Grammy Awards, provides apt (albeit unfortunate) grounds to explore the evolving meaning of deepfakes and the regulatory responses to their harmful distribution online.

What happened?

In late January, a collection of sexually explicit, AI-generated images of Swift began going viral across several social media platforms. Independent reporting claims that the images originated within an online forum where members regularly produce synthetic non-consensual sexual imagery and challenge each other to circumvent safety controls placed on popular AI-based creative tools.

The images gained media attention after they were shared on X (formerly Twitter), prompting the company to restrict not only searches for the images themselves, but any searches involving Swift. While more difficult to locate today, like almost all viral material, the images are unlikely to ever be entirely scrubbed from the internet.

Just two weeks later, Swift took the stage to accept several accolades at the Grammy Awards. Shortly after, footage of her acceptance speech was crudely altered and uploaded to X. In the clip, Swift purportedly endorses ‘White History Month’. Despite the spread of this post being relatively limited in comparison to the earlier sexual imagery, it was still picked up in the mainstream media, with one major Australian masthead commenting that Swift’s Grammys speech had been “edited by trolls in new deepfake”.

Deepfakes and “shallow fakes”

Like many other emerging online harms, there is no uniform definition of what constitutes a deepfake. At the technical end of the spectrum, some contend that deepfakes only refer to synthetic material created through the use of machine learning techniques that enable the production of highly realistic fakes.

However, since the term’s popularisation in 2018, it has been used to label a much wider category of material. This is partly owing to continued advances in technology and the changing settings in which synthetic media appears, but it is also due to ‘deepfake’ becoming a convenient term to describe any duplicitous media, however rudimentary.

Some experts in the field argue that deepfakes, by definition, are the product of some form of AI technique. This is not just because of the increased realism that these techniques provide, but also their speed and widespread accessibility. This ties back to the idea that industry-grade photo and video editing tools, because of their cost and complexity, are less prone to misuse than systems with low-to-no barriers to entry.

Applying the above to the two recent examples involving Swift, the explicit images can be more accurately described as an example of a deepfake than the altered Grammy speech can. The synthetic explicit images of Swift are understood to have been made using mainstream, generative AI creative tools. Despite a general quality of airbrushed imitation, the images bore a high degree of likeness to Swift.

Conversely, the Grammy speech “deepfake” consists of authentic footage of Swift with the audio replaced with a virtual woman’s voice. The audio is not only poorly synched with the footage, but sounds almost nothing like Swift. It better reflects an example of a mainstream ‘shallow fake’, similar to those frequently used to mock politicians using run-of-the-mill video editing software.

Definitional arguments about the use of ‘deepfake’ may appear trivial; however, recent history has shown how overuse or misuse of certain terminology can ultimately rob the language of its meaning and impact. Consider the term ‘fake news’, which was once used in earnest to describe fabricated and misleading news stories. In 2024, you’d rarely hear an expert in the field use this term; you’re more likely to hear it as a defensive retort to legitimate accusations of wrongdoing.

For present circumstances, the upshot is that these latest incidents using Swift’s likeness have more to do with personal dignity (and misogyny) than they do with legitimate deception. Indeed, when it comes to the practical risks posed by deepfakes on society at large, the risk of increased uncertainty and eroded trust has always been just as acute as the risk that people actually believe that what they are looking at is real.

Regulating synthetic media harms

We have previously detailed the various ways in which Australian laws can respond to deepfakes and other synthetic harms, particularly those of a non-consensual sexual nature. This includes the nationwide criminalisation of non-consensual distribution of intimate images, the ability for the eSafety Commissioner to issue take-down notices over such material and pursue individual offenders, and longstanding avenues for redress through defamation, copyright and consumer-protection laws.

Australian criminal laws and the Online Safety Act 2021 (Cth) (OSA) provide good examples of how tech-neutral drafting can enable the regulation of novel harms, with both applying to deepfakes despite not expressly referencing them (or AI at all for that matter). While neither can be said to have solved the explicit deepfake challenge, awareness and enforcement are increasing year on year.

Notably, the term ‘deepfake’ was reflected in the Australian common law for the very first time last year in eSafety Commissioner v Rotondo [2023]. After removal and remedial notices issued under the OSA were ignored, eSafety commenced unprecedented proceedings against the individual responsible for a website featuring synthetic non-consensual sexual imagery that purported to depict various women, including high-profile Australians.

The defendant was charged with contempt of court after failing to comply with injunctive orders requiring that the relevant synthetic imagery be removed and that similar material not be published online in the future. Ultimately, the offending material was taken down and the defendant was issued fines totalling $25,000, with further civil proceedings pending.

eSafety’s approach

When we spoke with the eSafety Commissioner, Julie Inman Grant, about the broader fight against non-consensual sexual imagery online, she made the point that how eSafety responds is highly informed by the individual who has been targeted. The agency’s approach is also cognisant of the frequent overlap between such offending and patterns of family violence.

Often, the removal of offending material is the prime objective for affected individuals. Sometimes this is achieved through voluntary cooperation with the individual perpetrator. eSafety possesses (and has used) formal powers too, such as removal notices, remedial directions or formal warnings of civil proceedings.

This graduated approach to enforcement also exists on the online platforms where this material is shared. eSafety’s informal engagement with online platforms has a 90% success rate for getting non-consensual sexual material taken down, with resistance mainly arising from smaller websites purpose-built for hosting pornography.

Are further reforms required?

These latest explicit deepfakes of Swift have prompted commentators around the world to query the need for better regulation. In comparison to most other countries, Australia is relatively advanced when it comes to having suitable laws in place and well-resourced public authorities ready to enforce them. However, this is mostly focused on the harms associated with sexualised deepfakes and other synthetic non-consensual intimate imagery.

Regulating non-sexual deepfakes is much more context-dependent and relies more on individuals’ appetite to commence proceedings (such as in defamation or passing off). Yet there appears to be growing momentum for restricting deepfakes in relation to these non-intimate types of harm, particularly in the context of political misinformation.

In the past six months, several high-profile political deepfake incidents have occurred in Mexico. Last October, the mayor of Mexico City was forced to defend himself against the contents of an audio recording that he claims was artificially generated, but whose authenticity others have been unable to reach a consensus on. More recently, Claudia Sheinbaum, a presidential candidate in the upcoming Mexican general elections, was realistically depicted in a financial scam deepfake.

With similar events occurring the world over, proposals for regulation are also becoming more targeted toward deepfakes as a distinct harm. For instance, the ‘DEEPFAKES Accountability Act’, introduced in the US Congress last year, would require creators of deepfakes to digitally watermark such content and would make it a crime to fail to identify deepfakes in certain malicious settings. This idea of labelling is also seen in the context of ‘content provenance’, which involves tracing and recording the origins of digital content and validating its authenticity. Cross-industry efforts already underway from the likes of Adobe, Microsoft and the BBC aim to develop a standard for content attribution.
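To make the idea of content provenance a little more concrete, the sketch below (in Python, using only the standard library) illustrates the basic mechanics: a ‘manifest’ recording who made a piece of media and with what tool is cryptographically bound to a hash of the file, so any later alteration can be detected. This is a simplified illustration only, and the key, creator and tool names are hypothetical; real provenance standards of the kind being developed by the coalition mentioned above use public-key certificates and embed the credentials within the media file itself.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for illustration only; real provenance
# schemes rely on public-key certificates rather than a shared secret.
SIGNING_KEY = b"example-signing-key"

def create_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Record where a piece of media came from and bind it to the file's hash."""
    manifest = {
        "creator": creator,
        "tool": tool,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the media has not changed and the manifest has not been tampered with."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

media = b"...image bytes..."
manifest = create_manifest(media, creator="Example Studio", tool="Example AI Generator")
print(verify_manifest(media, manifest))            # True: untouched original
print(verify_manifest(media + b"edit", manifest))  # False: content altered after signing
```

The point of the example is simply that provenance labelling does not try to stop synthetic content being made; it records how content was made and makes later tampering detectable, which is the same logic underlying watermarking obligations of the kind proposed in the US bill.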

Meanwhile, South Korea has recently banned political campaign material that includes deepfakes for the 90 days before an election, demonstrating the potential to address the issue on a more targeted basis.

Outright bans may seem a blunt response, however, particularly when politicians and businesses alike are using the same technologies for seemingly legitimate purposes. The same tools that can produce harmful fakes can just as readily create benign, useful content with the consent of the individuals being depicted.

For instance, the mayor of New York City recently used AI to generate campaign material that appears to be spoken in his voice, but in the various languages used amongst his constituents. While some may find the practice unethical or unnerving, others will reasonably argue that the public benefit to such uses is apparent.

Australian reforms and deepfake harms

Bringing things back to Australia, several law reform processes presently underway have an opportunity to impact how deepfake harms are dealt with locally. For example:

  • Statutory reforms coming out of the Privacy Act Review have the potential to expand the legal options available to individuals who have experienced serious invasions of privacy, or to improve their control over personal information in ways that could prevent misuse in the first place.
  • The Federal Government’s consultation on local AI regulation (albeit less advanced) may lead to targeted reforms, such as those clarifying the interaction between deepfakes and misleading and deceptive conduct, or codes of practice that impact the design and limits of AI systems.
  • An upcoming statutory review of the OSA may consider whether AI needs to be more expressly contemplated in Australia’s online safety regime. We expect to see more ex ante powers discussed as part of this, building on eSafety’s existing advocacy for ‘Safety by Design’. It is worth noting, however, that to date eSafety has managed to integrate AI regulation into its enforcement of the OSA, irrespective of express provisions.
  • The ongoing development of targeted misinformation and disinformation laws is also likely to overlap with deepfakes, providing clearer pathways for responding to deepfakes beyond those of a purely non-consensual, sexual nature.

Given the ongoing definitional uncertainty attached to deepfakes, and the pace of technological change, tech-neutral regulation that addresses the harmful consequences of synthetic material (rather than the systems used to create it) may continue to provide the most sustainable path forward for regulation. Conversely, some may argue that focusing solely on harms amounts to a “whack-a-mole” approach to the problems of deepfakes, and that some form of regulation of the underlying technology might be warranted.

When it comes to generative AI, we have assisted many of our clients to understand their responsibilities, including those relating to the deployment of AI and the management of user-generated content. We have also helped them develop internal policies and approaches that seek to ensure they meet their legal and ethical requirements while remaining able to experiment and take advantage of the benefits of this new technology.

Authors: Bryce Craig and Andrew Hii

""