26/06/2024

In early June, the Attorney-General tabled a bill in Parliament to strengthen laws targeting the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI, such as deepfakes. 

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 (the Bill) is part of broader policy reforms aimed at strengthening online safety and is pitched as forming part of the government’s response to gender-based violence. Regarding the mandate underlying the Bill, the Attorney-General stated:

“the government’s reforms will make clear that those who share sexually explicit material without consent, using technology like artificial intelligence, will be subject to serious criminal penalties”. 

The potential harms associated with deepfakes are becoming more prevalent as deepfakes become easier to produce and disseminate, and as their quality advances. While the Bill fills an existing gap in Commonwealth laws regarding image-based abuse, questions remain as to its application in practice, as well as how best to respond to harmful deepfakes that do not fit within its scope.

Background: Non-consensual distribution of images

As we have outlined in previous articles, the non-consensual distribution of intimate images is already criminalised in each State and Territory. Many such offences are drafted broadly enough to capture synthetic imagery; however, this remains largely untested. At a federal level, the Criminal Code Act 1995 (Cth) (the Criminal Code) has long outlawed the non-consensual sharing of private sexual material via the aggravated offence of using a carriage service to menace, harass or cause offence. 

However, the ability of the Criminal Code to effectively respond to the nature and volume of image-based abuse has, to date, been unclear. This is partly because the relevant offence is an aggravated offence only, and therefore requires the commission of a primary offence under s 474.17 (itself a provision laden with broad language). In addition, there are real questions about whether the Criminal Code currently extends to deepfakes and other synthetic images.

The Bill: Criminal Code Amendment (Deepfake Sexual Material) Bill 2024

The Bill aims to replace the existing aggravated offence with an offence that:

  1. is standalone in nature;
  2. has its own aggravated offences; and 
  3. expressly contemplates material that has been created, or altered in any way, using technology.

The new offence targets the sharing of sexually explicit material online without the consent of the person depicted. It captures material depicting (or appearing to depict) an adult engaging in a sexual pose or activity, or the private areas of an adult. This would cover still images and videos, but not audio-only media. The law is specifically directed at material depicting adults, as material depicting children is already covered by existing child abuse material laws. 

The offence covers the transmission of this material where the person sending it knows the other person does not consent to the transmission, or is reckless as to whether the other person consents. Consistent with the provisions of the Criminal Code being repealed by the Bill, the new primary offence attracts a maximum penalty of six years’ imprisonment. 

The new offence will also be subject to two aggravated offences. The first is similar to an existing aggravated offence under the Criminal Code, and applies to offenders who have received three or more prior civil penalties for image-based abuse violations of the Online Safety Act 2021 (Cth) (OSA). The second aggravated offence is enlivened if a person who committed the primary offence also created or altered the offending material. 

It should be noted that the creation or alteration of non-consensual intimate material is not, in and of itself, criminalised (although other laws, including at the State and Territory level, may apply in this situation). The new laws only apply where a carriage service (e.g. SMS or the internet) is used to transmit the images. This contrasts with the revised approach taken recently in the UK, where the creation of non-consensual intimate images is now criminalised separately from, and in addition to, offences around their distribution. 

Realism and deepfakes

Neither the proposed Australian offences, nor the UK offences mentioned above, create an absolute prohibition on deepfake content, but rather criminalise conduct where certain material depicts a person who has not given their consent for its distribution. 

The Bill clarifies that it is irrelevant whether the material transmitted is in an unaltered form (i.e. an actual photograph) or has been created, or altered in some way, using technology. It would likely extend to material created by non-deepfake technology (such as Photoshop or similar tools). A note in the drafting of the Bill clarifies that this includes images, videos or audio that have been edited or entirely created using technology (such as deepfakes made using AI) to present a realistic but false depiction of the person. However, it is the phrase ‘depicts, or appears to depict’ in the drafting of the primary offence that formally captures synthetic media. 

The Explanatory Memorandum for the Bill notes that ‘appears to depict’ is intended to cover material where the depiction ‘reasonably or closely resembles an individual to the point that it could be mistaken for them’, ensuring that ‘the offence applies where an image is obviously a representation of a real person’. Even with this context, it remains somewhat unclear whether:

  • the offending imagery needs to have a high degree of realism or believability; 
  • the offending imagery needs to be of a person who can be identified by any of the creator, sender or recipient of the material; and 
  • the context in which the materials are transmitted is a factor that may be taken into account in this assessment (e.g. would including a person’s name in the file name, or in a covering message, affect whether these laws are engaged?).

As we discussed in response to the Taylor Swift deepfake controversies earlier this year, the realism of a synthetic image is not the sole determinant of its capacity to cause harm. An image may be of a visual quality that would lead most viewers to recognise that it is not an authentic photograph. However, the depiction may still be of such an offensive nature that the privacy and dignity of the person whose likeness is depicted are harmed. 

Exceptions

The Bill also contains various exceptions to the application of the primary offence, including where a reasonable person would consider transmitting the material to be acceptable. Establishing such an exception would involve consideration of: 

  • the nature and content of the material;
  • the circumstances in which the material was transmitted;
  • in relation to the person depicted, or appearing to be depicted, in the material:
    • their age, intellectual capacity, vulnerability or other relevant circumstances;
    • the degree to which the transmission of the material affects their privacy; and
    • their relationship with the person transmitting the material.

The objective test is said to allow community standards and common sense to be considered in relevant decisions. While such factors may be intended to safeguard those at increased risk of falling victim to this offending, it is open to question whether they may also work against other classes of persons. For example, is an offence less likely to have been committed if the alleged victim had previously shared other intimate images of themselves? It is also unclear in what situations a ‘reasonableness’ test may be applied to override a person’s consent or lack of consent. 

The test was also said to ensure the transmission of adult pornography is not unduly criminalised, given the ‘impossibility of a person assuring himself or herself that the person depicted’ has consented to its transmission. This drafting will affect the ability to establish these offences where the person depicted was involved in the material’s production on a commercial basis (such as via subscription-based services), but other persons have circumvented the technical and commercial controls placed on that material. 

Out-of-scope deepfakes

If enacted, the Bill would further underscore the unacceptability of such offending at a federal level, complementing civil schemes provided under the OSA, as well as State and Territory criminal laws. While the title of the Bill may include ‘deepfake’, the amendments fill a particular gap in Australia’s existing legislative response to intimate-image abuse, rather than responding to deepfakes more generally.

It also remains the case that the Bill only captures a specific type of deepfake (sexually explicit material) in particular circumstances (where the depicted person does not consent to its transmission), with other forms of harmful deepfakes out of scope. By way of example, the Bill does not deal with deepfakes that are designed to humiliate or embarrass the subject individual, or that are designed to mislead or deceive, whether for financial (including blackmail), political or other purposes.

In the context of the Commonwealth Criminal Code, these kinds of harmful uses of deepfakes may or may not amount to an offence under the more general offence of ‘using a carriage service to menace, harass or cause offence’ in section 474.17. Depending on the given context, individuals impacted by other forms of deepfake harm may be limited in the legal protection they can practically seek (see our earlier article on this topic).

Even in the context of domestic violence (the core policy driver behind the Bill, as noted in the government’s press release), individuals can produce and share deepfakes that are highly abusive and offensive in nature while steering clear of these new offences. For example, synthetic material could attempt to humiliate or intimidate the depicted person in other ways, such as by depicting them with graphic injuries or as the victim of physical abuse. Such categories of offensive but non-sexual material are recognised elsewhere in the Criminal Code, such as in its extensive definition of child abuse material.

Outside of a domestic violence context, there remain widespread concerns about the role of deepfakes in political contexts. Recent public hearings by the Senate Select Committee on Adopting Artificial Intelligence (AI) noted the dangers of deepfakes in manipulating information, voters and electoral processes. For example, a politician could be convincingly depicted making an offensive or factually incorrect statement. In this context, the Senate Committee discussed numerous responses, ranging from better awareness of the threat to completely banning the use of AI in election campaigning.

The new laws, if passed, will address a very particular type of deepfake (one that already has considerable coverage under existing laws). While the Bill fills a gap, there remain no laws that specifically respond to the harms arising out of the use and dissemination of non-sexually explicit deepfakes. As with any regulatory response to harmful speech, such laws would need to be carefully scoped and calibrated so as not to undermine legitimate forms of expression. 

""