25/06/2021

The Australian Code of Practice on Disinformation and Misinformation

The Australian Code of Practice on Disinformation and Misinformation (the “Code”) commenced on 22 February 2021, around 12 months after the Australian Government asked digital platforms to develop a voluntary code to address disinformation and misinformation and assist users of their services to more easily identify the reliability, trustworthiness and source of news content.

The request is part of a broader Australian Government strategy to reform the technology and information dissemination landscape and implement certain recommendations made by the ACCC in the Digital Platforms Inquiry.

The Australian Communications and Media Authority (ACMA) oversaw development of the Code, which was drafted by the industry association DIGI. Later this month, the ACMA is due to report to the Australian Government on whether the actions and responses of those digital platforms that have adopted the Code sufficiently respond to the concerns identified by the ACCC regarding harmful misinformation and disinformation. The Government will then consider the need for further measures, including, potentially, the introduction of mandatory regulation.

So far, voluntary signatories to the Code include Twitter, Google, Facebook, Microsoft, Redbubble, TikTok, Adobe and Apple. However, the Code encourages all other participants in the digital information sphere to use the Code as a guide to best practice in developing their own response to the evolving challenges of harmful disinformation and misinformation.

In anticipation of the ACMA report, this article explains the key features of the Code, key themes of the first signatory reports submitted under the Code, how the Code fits into the broader regulatory landscape of online content in Australia, and what’s next.

Key features of the Code

The Code targets misinformation and disinformation which threatens to undermine democratic and policy making processes or public goods such as public health, safety, security or the environment (Harm).

Both misinformation and disinformation are defined as digital content that is verifiably false, misleading or deceptive, is propagated by users of digital platforms and is reasonably likely to cause Harm. “Misinformation” is often lawful digital content that may not be clearly intended to cause Harm, whereas “disinformation” captures behaviours intended to artificially influence users’ online conversations, to encourage users of digital platforms to spread digital content, or to propagate digital content via spam and other deceptive, manipulative, bulk or aggressive behaviours.

There are two key requirements for signatories under the Code:

  1. each signatory must commit to the Code’s core objective: to provide safeguards against harms that may be caused by disinformation and misinformation (Core Objective). However, signatories can make additional, optional commitments relevant to how content is delivered on their platform; and
  2. each signatory must submit a report within three months of adopting the Code, and annually thereafter. The reports must describe the signatory’s progress towards achieving the outcomes of the Code and explain why it has not elected to make any optional commitments beyond the minimum commitment (where that is the case). The first set of reports was published at the end of May 2021 (https://digi.org.au/major-technology-companies-adopt-new-australian-code-of-practice-on-disinformation-and-misinformation-copy).

As the Code is voluntary, a signatory may withdraw from the Code or a particular commitment at any time.

What do the commitments actually require?

The Core Objective of the Code requires a signatory to:

  • develop and implement measures which aim to reduce the spread of, and users’ potential exposure to, disinformation and misinformation;
  • publish policies, procedures, guidelines and information relating to the prohibition of, and management of, user behaviours and content that may spread disinformation and misinformation via its services;
  • maintain a reporting function that enables users to report behaviour or content that violates those policies; and
  • publish publicly available reports regarding the detection and removal of content that violates platform policies.

The additional objectives a signatory may choose to adopt will depend on how content is delivered on its platform (e.g. a user-generated content platform would likely adopt different measures to a search engine). For example, a signatory could commit to implement measures that empower consumers to make better informed choices of digital content. This could take the form of returning diverse perspectives on matters of public interest in response to an online search request, a signal to users indicating the credibility of a news source, or enabling a user to check the authenticity or accuracy of online content or to identify the source of political advertising.

The Code also provides examples of how the objectives and outcomes may be met, but these are guidelines only and each signatory can decide how it will moderate harmful misinformation and disinformation on its platform.

More than just arbitrary evaluation

Digital platforms (including the current signatories) already evaluate and moderate content to varying degrees in accordance with their own discrete policies. The Code offers industry a clear, unifying objective without reducing the flexibility digital platforms have in the way they choose to moderate content. It encourages them to be more accountable in their role as facilitators of free speech and the open exchange of opinion, information, debate and conversation, by focusing attention on their response to Harms caused by disinformation and misinformation. The common reporting requirement will also help digital platforms and other stakeholders to evaluate their practices against those of other industry participants.

The Code goes beyond self-assessment. It requires an industry facility for addressing non-compliance to be established within six months of its commencement (approximately by the end of August 2021), and the establishment of an industry sub-committee to review the actions of signatories and their compliance every six months. The Code will also be reviewed after one year, and every two years thereafter, by industry, government and other stakeholders. These additional mechanisms should encourage greater responsiveness and engagement from signatories.

The outcomes of the first reports

The eight current signatories to the Code issued their first reports under the Code in May 2021. All of the signatories were able to demonstrate, to varying degrees, how their existing policy frameworks align with the Core Objective and the other objectives they chose to adopt, and explained their areas of focus for the future.

Key themes from the reports include:

  • signatories commonly rely on their policy frameworks (for example, Facebook’s Community Guidelines) to regulate content on their platforms;
  • it is common for signatories to have an internal reporting and complaints system to respond to disinformation and misinformation;
  • it is common for a combination of human and technological measures to be used to monitor content for misinformation and disinformation;
  • responses to identified misinformation and disinformation vary from platform to platform, but common actions include removing the content, applying a label or warning to the content, and suspending or deleting accounts where there have been multiple policy breaches; and
  • COVID-19 has been a particular challenge, with surges of virus, medical and vaccine related misinformation and disinformation.

How the Code fits into online safety regulation in Australia

The voluntary Code joins other laws and regulations which operate to address online safety in Australia, such as the Enhancing Online Safety Act 2015 (Cth) and the impending Online Safety Act, but sets itself apart by focusing on digital content that is false, misleading or deceptive. 

What’s next

The measures taken by digital platforms in response to violations will be under review, with the ACMA poised to assess the effectiveness of the Code in addressing disinformation and misinformation on digital platforms later this month. Last year, the European Commission assessed the effectiveness of the similar, voluntary EU Code of Practice on Disinformation. It found a number of shortcomings owing to that code’s self-regulatory nature, and recommended measures to improve the consistency of the key concepts of misinformation and disinformation and the appointment of a regulatory body to enforce compliance with the code, two things the Australian Code does not itself prescribe.

 

Lesley Sutton, Samantha Karpes and Claire Arthur
