This article was first published in the Internet Law Bulletin Volume 24 Number 4.

Three takeaways for lawyers:

  • COVID-19 has accelerated many regulatory measures to address harmful false content online. Two of the swiftest examples are Australia’s and Europe’s disinformation codes of practice. The appropriate scope and enforceability of these codes remain unsettled.
  • Australia’s code is the most expansive one yet. The decision to broaden the code’s scope was arguably overinfluenced by the galvanising yet reductive “infodemic” metaphor. However, the code’s breadth also reflects the reality that harm can come from false information even when it is spread with good intent.
  • The unsettled nature of these codes’ scope illustrates how challenging the issue is. Careful code review will be vital to ensure that free speech without fear of persecution, privacy, and our “collective sensemaking” in times of crisis are not lost to overregulation.


On 2 February 2020, the world was on the cusp of being flung into what will likely be the defining global event of our lifetimes. However, the World Health Organization’s (WHO) 2019-nCoV report that day focused on another phenomenon: the infodemic accompanying the pandemic.1

WHO Director-General Ghebreyesus amplified the gravity of this “over-abundance of information” a few days later when he said, “we’re not just fighting an epidemic; we’re fighting an infodemic. Fake news spreads faster and more easily than this virus, and is just as dangerous”.2

Fast forward to July 2021 and President Biden stretched the analogy even further in saying social media was “killing people” by not taking down vaccine misinformation,3 although he subsequently sought to walk back these comments.4

Social media platforms have not, however, been standing still. Some would argue quite the contrary.5 In February 2021, a year after Ghebreyesus’ words and months before Biden’s, the most expansive code to reduce harmful false digital content was released: the Australian Code of Practice on Disinformation and Misinformation (Code).6 The Code’s scope was greatly influenced by the pandemic.

Tracking the evolution of the Code and related policies illuminates how quickly proposed regulation can change scope, and how challenging this mis(or was it dis?)information issue is.

Why the ACCC recommended a code

As with many things in Australia’s digital platform regulatory sphere, the Code began with the Australian Competition and Consumer Commission’s (ACCC) Digital Platforms Inquiry Final Report. That 623-page June 2019 report came from that faraway pre-COVID time. Nevertheless, the ACCC did focus heavily on disinformation, or “[f]alse or inaccurate information that is deliberately created and spread to harm a person, social group, organisation or country”.7

The ACCC found that digital platforms had “considerable influence” in shaping the news Australians viewed, and “to the degree that online consumption makes it harder for public interest journalism to reach audiences, but easier for disinformation . . . to do so, this is clearly a significant public policy concern”.8

After analysing various Australian and overseas studies, the ACCC recommended a mandatory code of conduct for the digital platform industry to govern the handling of complaints about disinformation.

How the Code’s scope changed

In December 2019, the government responded to the ACCC’s recommendation by asking major platforms to develop a voluntary code for disinformation and news quality.9

On 25 January 2020, the first case of COVID-19 in Australia was confirmed.10 The coronavirus’s impact on the Code’s development was significant.

In June 2020, the independent regulator tasked with overseeing the Code, the Australian Communications and Media Authority (ACMA), said the Code should be broadened to cover misinformation, defined as false, misleading or deceptive information that “has the potential to cause harm to an individual, social group or the broader community”, regardless of whether it intended to cause harm.11 This broadening decision was deeply influenced by the pandemic:

The first half of 2020 has been marked for many Australians by two extraordinary events: the unprecedented summer bushfire season and the COVID-19 pandemic.

Both these events have highlighted the impact and potential harm of misinformation on both Australian users of digital platforms and the broader Australian community.12

Given the experience of both the COVID-19 pandemic and the summer bushfire season, the ACMA considers a focus on disinformation to be too narrow for platforms to adequately address the wide range of potential harms, including from content that has been distributed by those who genuinely believe it to be true and have no intent to cause harm.13

The industry body drafting the Code, Digital Industry Group Inc (DiGi), adopted the ACMA’s proposed scope.

Why did the Code’s scope change?

Back in the Digital Platforms Inquiry, the ACCC recommended the Code should not cover misinformation out of concerns about infringing freedom of expression, particularly to allow discussion without fear of government interference:

The ACCC considers that any intervention directly aimed at affecting individuals’ access to information must carefully balance the public interest with the case for free speech and the right of individuals to choose. In particular, it should avoid the Government directly determining the trustworthiness, quality and value of news and journalism sources.

To balance these competing interests, the recommended code does not include ‘misinformation’ which is defined as false or inaccurate information not created with the intention of causing harm.14

Strikingly, when the ACMA said the Code should cover both disinformation and misinformation, its main solution to the free speech concern was to have a flexible Code so the platforms themselves could decide where to “draw the lines” in appropriately balancing free speech and intervention.15

The Code’s expanded scope was not the only change from the ACCC’s recommendation. The Code also differs markedly on enforceability.

The ACCC recommended the Code be mandatory for all platforms with more than 1 million monthly active users in Australia. The implemented Code is instead voluntary, and when platforms sign up, they can choose which Code commitments they adopt.

Further, instead of the “significant enforcement and penalty provisions” that the ACCC recommended, the implemented Code has a “facility” for addressing non-compliance and a sub-committee of signatory and independent representatives to monitor and review compliance.16

The shift from a mandatory Code with a narrow scope to a more expansive voluntary Code reflects how the more strictly enforceable regulation is, the less expansive it can be.

The only other Code of this kind, the European Code of Practice on Disinformation, is similarly deciding where to land. The European Commission’s May 2021 recommendations seem to attempt both moves: making that Code more enforceable by appointing a regulator to enforce compliance, while also expanding it to address misinformation, though only through proportionate actions such as empowering users to distinguish misinformation from authoritative content, meaning “not all commitments of the Code would apply to misinformation”.17

Australia’s more voluntary Code instead uses disinformation and misinformation almost interchangeably, leaving platforms to decide what measures apply to what content.

What scope is best when the issue involves such a delicately balanced dance?

It’s not an easy question

The continuing flux in these Codes’ scope reflects how challenging the issue of addressing harmful false digital content is.

The ACCC and the ACMA both emphasised the need to “carefully balance” the public interest in regulating content with rights to expression and privacy. The fact that, despite this shared emphasis, the former recommended a mandatory, narrow Code and the latter a voluntary, expansive one confirms the complexity.18

Both the UTS Centre for Media Transition and the Australian Broadcasting Corporation’s papers on the Code emphasise the need for “careful consideration” in determining the Code’s scope, neither conclusively saying what approach is best.19

Academic commentary on the topic is similarly nuanced. Simon and Camargo’s analysis of the term “infodemic” argues there is a “delicate trade-off between generating attention, but without incurring the risks [of conflating the spread of information with the spread of a virus]”.20

Starbird et al’s research into disinformation’s participatory nature describes the challenge as a “knot . . . as platforms attempt to navigate a difficult compromise in relation to existing legal statutes and dynamic social norms around ‘freedom of speech’”.21

Least hopeful of all, the Centre for Data Innovation and Ethics’ forum on artificial intelligence’s role in addressing misinformation on social media found that “altogether, participants were pessimistic about our collective capacity to resolve the challenges of misinformation in the immediate future”.22

Is disinformation vs misinformation even the right question to ask?

The UK government would likely answer no. Unlike the European Union and Australian dis/misinformation focus to determine scope, the UK government’s Online Safety Bill takes a harms-based approach that argues “inaccurate information” can be harmful whether it is disinformation, intending to harm, or misinformation, sincerely passed on. The Bill therefore aims to reduce harm, while prioritising a free, open internet and freedom of expression.23

The ACMA had said the UK approach “significantly informed the ACMA’s expectations [of the Code DiGi was yet to draft]”. This can be seen in how the ACMA described misinformation as part of the problem (despite using the “infodemic” label, which suggests the “homogenising” of complexity that Simon and Camargo warn against):

Labelled by the World Health Organisation as an ‘infodemic’, [COVID-19] has demonstrated that . . . Malicious campaigns by state actors and scammers are only part of the problem and misinformation spread by ordinary users presents a substantial risk of harm.24

Starbird et al argue that attempting to regulate information in relation to authenticity and intent has “inherent difficulties”:

. . . our work reveals entanglements between orchestrated action and organic activity, including the proliferation of authentic accounts (real people, sincerely participating) within activities that are guided by and/or integrated into disinformation campaigns . . . Platform policies designed around rooting out “coordinated inauthentic behavior” would have difficulty addressing these campaigns once they have reached this level of maturity.25

Yet they acknowledge a campaign-focused policy has downsides:

A policy like this could empower the platforms to take action based on the provenance of information—eg within a campaign designed to mislead . . . rather than the truth value of a piece of content or the authenticity/sincerity of a specific account.

However, this approach leaves the platforms in a position of taking action to remove or reduce visibility of content that may be shared or even produced by authentic accounts of sincere online activists. And this might put the platform’s policy at odds with commonly held values like “freedom of speech” and platform goals such as providing a place for activists (including those in oppressed groups) to congregate and organize.26

Again, the dance continues.

So what should be the final move?

The scope of regulatory measures worldwide for harmful false content online continues to move. This unprecedented pandemic has injected further gravity to the question. The high information uncertainty and anxiety that comes with viral outbreaks can lead individuals to look for information to give certainty and fill that void, meaning we are particularly vulnerable to disinformation.27

Our increased collective anxiety and uncertainty can also stoke moral panics, elevating the risk of implementing measures that restrict rights such as free speech. The right to express yourself in public without fear of persecution, government interference or legal sanction is perhaps especially important during a contagious global outbreak, when government power over individual freedoms inevitably, and usually for legitimate public health reasons, massively increases.

Measures such as the Code also risk restricting privacy, and important processes such as “collective sensemaking”. We search for, disseminate and synthesise content to help us decide whether to act: from deciding to evacuate in a hurricane to deciding to be vaccinated.28 The first sentence of the ACMA’s paper on the Code is, in fact, “[d]igital platforms are a key source of news and information for many Australians”. There is much we may lose if regulation goes too far.


The dance is not over, and there are no easy solutions. Australia’s Code is still to be reviewed. Given that our understanding of these phenomena is developing almost literally in real time, that review process is even more vital. However, finding a rational, clear policy approach is made even more difficult by colourful but inaccurate metaphors like “infodemic”. There is a risk of what sociologist Corcuff termed “bulldozer concepts”, which are so all-encompassing that they flatten out and homogenise all complexity.29 Just another step for policymakers to consider as they decide their final moves.


  1. World Health Organization (WHO), Novel Coronavirus (2019-nCoV) Situation Report — 13 (2 February 2020), www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf.
  2. WHO, Munich Security Conference Speech (15 February 2020), www.who.int/director-general/speeches/detail/munich-security-conference.
  3. M Parker and J Sink, “Biden Says Social Media ‘Killing People’ With Virus Fiction” Bloomberg (17 July 2021) www.bloomberg.com/news/articles/2021-07-16/biden-says-social-media-falsehoods-on-covid-are-killing-people.
  4. K Watson, “Biden softens comment about Facebook ‘killing people’ because of COVID misinformation” CBS News (17 July 2021) www.cbsnews.com/news/biden-facebook-covid-
  5. J C Wong, “Tech giants struggle to stem ‘infodemic’ of false coronavirus claims” The Guardian (10 April 2020) www.theguardian.com/world/2020/apr/10/tech-giants-struggle-stem-infodemic-false-coronavirus-claims.
  6. DiGi, Australian Code of Practice on Disinformation and Misinformation (February 2021).
  7. Australian Competition & Consumer Commission, Digital Platforms Inquiry Final Report (June 2019) at 616.
  8. Above, at 358.
  9. Australian Government, Regulating in the digital age — Government Response and Implementation Roadmap for the Digital Platforms Inquiry (2019).
  10. Department of Health, “First confirmed case of novel coronavirus in Australia” media release (25 January 2020) www.health.gov.au/ministers/the-hon-greg-hunt-mp/media/first-confirmed-case-of-novel-coronavirus-in-australia.
  11. ACMA, Misinformation and news quality in digital platforms in Australia — a position paper to guide code development (June 2020), 3.
  12. Above, at 2.
  13. Above n 11, at 20.
  14. Above n 7, at 370.
  15. Above n 11, at 3.
  16. Above n 7, at 371; Above n 6, at 7.4.
  17. European Commission, European Commission Guidance on Strengthening the Code of Practice on Disinformation (26 May 2021) 5.
  18. Above n 11, at 26.
  19. Australian Broadcasting Corporation, ABC Submission to DiGi Draft Industry Code on Disinformation (November 2020), 2.1; UTS Centre for Media Transition, Discussion Paper on an Australian Voluntary Code of Practice for Disinformation (October 2020) 11.
  20. F Simon and C Camargo, “Autopsy of a metaphor: The origins, use and blind spots of the ‘infodemic’” (2021) New Media and Society; see also Gilbert + Tobin, A Belgiorno-Nettis and P Waters, The perils of labelling COVID misinformation as a ‘social media pandemic’ (27 July 2021) www.gtlaw.com.au/knowledge/perils-labelling-covid-misinformation-social-media-pandemic.
  21. K Starbird, A Arif and T Wilson, “Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations” (2019) Proceedings of the ACM on Human-Computer Interaction.
  22. Centre for Data Innovation and Ethics, The role of AI in addressing misinformation on social media platforms (August 2021).
  23. UK Minister of State for Digital and Culture, Draft Online Safety Bill (May 2021). See also Gilbert + Tobin, Code of Practice on Disinformation: Stemming the tide of fake news: Part II (29 September 2020) www.gtlaw.com.au/insights/code-practice-disinformation-stemming-tide-fake-news-part-ii.
  24. Above n 11, at 13.
  25. Above n 21, at para 5.2.
  26. Above n 21, at para 5.2.
  27. K Starbird, “How a Crisis Researcher Makes Sense of Covid-19 Misinformation” Medium (9 March 2020) https://onezero.medium.com/reflecting-on-the-covid-19-infodemic-as-a-crisis-informatics-researcher-ce0656fa4d0a.
  28. K Starbird, E Spiro and K Koltai, “Misinformation, Crisis and Public Health — Reviewing the Literature” MediaWell (25 June 2020) https://mediawell.ssrc.org/literature-reviews/misinformation-crisis-and-public-health/versions/v1-0/; above n 21.
  29. See above n 20.