Two weeks ago we examined the European Commission’s assessment of its online disinformation code. This week we analyse proposals to address fake news in Australia and the UK, which are heading down very different tracks.
First, some terminology: disinformation refers to false information deliberately created and spread; misinformation means false information unintentionally created or spread; and malinformation is accurate information spread by malicious actors to cause harm.
Australia: An expanded EU model
As recommended by the ACCC’s Digital Platforms Inquiry, Australia is following the EU model and developing a Code on Disinformation. The Australian Communications and Media Authority (ACMA) has published a position paper (the Paper) setting out a voluntary code framework for digital platforms to consider. ACMA expects digital platforms to work together to have a single, industry-wide code in place by December 2020.
The model code covers news and information (including advertising) that is public or semi-public, is shared via a digital platform, and has the potential to harm individuals and/or society. This is broader than the EU’s Code on Disinformation because it also aims to capture harms caused by misinformation and malinformation.
ACMA is encouraging online search engines, social media platforms and digital content aggregation services with at least one million monthly active users in Australia to participate: Facebook, YouTube, Twitter, Google Search, Google News, Instagram, TikTok, LinkedIn, Apple News, Snapchat and others.
The model code is an outcomes-based regulatory framework setting out the objectives without being prescriptive about the measures to attain them. ACMA believes this high-level approach is better suited to dynamic and fragmented digital markets.
ACMA recommends a graduated, risk-based approach, with different measures to address coordinated disinformation and harmful misinformation. Examples include flagging or demoting content, fact checking, preventing the monetisation of disinformation, and enhancing complaints handling and independent review processes.
UK: A much bigger framework
In April 2019, the UK government published an Online Harms White Paper. It identifies misinformation as one type of online harm to be tackled within a wider regulatory framework covering the full gamut of dangerous and unacceptable online conduct, including terrorism, child sexual exploitation, depictions of serious violence, hate crimes and cyberbullying. The White Paper rejected a code-based approach because only a relatively small group of the larger companies tends to engage with voluntary codes, platform terms and conditions are inconsistent on online safety issues, and there is no independent forum for consumers to resolve complaints.
The White Paper also rejected ‘tinkering’ with the existing legal liability approach for online content ‘published’ by providers. Because providers are protected from legal liability for illegal user-generated content until they know or should know about it, they are not incentivised “to make the systemic improvements in governance and risk management processes that we think are necessary.”
Instead, the White Paper proposed a new statutory duty of care requiring providers “to take reasonable steps to keep users safe, and prevent other persons coming to harm as a direct consequence of activity on their services.” This duty of care would apply ‘at large’ because “in some cases the victims of harmful activity – victims of the sharing of non-consensual images, for example – may not themselves be users of the service where the harmful activity took place.”
The regulator will issue codes dealing with specific types of online harm, but providers will still be expected to comply with the overarching statutory duty even where no code applies.
The regulatory framework will apply to companies that provide services or tools that allow, enable or facilitate users to share or discover user-generated content or to interact with each other online, including hosting and instant messaging services. However, simply having a social media presence does not mean a business will be regulated.
The regulator will be guided by the principle of proportionality in assessing potential remedies, which will include notices and warnings, fines, business disruption measures and, in serious cases, Internet Service Provider blocking. Senior managers will also face personal liability.
Providers will be expected to have an effective, accessible complaints system through which users can report concerns about online information. The White Paper also canvasses the idea of ‘super complaints’, which would allow designated organisations to bring complaints about ‘systemic’ failures by providers.
The regulator is likely to be ACMA’s UK counterpart, Ofcom. Companies will have an appeal process to challenge the regulator’s decision.
In February 2020, the UK government published feedback from its initial consultation. Respondents raised concerns about freedom of expression and about clarity and certainty for businesses. They would like further clarity on the thresholds that would trigger enforcement action and on standing to sue, as well as additional guidance on compliance expectations.
The UK government is continuing to consult but seems committed to its approach.
Read more: Online Harms White Paper - Initial consultation response