The internet and technological innovation have revolutionised the way society communicates, works and socialises. At present, on a global basis:
- billions of people use the internet, including to access news, shopping and social media websites and applications;
- the number of active users grows year on year;
- the average amount of time a user spends online increases each year;
- more and more children aged between 8 and 15 have their own smartphone that can be used to browse the internet; and
- almost anything on the internet can be accessed from almost anywhere.
This raises the question: how do you police the entire internet?
We are no closer to an answer that ensures everyone has a safe experience online, or to a means of guarding against all individuals who use the internet for immoral or illegal purposes. The issue has gained increasing prominence, highlighted by the Cambridge Analytica data scandal and the online footage of the Christchurch terror attacks, both of which sparked strong responses from governments, industry and the public.
In Australia, the legal framework for online content regulation has recently been strengthened, conferring greater powers on regulators to control the kind of content available online. For example:
- The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth) (the Abhorrent Violent Material Act) created two new offences for internet service providers (ISPs) hosting abhorrent violent material: failing to ensure the expeditious removal of the content, and failing to notify the Australian Federal Police of details of the material within a reasonable time of becoming aware of it. The Abhorrent Violent Material Act also gives the eSafety Commissioner the power to issue notices to ISPs where satisfied that abhorrent violent material is available.
- On 15 July 2019, the Enhancing Online Safety (Protecting Australians from Terrorist or Violent Criminal Material) Legislative Rule 2019 conferred on the eSafety Commissioner the function of protecting Australians from access to certain violent material. In exercise of this function, on 9 September 2019 the eSafety Commissioner ordered ISPs to block eight websites hosting video footage of the Christchurch terror attacks. Although this was the first time the eSafety Commissioner had ordered that websites be blocked, the Commissioner had previously issued four notices under the Abhorrent Violent Material Act to websites hosting material relating to child abuse.
Although eSafety Commissioner Julie Inman Grant has stated that any decision to block websites must meet an extremely high threshold and will be made only in extraordinary circumstances, the clear message is that the framework now exists to allow action to be taken where it is justified.
Will this be the catalyst for a more interventionist and proactive approach to regulating the dissemination of offensive or illegal online content? Will we see more industry self-regulation? It remains to be seen how these powers will be exercised, but there is a clear appetite for Government to work with industry, including ISPs and social media companies, to develop protocols enabling swifter responses should similar issues arise in the future.
Authors: Tim Gole, Kevin Stewart + Lucy Cottier