27/11/2020

Has #StopHateForProfit faltered in the UK?

Last week, a draft code on managing online disinformation in Australia was released for consultation. The Australian code was prompted by the UK Government's proposal for an Online Harms Bill, which followed the death of a 14-year-old girl who had viewed online images of self-harm.

The UK Online Harms Bill is now unlikely to come into effect until 2023 or 2024. In response to the delay, the chair of the House of Lords oversight committee said, “I’m afraid we laughed”. While the Government says that COVID-19 is to blame, the risk of COVID vaccines being undermined by ‘anti-vaxxers’ makes legislative action on online misinformation urgent. What has gone so awry in the UK?

The Online Harms White Paper (the White Paper) was promoted by the UK Government as the first attempt globally to address a wide range of ‘online harms’ (for example, child exploitation, terrorist propaganda, pro-suicide content, and online disinformation) in a “single and coherent way”. The White Paper’s ‘unified theory of harm’ is to be built around a statutory duty of care on online service providers to protect users, backed by a range of remedies, including substantial fines.

The consultation drew over 2,400 responses from large tech businesses, small-to-medium enterprises, academics, think tanks, children’s charities, government organisations and individuals. In February 2020, the UK Government published its initial response to the consultation. The full response to the White Paper consultation was promised for the UK spring (March to May) but was not delivered.

In May 2020, the House of Lords Democracy and Digital Technologies Committee said the UK government had “failed to get to grips with the urgency of the challenges of the digital age” and should immediately publish draft legislation.

Concerns raised about the UK Online Harms Bill

Most of the criticisms were directed at the ‘big, bold idea’ at the heart of the Online Harms Bill – the statutory duty of care. The Child Rights International Network (CRIN) argued, in effect, that the statutory duty of care was neither ‘fish nor fowl’ (or their digital equivalent):

“The nature of the “duty of care” within the Online Harms White Paper is not clearly addressed. It does not appear from the functioning of the duty set out within the proposals that it is intended to constitute a common law duty of care that may act as a basis for a claim of negligence…Nor does the “duty of care” appear to act as a basis for criminal liability…The primary effect of the duty…appears to be to set the scope of regulation and the remit and mandate of the regulator. This aim would be more clearly achieved by doing so explicitly, rather than clothing this purpose in the language of a duty of care and therefore risking misunderstanding and misinterpretation.”

Others argued that the lack of certainty about the scope of the statutory duty of care posed a potential threat to freedom of speech. There were fears that digital platform providers might be overcautious when filtering content, in order to avoid fines for causing vaguely defined online harms.

Complications such as these are unsurprising given that, as CRIN observed, “the concept of harm is at the core of the regulatory model, but the White Paper does not provide a definition of the term or set the scope of its application… the regulatory model is based on a fundamental uncertainty”.

The UK Government’s February response flagged a fairly significant change in direction by:

  • making the protection of users’ rights online, including freedom of expression, an overarching principle of the regulation of online harms;
  • winding back the power to require removal of content so that it applies only to illegal content, and not to content which is legal but has the potential to cause harm; the latter will instead be managed by requiring platform providers to adopt processes that promote consistency and transparency;
  • no longer providing for the regulator to investigate individual complaints, in recognition of the concerns raised about freedom of expression; and
  • strengthening the redress mechanisms of the platform providers under which users can report harmful content, including by requiring platform providers to give reasons for their decisions.

Without going into the details of the draft Australian code, it appears some lessons have been taken from the faltering UK Online Harms Bill process. The code targets disinformation specifically – unlike the UK proposal, which attempts to ‘boil the digital ocean’ by covering a wide range of harms. The Australian code also seems to be centred on the systems and processes to be put in place by online service providers, which is where the UK Online Harms Bill appears to be heading following the consultation feedback.

 

""

""