24/05/2021

On 12 May, the UK Government published its Online Safety Bill, designed “to make the UK the safest place in the world to be online while defending free expression, to create an overarching framework for the regulation of social media”.

The Online Safety Bill regulates two types of services if they have “links to the UK”:

  • A regulated user-to-user service, which is “an internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user, may be encountered by another user”. This definition would cover social media platforms such as Facebook; and
  • A search service, but not including search functionality limited to a single website or database.

A service will have the requisite “links to the UK” if it has a significant number of UK users, if UK users form one of its target markets, or if the service is capable of being used in the UK and there are reasonable grounds to believe there is a material risk of significant harm to individuals in the UK from content on, or accessed through, the service. This would seem to capture pretty much the whole Internet!

The Bill imposes a set of broadly worded ‘duties of care’ on service providers.

Risk assessments

All providers of user-to-user services must undertake a thorough risk assessment to discover whether “illegal content” can be accessed on their service. Illegal content is terrorism content, child exploitation content, other criminal content specified by the Secretary of State or content which is otherwise illegal and for which there is an individual victim (e.g. domestic abuse).

Providers of user-to-user services that are likely to be accessed by children must also assess the risk that content which is harmful to children can be accessed on their service. A service can only be treated as not being accessible by children “if there are systems and processes in place that achieve the result that children are not normally able to access the service or part of it”. The Secretary of State can designate types of content as harmful to children, but service providers must also make their own judgments about whether there are reasonable grounds “to believe that the nature of [other non-designated] content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child of ordinary sensibilities”, or about whether content has a particular effect on children with “a certain characteristic (or combination of characteristics)”, e.g. LGBTIQ+ children. This harm assessment has to be undertaken separately for each age group of children.

Category 1 providers, which are designated by the Secretary of State and are likely to be the big social media platforms, must assess the risk that content harmful to adults can be accessed on their service. Again, the Secretary of State can designate types of content as being harmful to adults, but the service provider must also make their own judgments about whether other content can have a significant adverse impact on ‘adults of ordinary sensibilities’ or adults with particular characteristics.

These risk assessments must look at:

  • the level of risk “taking into account algorithms used by the service, and how easily, quickly and widely content can be disseminated by means of the service”; and
  • the extent to which the service’s own functionalities facilitate the presence of and dissemination of illegal content, and how the design of the service could be changed to mitigate risk.

Ongoing safety duties

Service providers are then subject to ongoing ‘safety duties’ to ensure the relevant type of harmful content remains inaccessible on their service. These ongoing safety duties require:

  • taking ‘proportionate steps to mitigate and effectively manage the risks of harms to individuals’ from the relevant type of harmful content;
  • using ‘proportionate systems and processes’ to minimize the presence of the harmful content, minimize the length of time individual postings of that content are online, minimize dissemination and swiftly take it down; and
  • setting out in their service terms processes for complaints to be made about the presence of the harmful content.

All providers of user-to-user services are also subject to positive duties to protect freedom of expression and privacy when designing the above safety policies.

Search engines are subject to parallel duties in relation to illegal content, content which is harmful to children and protecting freedom of expression and privacy.

Big platforms to protect democracy

Potentially much more far-reaching, Category 1 providers have duties to ‘protect content of democratic importance’. This is content published by a ‘recognized news publisher’ that appears to be “specifically intended to contribute to democratic political debate in the UK”. A recognized news publisher includes the BBC, other broadcasters, and other print or online news businesses in the UK which are subject to industry conduct codes.

Category 1 providers also have a duty to protect ‘journalistic content’, which is much more broadly defined to include content which “is generated for the purposes of journalism”. The UK Government has said that “[c]itizen journalists’ content will have the same protections as professional journalists’ content”.

These positive duties on Category 1 providers require them to:

  • operate systems and processes to ensure the importance of the free expression of content of democratic importance or journalistic content is taken into account in their decision making;
  • apply their processes and systems in the same way to a diversity of political opinion;
  • set out in their service terms a process by which posters of content which they claim is protected by these duties can complain if the content is taken down and, if the complaint is upheld, have the content quickly restored; and
  • set out in their terms how content is to be identified as journalistic content (i.e. how to identify ‘citizen journalism’).

Providers of user-to-user services are also subject to duties to have effective complaints mechanisms in place and maintain records of their compliance with the other duties.

Powerful regulator

The regulator, OFCOM, has power to ‘turn the dial up or down’ on the risk assessments and the ongoing safety duties through ‘risk profiles’ that OFCOM determines for different types of services. OFCOM can categorize services having regard to the user base, business model, governance and risk levels. If OFCOM adjusts the risk level of a type of service, this will trigger a new round of risk assessments to be undertaken by the providers of that service type.

Fines of up to £18 million or 10% of a provider’s annual global turnover, whichever is greater, apply for breaches. A court, on application by OFCOM, can also restrict a service’s availability or require access to a service’s platform and systems.

 

Read more: UK Online Safety Bill: a set of broadly worded ‘duties of care’ on service providers

""