26/04/2021

Video-sharing platforms (VSPs) – e.g. TikTok – have become ubiquitous. A recent UK study found that 97% of the UK online population accessed these services in the last year.

Beyond VSP crazes such as how to make animated Picrew avatars of yourself, the VSP experience can be disfigured by unsavoury, hateful or illegal content. Over 16% of teenage VSP users reported having encountered pornography, a proportion similar to that reported by adult users. Notably, around 60% of users remain unaware of the safety measures available on these platforms.

What is the UK doing?

The UK’s new VSP Framework came into force on 1 November 2020. Ofcom, the UK’s communications regulator, recently published a guideline on how it will administer the VSP Framework.

The guideline explains what content can constitute harmful material under the VSP Framework. Harmful material includes:

  • relevant harmful material, which includes incitement to violence or hatred against particular groups, and content which would be considered a criminal offence; and
  • restricted material, which is material deemed inappropriate for under-18s.

The guideline also provides an explanation of the measures VSPs can take to protect users from that harmful material, as well as guidance on how to implement those measures effectively. Such measures include:

  • measures relating to terms and conditions;
  • measures relating to the reporting, flagging or rating of content;
  • access control measures such as age assurance and parental controls;
  • complaints processes; and
  • media literacy tools and information.

VSPs are required to determine whether it is appropriate to take a particular measure by reference to whether doing so is practicable and proportionate.

While protecting minors is an objective of the VSP Framework, the word “pornography” features only eight times in the approximately 70-page guideline. Ofcom says that access to pornography must be subject to the strictest access control measures:

… if a VSP has restricted material on its service that is of a pornographic nature, providers should have a robust access control system that verifies age and prevents under-18’s from accessing such material.

The VSP Framework attempts to minimise minors’ exposure to these videos by imposing age requirements for access to certain online material. The guideline differentiates between age assurance and age verification (a short illustrative sketch of the distinction follows the list below):

  • age assurance is a broad term covering the spectrum of methods that can be used to establish or estimate a user’s age online (e.g. self-declaration of a date of birth); and
  • age verification is a form of age assurance in which a user’s age is established to the greatest degree of certainty practically achievable, and is currently considered the strictest form of access control (e.g. checking a user’s age against their passport, licence or credit card).
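To make that distinction concrete, below is a minimal, purely illustrative sketch in Python. It assumes a hypothetical platform that gates restricted material at 18; the function names, the self-declaration flow and the “document authenticated” flag are invented for illustration and are not drawn from Ofcom’s guidance or any real provider’s implementation.

```python
from datetime import date

# Hypothetical threshold: the guideline is concerned with under-18s.
ADULT_AGE = 18


def age_in_years(dob: date, today: date) -> int:
    """Whole-year age derived from a date of birth."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def self_declared_age_assurance(declared_dob: date, today: date) -> bool:
    """Weak age assurance: the platform simply trusts a self-declared date of birth."""
    return age_in_years(declared_dob, today) >= ADULT_AGE


def document_based_age_verification(document_dob: date, document_authenticated: bool, today: date) -> bool:
    """Sketch of age verification: the date of birth comes from an official document
    (e.g. a passport or licence) that has been independently checked for authenticity."""
    return document_authenticated and age_in_years(document_dob, today) >= ADULT_AGE


# A false self-declaration can pass the weaker check, whereas verification only
# passes once an authenticated document confirms the user's date of birth.
today = date(2021, 4, 26)
print(self_declared_age_assurance(date(2000, 1, 1), today))            # True, whatever the user's real age
print(document_based_age_verification(date(2006, 1, 1), True, today))  # False: the document shows under 18
```

The arithmetic of an age check is trivial in both cases; the difference, and the point of the spectrum the guideline describes, lies in where the date of birth comes from and how much confidence the platform can place in it.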

Ofcom’s guideline was written after the UK government dropped a plan to use strict age filters for minors. Critics warned that minors would have found it relatively easy to evade the age filters by using VPNs or by turning to websites not covered by the law. They also raised privacy concerns amid suggestions that websites could ask users to upload their passports or licences.

Instead, under the UK’s Online Harms approach, responsibility for controlling access by minors has been thrown back to the VSPs themselves. VSPs are subject to a general statutory ‘duty of care’ to “take reasonable steps to keep users safe, and prevent other persons coming to harm as a direct consequence of activity on their services.” The ‘online harms’ covered by the duty of care are broad, ranging from child sexual exploitation and abuse, terrorist content, anonymous online abuse and cyberbullying to online disinformation, amongst others. The duty of care is qualified by ‘so far as reasonably practicable’.

Ofcom says this statutory duty means that VSPs must stay informed of emerging technological developments and online safety solutions when deciding what measures they need to adopt to satisfy the duty of care. For example, for some time there have been online parental tools which allow parents to link their child’s account to their own. The next generation of filters relies on AI to assess the likely age of a user based on their use of language in response to questions, or on analysis of their face.
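As a rough, assumption-laden illustration of the account-linking idea mentioned above, the sketch below models a hypothetical parental control in which a child’s account is tied to a parent’s account and restricted material is withheld unless the linked parent has opted in. The class and field names are invented for illustration and do not describe any particular platform’s tooling.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    user_id: str
    is_minor: bool
    linked_parent: Optional["Account"] = None       # parental link, if one has been set up
    permits_restricted_for_children: bool = False   # parent's opt-in for linked child accounts


def can_view_restricted(account: Account) -> bool:
    """Hypothetical rule: adults may view restricted material; a minor may only
    view it if a linked parent account has explicitly opted in."""
    if not account.is_minor:
        return True
    parent = account.linked_parent
    return parent is not None and parent.permits_restricted_for_children


# Example: a child linked to a parent who has not opted in is blocked by default.
parent = Account(user_id="parent-1", is_minor=False)
child = Account(user_id="child-1", is_minor=True, linked_parent=parent)
print(can_view_restricted(child))  # False until the parent opts in
```

Simple rule-based controls of this kind sit at the weaker end of the spectrum; the AI-based age estimation Ofcom refers to is a probabilistic supplement to them rather than a replacement.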

What is Canada doing?

In Canada, lawmakers are attempting to attack the problem at its source: the porn-based VSPs.

In 2021, a Canadian Parliamentary committee examined the conduct of MindGeek – Pornhub’s parent company based in Canada – over allegations that its platforms facilitated and distributed child pornography, animal pornography and recordings of non-consensual sexual activity. Amid that scrutiny, more than 10 million unverified videos were removed from Pornhub, and lawmakers called for stricter penalties on porn websites that fail to comply with take-down requirements. Credit card companies, such as Visa and Mastercard, cut ties with the websites and launched their own investigations.

What is Australia doing?

Australia has been at the forefront of regulating online content and ensuring online safety for minors. As discussed in Gilbert + Tobin’s article by Andrew Hii and Bryce Craig (The Most Important Regulator You May Have Never Heard Of), Australia appointed the world’s first regulator dedicated to online safety – the eSafety Commissioner.

In 2019, the Department of Home Affairs proposed that a facial recognition tool could be used before viewers could access online porn and some forms of online gaming. Given the general reluctance to hand that kind of information to the government, the proposal was not well received.

Following this, in 2020, an Australian Parliamentary committee recommended that mandatory age filters be used. The committee acknowledged the force of the UK criticisms of age filters (e.g. that they can be circumvented by children and that they raise privacy issues), but it thought that most younger children would be unlikely to have sufficient technology skills to find their way around the filters – many of us might doubt that conclusion, having watched children out-perform us with technology!

The committee recommended that the eSafety Commissioner create a “roadmap” for implementing an age filter within 12 months. It also recommended that the Government’s Digital Transformation Agency develop a standard for online age verification, and raised the possibility that third-party providers could be involved in offering such services.

Key takeaway

Easy online access to pornography by minors is troubling. But in the ‘moral panic’ this generates, it can be too easy to reach for technology solutions which seem to present a ‘quick fix’.

As seen with the UK government’s abandoned age-verification plan, government intervention does not seem to be the answer – or at least not the full answer.

An alternative view worth considering comes from Tim Norton, a board member at Digital Rights Watch, who argues that imposing an age filter is a technical fix for a social issue. Instead of investing in age filters, he suggests, investment should be made in better sex education and in funding for programs that break down social stigma and equip parents and children to navigate these situations.

 

Read more - Consultation: guidance for video-sharing platform providers on measures to protect users from harmful material