07/04/2021

The UK’s Alan Turing Institute recently released a report on online hate.

Why hating online is worse

Online hate has four characteristics which can make it more dangerous than offline hate: (1) the ease with which purveyors of hate can access audiences, (2) the size of the audiences they reach, (3) their anonymity and (4) the instantaneousness with which hate can be sent.

Anonymity is often said to be the most combustible element: “people feel much more comfortable speaking hate online as opposed to real life when they have to deal with the consequences of what they say”.

However, increasingly it is the trans-border mobility of online hate which stands out:

“Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages.”

Algorithms are also now the Internet’s self-generating engines of hate:

“recommender systems exploit individuals’ initial mild preferences for, or curiosity about, certain ideologies and outlooks by showing them increasing amounts of similar (and potentially more extreme) content in an attempt to hold their attention.”
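
The mechanism the report describes can be illustrated with a toy simulation. The sketch below is purely illustrative: the ‘extremity’ score and the engagement-feedback rule are assumptions made for this sketch, not the report’s model or any platform’s actual recommender. It simply shows how repeatedly serving content just beyond a user’s current preference can drift that user toward more extreme material.

    # Illustrative only: a toy engagement-driven recommender. The 'extremity'
    # scores and drift rule are invented for this sketch, not any real system.
    import random

    def recommend(interest, pool, drift=0.1):
        # Serve the item closest to a target slightly beyond the user's current interest.
        target = min(interest + drift, 1.0)
        return min(pool, key=lambda item: abs(item["extremity"] - target))

    # A pool of items scored from 0.0 (mainstream) to 1.0 (extreme).
    random.seed(0)
    pool = [{"id": i, "extremity": round(random.random(), 2)} for i in range(200)]

    interest = 0.2  # a mild initial curiosity
    for step in range(10):
        item = recommend(interest, pool)
        # Engagement feedback: interest shifts toward whatever was just shown.
        interest = 0.7 * interest + 0.3 * item["extremity"]
        print(step, item["id"], round(interest, 2))

Run over many steps, the user’s interest creeps upward even though each individual recommendation is only a small step beyond what they already engaged with.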

The ways of hating online

The report proposes four classes of online hate:

  • Threatening: content which expresses intention to engage in harmful actions against a group or members of a group on the basis of their group identity.
  • Inciting: content which explicitly encourages, advocates or justifies harm to be inflicted on a group or members of a group on the basis of their group identity.
  • Demonising: content which is explicitly hateful but does not involve threats or incitement. It is likely to inspire hatred in others and thus may have similar harmful effects.
  • Animosity: content which expresses prejudice against a group but does not explicitly attack them. It includes content which ‘others’ a group by emphasising their difference, strangeness or unimportance, or by mocking and undermining their experiences.

‘Threatening’ and ‘inciting’ behaviour are concepts known in existing criminal law. But the last two categories present the greatest challenges in designing rules around online hate:

  • demonising a group may ‘inspire’ harm, but fall short of ‘inciting’ physical harm that would attract existing criminal law;
  • some may see animosity as clearly hateful whereas others will see it as “legitimate critique”. Addressing animosity in a hate control regime raises the greatest risk that individuals’ freedom of expression will be constrained.

Humour, sarcasm, irony and self-hate

The easy criticism made of those who advocate measures to control online hate is that they are humourless. However, as the report points out:

“[memes] have played a key role in facilitating the movement of hateful ideas from the margins to the mainstream of society as memes are often engaging, humorous and innocuous – yet can easily contain deeply prejudicial ideas. Memes are particularly challenging to moderate as hate can be expressed through an otherwise benign image and benign text – but which become hateful when considered together.”

That said, the report acknowledges the need to be careful to exclude:

  • the ‘playful’ appropriation of hate language: a good example is how the LGBTI community has reclaimed the terms ‘queer’, ‘dyke’ and ‘poofter’;
  • self-hate: individuals may criticise a group to which they belong, which is an important part of civic discourse. In some cases, such criticism will look similar to hatred directed at their community by others;
  • satire and irony: these can be powerful tools to undermine and challenge hate, but if viewed by unaware audiences, they can have effects similar to those of intended hate.

The Harm Paradox

Most control measures for online hate assess the seriousness of harm as the product of likely hazard (how hateful the content is) and influence (the reach and prominence of the person posting); a toy sketch of this scoring follows the list below.

So a distinction can be made between:

  • Dangerous speech: highly hazardous content which also has substantial influence; it is seen by, and negatively impacts, many people.
  • Bedroom trolling: content which is just as hazardous as dangerous speech, but which reaches and influences very few people.
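
As a rough sketch of how this scoring works, the toy function below combines the two axes. The 0–1 scales, the 0.5 thresholds and the quadrant labels are invented for illustration; they are not the report’s own scoring system.

    # Illustrative only: made-up thresholds and labels, not the report's scoring system.
    def classify(hazard, influence, threshold=0.5):
        # Both axes run from 0.0 to 1.0; seriousness is their product.
        seriousness = hazard * influence
        if hazard >= threshold and influence >= threshold:
            label = "dangerous speech"   # highly hazardous and widely seen
        elif hazard >= threshold:
            label = "bedroom trolling"   # highly hazardous but barely seen
        elif influence >= threshold:
            label = "low-hazard, high-reach"
        else:
            label = "low concern"
        return label, seriousness

    print(classify(hazard=0.9, influence=0.8))  # dangerous speech
    print(classify(hazard=0.9, influence=0.1))  # bedroom trolling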

But at the heart of this analysis is the risk of the ‘harm paradox’:

“Most content is assessed based on what it expresses and how it is expressed rather than its empirical effects. As such, it is often unknown whether content that is labelled ‘harmful’ has actually inflicted harm.”

The harm paradox can lead us to focus on the crude, blatant, loud voice of hate while missing that more covert hate, communicated with more cleverness, such as in a meme, can be more harmful.

So what’s the answer?

The report recommends:

  • Have a clear definition of online hate. The report highlights Facebook’s approach, noting that the platform’s policies have evolved substantially over the past decade and increasingly show sustained engagement with ‘tricky’ issues.
  • Have rapid, scalable systems to identify online hate. The report notes that there have been significant advances in the use of AI: Facebook now detects 95% of the hate content it removes using AI. But the report also cautions that AI is not good at detecting satire. A minimal sketch of the kind of classifier involved follows this list.
  • Handle online hate through a proportionate response. The report cautions that an overly heavy-handed approach to removing content not only curbs freedom of expression, but can be counterproductive by fuelling conspiracy theories about the management and governance of online spaces. More graduated remedies could include constraints on hosting, viewing, searching or engaging with content, such as liking and sharing.
  • Enable user complaints: provide transparency on how and why content moderation decisions are made in response to complaints, and a robust and accessible review procedure. But the report also notes that ‘shadow banning’, where users are banned without knowing it and still believe they can post live content, can be an important investigation and compliance tool.
  • Alternatives to content takedown: the report notes that research shows ‘media literacy’ training materially improves users’ ability to separate mainstream from ‘fake’ news. ‘Counter speech’ may not change the bigots’ minds, but it can reach the larger reading audience and have a positive impact on the discourse within particular online spaces.
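
To give a sense of what the AI systems mentioned above build on, here is a minimal supervised text-classification sketch using scikit-learn. It is illustrative only: the pipeline choice and the tiny placeholder training examples are assumptions made for this sketch, not Facebook’s system or the report’s method, and a real system would need large labelled datasets and human review for cases such as satire.

    # Illustrative only: a minimal text classifier of the general kind used for
    # hate detection at scale. The placeholder examples below are invented; this
    # is not Facebook's system or the report's method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "placeholder hateful example one",
        "placeholder hateful example two",
        "placeholder benign example one",
        "placeholder benign example two",
    ]
    labels = [1, 1, 0, 0]  # 1 = flagged as hateful, 0 = benign

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # Probability that a new post is hateful; borderline cases such as satire
    # would still need human review, as the report cautions.
    print(model.predict_proba(["a new post to screen"])[0][1])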

 

Read more - Understanding online hate

Authors: Rebecca Dune and Pater Waters