The European Commission recently published its first assessment (the Report) of the Code of Practice on Disinformation (the Code), its code for addressing fake news on social media platforms.

What is the Code of Practice on Disinformation?

The voluntary Code of Practice on Disinformation, introduced two years ago, is the first of its kind globally. Signatories include Facebook, Google, Twitter, Mozilla, Microsoft, TikTok, and the World Federation of Advertisers.

The Code has five ‘pillars’: 

  1. Scrutiny of ad placements: online platforms agree to block or suspend imposter websites monetising false information. For example, between January and August 2019, Twitter rejected 11,307 ads for violating its Unacceptable Business Practices Policies.
  2. Political advertising and issue-based advertising: sponsored political ads need to be clearly labelled and online ad libraries made available. For example, between March and September 2019, Google labelled more than 185,000 election ads in the European elections.
  3. Integrity of services: platforms are to take steps against manipulative techniques, such as by using AI to detect and block fake accounts. For example, Facebook disabled 2.19 billion fake accounts in Q1 2019.
  4. Empowering consumers: platforms commit to investing in technology to rank trustworthy information sources. They collaborate with fact checkers and notify users of potentially false or misleading information. For example, Microsoft News partners with over 1000 news sources which it has vetted.
  5. Empowering the research community: platforms provide researchers and fact checkers with platform data access and promote media literacy initiatives. For example, Google has provided access to its visual deep-fakes data to enable researchers to design tools to detect synthetic videos.

Where the Code falls short

The Report concludes that, despite the pioneering initiative to create the Code, the volume of fake news about Covid-19 (the “infodemic”) has highlighted the Code’s shortcomings. The Report notes that, right across Europe, conspiracy theories about 5G installations spreading Covid-19 and disinformation blaming ethnic and religious groups for the pandemic have fanned hate speech, social polarisation and, at times, public violence.

The Report’s diagnosis is that, owing to its self-regulatory nature, the Code has the following shortcomings:

  • Lack of key performance indicators to assess online platforms’ effectiveness in countering fake news;
  • Lack of diverse stakeholders, particularly from the online advertising sector and instant messaging platforms such as WhatsApp;
  • Lack of uniform definitions on key concepts such as “misinformation”, “influence operations”, “issue-based advertising”, “inauthentic behaviour”, “malicious bots”, “indicators of trustworthiness” and more;
  • Ambiguous procedures and shared commitments;
  • Insufficient scope: for example, micro-targeting of political advertising is not currently covered by the Code. Online political advertising can target segments of voters based on collated personal data and refined psychological profiling to customise political messages, thus potentially skewing democratic electoral processes; and
  • Insufficient access to data enabling independent evaluation of threats posed by fake news.

Improving the Code of Practice on Disinformation

The Report floats some ideas for beefing up the Code:

  • Establishing a colour-coded system to signal the credibility of websites:
    • A black list identifying websites systematically conveying disinformation;
    • A grey list for websites occasionally spreading false information; and
    • A white list for trusted websites and advertisers.
  • Establishing a universal system on all platforms allowing users to flag possible misinformation to be fact-checked, and a public register to record the verification outcome in a timely and visible manner;
  • Introducing harmonised definitions of key concepts, such as what constitutes misinformation, to ensure a consistent approach;
  • Introducing KPIs such as service-level and structural indicators: for example, measuring advertising operators’ total turnover from ad placements and the revenue lost when accounts are closed for misinformation, or measuring the number of authoritative and misinformation sources in a sample group and identifying the source and language of the misleading information;
  • Subjecting political advertising to uniform registration and authorisation procedures – including extending offline regulation of political campaigning to online, such as spending limits, the timing and allocation of subsidised political advertising on traditional broadcast and press media; and
  • Appointing a regulatory body to monitor and enforce compliance with the Code instead of the current peer review system.

The European Commission remains committed to the Code as a “unique and innovative tool in the fight against online disinformation.” Although the EU’s relationship with social media platforms is often contentious, the Report acknowledges that the Code has “prompted concrete actions and policy changes by the platforms aimed at countering disinformation.”

Read the report: Assessment of the Code of Practice on Disinformation – Achievements and areas for further improvement