18/09/2023

The Atlantic Council, drawing on a task force of forty experts across industry, civil society, academia, and philanthropy, has recently released a report arguing that we are at a pivotal moment in the evolution of governance of online spaces:

“[a] rare combination of regulatory sea change that will transform markets, landmarks in technological development, and newly consolidating expertise can open a window into a new and better future, in which the next wave of connective technology brings innovation and systemic resilience into better balance.”

Three hard truths in building trust online

There was consensus amongst the task force members that building a safer online environment must start from the following ‘hard truths’:

  • that which occurs offline will occur online. No technology has solved long-standing and deeply rooted societal problems such as racism, sexism, ethnic hatred, intolerance, bigotry, or struggles for power – and there should be no greater expectation of AI. More fundamentally, in any democratic society, online or offline, some harms and risks must be accepted as part of protecting the fundamental freedoms that underpin that society: i.e. there is no nirvana of digital harmony.
  • but equally the online world – whether in its product designs, operational systems, organisational values and revenue models – is not value-neutral, because the products that result do not enter neutral societies. Further, harms are not equally distributed across societies, and marginalised communities suffer disproportionate levels of harm online and off. Online spaces that do not acknowledge or plan for that reality consequently scale malignancy and marginalisation by design: i.e. this is more than a ‘sin of omission’; and
  • risk and harm are already at scale and accelerating at an exponential pace, and existing institutions, systems, and market drivers cannot keep pace. Hence the need for governments and industry to develop innovations in governance, research, financial, and inclusion models which can scale with similar velocity.

Regulation ain’t the complete answer

What’s also telling, given the preponderance of non-lawyers in the task force, is their nuanced view about the role of regulation.

First, “industry will likely continue to drive … rapid changes, but will also prove unable or unwilling to solve the core problems at hand.” So regulation is needed.

But on the other hand, “governmental action has perennially proven incapable of keeping pace with emerging technology (unless that action has been to censor, surveil, block, or otherwise violate fundamental rights and freedoms).” Hence the value of voluntary and self-governance initiatives in supporting knowledge exchange and the shaping of norms.

Second, while regulation itself may not be able to keep up, it can set principles, benchmarks and goals for standards and best practices developed by industry.

But on the other hand, regulation creates its own incentives, which can skew efforts to develop online trust by creating a culture of tick-a-box audits and bureaucratic reams of reporting:

“if compliance replaces problem-solving, it establishes a ceiling for harm reduction, rather than a floor founded in user and societal protection. Compliance regimes can calcify reactive practices, diminish C-suite appetite for innovation and proactive approaches to improving T&S, and undermine teams that are seeking to solve the underlying problems enabling harm. Another risk identified was a move away from assessment frameworks, which are by nature forward-looking, and toward audit frameworks, which are focused on current and past practice and narrowly delimit a scope of review.”

We all need a little T&S

These roads led the task force to the need to expand, embed, reward and globalise the emerging industry practice area of ‘trust and safety’ (T&S), which the report defines as follows:

“For decades, an area of specialty and practice that is increasingly referred to as “Trust & Safety” (T&S) has developed inside US technology companies to diagnose and address the risks and harms that face individuals, companies, and now—increasingly—societies on any particular online platform. … Stated most generally, T&S anticipates, manages, and mitigates the risks and harms that may occur through using a platform, whereas “cybersecurity” and “information security” address attacks from an external actor against a platform. A T&S construct may describe a range of different verticals or approaches. “Ethical” or “responsible” tech; information integrity; user safety; brand safety; privacy engineering—all of these could fall within a T&S umbrella.”

The challenge which the task force seeks to address is that “T&S expertise has been trapped largely within niche communities of practice inside large companies.” The report makes a number of recommendations about how to “galvanize investments in systems-level solutions that reflect the expanding communities dedicated to protecting trust and safety on the web.”

Open tools

The task force makes the good point that effective T&S is as much a logistics challenge as a policy challenge. The logistics of T&S are complex because they need to loop iteratively, in close to real time, through four distinct goals: detection, enforcement, measurement, and transparency (i.e., documentation/communication) – and do so across a company’s IT systems, both individually and in how they interwork. Companies get into a digital world of pain when T&S logistics fail to underpin executive and board digital policy making, for example by being too slow to identify risk, under-dimensioning risk, or failing to understand what users and consumers regard as unacceptable.
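To make that loop concrete, here is a minimal, purely illustrative sketch of the four-goal cycle in Python. The class, method and field names are my own shorthand rather than anything in the report, and a real T&S pipeline would span many interlocking systems rather than a single object:

```python
from dataclasses import dataclass, field

# Toy policy signal: a blocklist stands in for the classifiers, hash lists
# and user reports a real detection stage would combine.
BLOCKLIST = {"scam-link.example", "known-abuse-hash"}


@dataclass
class TrustSafetyLoop:
    """Illustrative skeleton of the cycle the report describes:
    detection -> enforcement -> measurement -> transparency,
    repeated in close to real time."""

    metrics: dict = field(default_factory=dict)

    def detect(self, items: list[dict]) -> list[dict]:
        # Detection: flag items whose signals match current policy.
        return [i for i in items if i.get("signal") in BLOCKLIST]

    def enforce(self, flagged: list[dict]) -> list[dict]:
        # Enforcement: apply proportionate actions (label, downrank,
        # remove, or escalate to human review).
        return [{**i, "action": "escalate_to_human_review"} for i in flagged]

    def measure(self, actions: list[dict]) -> None:
        # Measurement: track prevalence and how enforcement is performing.
        self.metrics["actions_taken"] = self.metrics.get("actions_taken", 0) + len(actions)

    def report(self) -> dict:
        # Transparency: documentation/communication for users,
        # researchers and regulators.
        return dict(self.metrics)

    def run_cycle(self, items: list[dict]) -> dict:
        actions = self.enforce(self.detect(items))
        self.measure(actions)
        return self.report()
```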

The task force prioritises the development of a suite of open source, basic but useful tools: for example, hash-matching tools that could detect exact and near-exact matches of previously identified content, or toolkits that could help build classifiers to assess new, not previously seen content or behavior.
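As a rough sense of what the hash-matching piece of such a toolkit involves, the sketch below pairs an exact file hash with a tiny difference hash (dHash) for near-exact image matches. It is a toy: the function names and threshold are my own, and production systems use purpose-built perceptual hashes and shared industry hash lists rather than anything this simple.

```python
import hashlib

from PIL import Image  # Pillow


def exact_hash(path: str) -> str:
    """SHA-256 of the raw file: catches exact re-uploads of known content."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: tolerant of resizing and re-encoding, so it can
    catch near-exact matches that a plain file hash would miss."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def matches_known(path: str, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an upload whose perceptual hash sits within `threshold` bits of
    any previously identified item in a shared hash list."""
    h = dhash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```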

Standardised tools would help SMEs adopt more complete, sophisticated online trust policies and practices. But potentially as importantly:

“…standardized models for connecting external civil society (and academic) expertise to teams inside of companies—particularly T&S product and tooling teams—remain a significant and counterproductive gap within industry. The onus continuously rests on civil society—which as a field comprises organizations that are generally smaller, less-well resourced, and navigate challenging operating environments—to adapt to the operational needs of well-funded, empowered corporations. Civil society organizations lack insight into how the feedback they provide is used. Externally facing mechanisms focused on policy development or the reporting of “bad” content have been the most common mechanisms that companies have piloted, but they have not proven to be sustainable or effective, and can be perceived by civil society as token initiatives that pull precious time and focus while offering limited impact in return.”

Debunking technology exceptionalism

The task force candidly acknowledges that “[t]he technology sector has long suffered from the presumption that its problems are novel, and that relevant knowledge must then be developed sui generis in bespoke, tech-centric settings.”

The task force considered that efforts to build online trust needed to draw on the skills, experience and lessons from ‘mature, adjacent fields’.

For example, lessons can be taken from cybersecurity practice.

The task force also considered there were lessons from the gaming industry that could be applied across digital platforms generally:

  • gaming has expertise in designing for the intentional inclusion of children, including those younger than thirteen, as well as adults. Some gaming developers have gone further and are pioneering prosocial design methods that pre-emptively shape and encourage healthy and inclusive play patterns at all ages.
  • with the increasing popularity of VR, gaming companies are focusing on developing new safety features to protect users in these immersive environments. There are also efforts to improve real-time monitoring capabilities within games in privacy-respecting and less data-intensive ways.
  • in the age of generative AI, gaming is already grappling with user-generated content as a threat model.

Cherish your front line workers

The task force points out that “frontline content moderators have been referred to as essential gatekeepers of the internet, assessing millions of pieces of content a day”, but they are often exploited, undervalued and insufficiently protected:

  • given what they see, content moderators, and T&S practitioners generally, face high risks of developing post-traumatic stress disorder, depression, and other psychosocial harms.
  • many content moderators are in the Global South, and can be subject to exploitative labour practices.
  • increasingly, T&S practitioners who publicly represent a company’s position face targeted public bullying and harassment directed “at influencing behavior, politicizing T&S decisions, dissuading research, and chilling practitioners’ speech and personal ability to continue supporting T&S work.”
  • tech developers and platform companies can fail to draw on the practical, real-world view that content moderators can provide about T&S issues – in part because the content moderators are contract workers and not fully integrated into the tech company.

That old cost centre millstone

Every in-house lawyer and compliance officer dreads the retort that “you are just a cost centre”.

The task force observes that ‘[u]ntil investments in reactive and proactive T&S are established as a requirement for doing business or a de facto generator of long-term value, the incentives structures necessary to ensure better, safer online spaces will continue to fail users—and societies.’

The task force challenges business and investors to develop tools which can capture the value of effective T&S to digital products and projects, and which:

  • in protecting the company’s reputation, help the board and management avoid knee-jerk responses which may not reflect the most endemic harms or risks on a platform, but rather overly focus on one isolated incident of particular severity or one particularly controversial decision.
  • better enable more granular, proportional assessment of and investment in T&S, such as by product line: “T&S needs correlate closely with scale, but no bright line delineates where a particular element of growth (revenue, intentional expansion, adoption within new markets, etc.) …”
  • help incentivise decisions about risk and trust much earlier in the tech pipeline, rather than when a finished digital product ends up in the hands of a downstream business. The task force said that it was striking that:

“With a few noteworthy exceptions, the venture capital (VC) investors behind emerging technology either have not prioritized T&S issues or appear to be intentionally indifferent. Privately funded companies face little pressure from investors to demonstrate or design a T&S strategy, and T&S vendors have, with some exceptions, historically struggled to attract significant and continued investment compared to other technologies.”

If you thought all that was hard…

To illustrate its point that risk and harm are already at scale and accelerating at an exponential pace, the task force identified problems with emerging technologies for which it could pose questions but not yet offer solutions:

  • federated social media services, like Mastodon and Bluesky, have many of the same propensities for harmful misuse by malign actors as the centralised social platforms, while possessing few, if any, of the hard-won detection and moderation capabilities necessary to stop them. That said, their decentralised nature offers the promise of alternative governance structures that empower consumers, but we are yet to see how that can play out in addressing harms.
  • eXtended reality (XR) platforms: a differentiating feature of XR from more traditional (or “flat”) spaces is XR’s focus on achieving fidelity, i.e., accurately reproducing or simulating real-world environments, objects, or actions in order to make an XR experience look, feel, and sound as realistic as possible to a user. The neuroscience behind XR can lead to a blurring of what is or isn’t real, and as a result, the consequences of harmful or inappropriate behavior may be more acute.
  • generative AI: it potentially transforms the nature of influence operations online by reducing the financial cost, time, and technical expertise required. The increased volume of harder-to-detect deepfakes risks overwhelming trust and safety measures. But again there is an opportunity to harness AI for T&S missions: automatically attaching warning labels to potentially generated content and fake accounts; improving vetting, scoring, and ranking systems; creating high-quality classifiers in minority languages; and quickly moderating spam and fraud. However, the task force cautions that “[t]he true answer is that human moderation will remain a critical component of T&S.”

Read more: Scaling trust on the web: Comprehensive Report of the Task Force for a Trustworthy Future Web

""