To answer this question, the UK’s Centre for Data Ethics and Innovation (CDEI) undertook an experiment with nearly 2000 UK users. CDEI built and tested three prototypes that allowed users to personalise their own settings – or more accurately filters – for online targeting.

The prototypes were all built around the central thesis of the CDEI's earlier work: while current approaches to online targeting deserve strong criticism, users see online targeting as a desirable feature of the internet, but are concerned about their lack of awareness, understanding and control.

10 CDEI design principles

To build its prototype ‘active choice’ tools, the CDEI identified the following 10 principles:

  • recognise users’ limited time and mental capacity
  • maximise ease of navigation
  • consider the timing of disclosure – which usually means early in the ‘choice’ journey
  • personalise the content
  • make the information salient or visual
  • check framing and defaults – in particular, avoid steering decision-making by removing defaults and forcing choices
  • make the trade-offs interactive - allow people to interact with, or experience, what the choice means
  • find the right granularity of choice - offering additional choices can in itself reduce privacy concern and increase willingness to disclose
  • ensure comparability of filtering/privacy options
  • allow people to help their future selves - offer tools for people to set reminders, time-limits or commitments on the choices they set today

The prototypes

The three prototypes were:

  • a slider: over two screens, the user must select a position on two sliders to indicate their privacy and notification choices. The CDEI says that in this prototype a “clear call to action and social norm messaging puts emphasis on the ease of making changes.”
  • a private mode: a single screen where the user must choose either a ‘regular’ or ‘private’ mode; when either option is selected, information appears explaining what settings will change. The CDEI says this prototype “leverages prior familiarity of the concept of ‘private mode’, lifted from web browsers, to group choices which are more/less privacy preserving.”
  • a trusted third party choice: users are shown settings which are (hypothetically) recommended by three different organisations: a consumer organisation, a mental health charity, and a technology company. The CDEI says that in this prototype “delegating to a trusted third party minimises effort to understand and enact default settings in multiple domains, which people can then customise.”

The guinea pigs

The 2000 users who participated in the experiment were broadly representative of the UK population. Users were asked to assign themselves to one of three ‘personas’ that identified their levels of concern about privacy:

  • 25% said they were comfortable sharing their data with companies and prefer online services to work quickly and easily, even if this means sharing more of their personal data;
  • 62% said they were comfortable sharing some personal data (e.g. their location data via their smartphone) if there is a clear benefit of doing this, but they dislike personalised ads; and
  • 13% said they were uncomfortable sharing any of their personal data with companies online and highly value their privacy.

The three personas were then distributed across four groups: one for each ‘active choice’ prototype and one for an unfiltered ‘control’ Android handset.

What was tested for

The experiment measured the performance of the prototypes against:

  • Task accuracy: defined as the number of settings choices (out of 4) that participants made in line with the preferences of their selected persona.
  • Feelings of control: “How much control do you feel [the prototype gave you] over the privacy and notification settings when making your choices?”
  • Understanding of consequences: assessed as the sum of correct answers to four questions testing comprehension, e.g. “based on your choices, what types of advertisement might be shown on this device when browsing the internet?” and “based on your choices, when will the phone give Instagram access to location data?”
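The two scored outcomes above are both simple counts, which can be sketched as follows. This is an illustrative reconstruction only: the setting names, persona labels and data structures are hypothetical, not taken from the CDEI write-up.

```python
# Hypothetical persona preferences for the 4 settings choices
# (names are illustrative, not from the CDEI study).
PERSONA_PREFS = {
    "least_concerned": {"personalised_ads": True, "location": True,
                        "notifications": True, "app_tracking": True},
    "most_concerned": {"personalised_ads": False, "location": False,
                       "notifications": False, "app_tracking": False},
}

def task_accuracy(choices: dict, persona: str) -> int:
    """Count how many of the 4 settings choices match the selected persona's preferences."""
    prefs = PERSONA_PREFS[persona]
    return sum(choices[setting] == prefs[setting] for setting in prefs)

def understanding_score(answers: list, correct: list) -> int:
    """Sum of correct answers to the four comprehension questions."""
    return sum(a == c for a, c in zip(answers, correct))

# Example: a 'most concerned' participant who left notifications on
choices = {"personalised_ads": False, "location": False,
           "notifications": True, "app_tracking": False}
print(task_accuracy(choices, "most_concerned"))            # 3 of 4 match
print(understanding_score(["a", "b", "c", "d"],
                          ["a", "b", "x", "d"]))           # 3 of 4 correct
```

Both scores therefore range from 0 to 4, which is what makes the prototypes directly comparable against the control.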

The results

All of the prototypes outperformed the control on the three outcomes, with one exception: the trusted third party design did not improve feelings of control.

While there was no clear winner between the three prototypes, the slider did best overall. The following table summarises the results by the three privacy personas, showing the best-performing prototype on each outcome where a significant difference was found:

|                        | Task accuracy       | Understanding consequences | Feelings of control |
|------------------------|---------------------|----------------------------|---------------------|
| Least concerned (25%)  | trusted third party | No sig. difference         | No sig. difference  |
| Partly concerned (62%) |                     | private mode               |                     |
| Most concerned (13%)   |                     |                            |                     |
Interestingly, on task accuracy (i.e. how well did the prototype reflect the user’s upfront selection of the persona describing their level of privacy concern?), the slider did better than the other prototypes for those concerned about privacy. This suggests that users with privacy concerns were best at calibrating their own levels of access and screening using the slider.

But the reverse was the case for the ‘least concerned’. The private mode and, even more so, the trusted third party prototypes matched their (lower) levels of concern better than their own slider choices did. The CDEI thought the explanation was that these users chose the technology company as their trusted third party, and its recommended settings closely aligned with their own (low) privacy preferences.


The CDEI says that “[t]he experiment provides evidence that simplified choice bundles can improve the ability of users to choose settings in line with their preferences, better understand the consequences of their choices and feel more in control.”

Probably more interesting is that a Government agency, having criticised current approaches by online providers, then took the more positive step of providing “firms operating online with examples of evidence-based tools and techniques to design user-empowering choice”.


Read more: Enabling active choices online: Trialling behaviourally informed prototypes