23/02/2021

The UK markets regulator, the Competition & Markets Authority (CMA), has published its report into the use of algorithms and their potential to harm consumers (Algorithms: How they can reduce competition and harm consumers). It’s a good primer on where the use of algorithms in B2C applications can go wrong.

Why is choice architecture important?

Algorithms are commonly part of a broader ‘choice architecture’. Algorithms provide value to consumers by aggregating, organising, and ranking options that best meet consumers’ needs. Choice architecture also includes the position of the “Buy” button on a shopping website, the colour of an information banner and a default payment method. It can also include triggering reminder or follow-up messages and offers. Without choice architecture, the Internet would be unnavigable. But the CMA’s report identifies a number of potential harms that result from the use of algorithms, including:

  1. Personalised pricing, which, although it may be used efficiently in some instances, can lead to harm where there is insufficient competition or where the personalised pricing is particularly complex.
  2. Non-price related personalisation, such as biased rankings of search results or presenting limited options to consumers.
  3. Discrimination through the use of protected attributes (like gender, race or sexuality) or a proxy for a protected attribute (as an example, the report notes that the use of location can be problematic as it may be closely correlated with race in parts of the UK; a sketch of how such a proxy might be audited follows this list).
  4. Reduced competition through self-preferencing, for example where a platform provider favours its own products over those of its competitors.
  5. Temporary suppression of the visibility of market players at critical junctures through updates and amendments to platform algorithms, such as changes to gateway platforms like Google and Facebook.
  6. Predatory pricing, where incumbents use data science and machine learning to identify customers likely to switch services and target them with short-term discounts that are of marginal value in the long term.
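
On the third point, one way to probe whether a feature such as location is acting as a proxy for a protected attribute is to test how well the feature alone predicts that attribute. The Python sketch below is a minimal illustration using scikit-learn; the dataset is synthetic and the feature, sample sizes and thresholds are hypothetical, chosen only to demonstrate the audit technique.

    # Hypothetical proxy audit: can "location" alone predict a protected attribute?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic data: 20 postcode areas, with a protected attribute that is
    # correlated with location by construction (stand-ins for real data).
    n = 5_000
    location = rng.integers(0, 20, size=n)
    p_attr = np.where(location < 5, 0.7, 0.2)
    protected = rng.binomial(1, p_attr)

    X = np.eye(20)[location]  # one-hot encode location
    X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC predicting the protected attribute from location alone: {auc:.2f}")

An AUC near 0.5 means location carries little information about the protected attribute; values well above 0.5 flag it as a potential proxy warranting closer review.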

Competition & Markets Authority choice architecture concerns 

The Competition & Markets Authority expresses particular concern about ‘dark patterns’. Consumers have behavioural biases and vulnerabilities that can be exploited through choice architecture. Dark patterns are user interface designs that trick users into making unintended and potentially harmful decisions: for example, by exploiting their limited attention, loss aversion, or their inertia and susceptibility to default options:

“To illustrate, the CMA investigated potentially misleading scarcity messages on hotel booking websites, such as “X other people are viewing this hotel right now” and “X rooms left”. The CMA found that these claims could be incomplete. For example, the number of ‘other people’ viewing the hotel at times might include those looking at different dates or different room types, or there may be rooms at the hotel available on other platforms.”

The CMA also cited an apparent example of autonomous collusion, in which machine learning algorithms, even without explicit communication or intention on the part of the developers, learn to tacitly collude. German retail petrol stations increased their margins by around 9 per cent after adopting algorithmic pricing, but only where they faced local competition. More strikingly, margins did not start to increase until approximately a year after market-wide adoption, suggesting that the algorithms took some time to learn that other algorithms were also setting petrol prices, and how to behave towards each other!
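
To see how this can happen without any agreement between firms, the toy Python sketch below (our illustration, not the CMA’s or the German study’s model) has two Q-learning agents repeatedly setting prices in a simple differentiated market. Whether a given run ends above the competitive price depends on the seed and parameters, but such agents can learn that undercutting is punished in later rounds and settle at supra-competitive prices.

    # Toy tacit-collusion sketch: two Q-learning price-setters, no communication.
    import numpy as np

    rng = np.random.default_rng(1)

    prices = np.linspace(1.2, 1.7, 6)  # discrete price grid; marginal cost = 1.0
    n_p = len(prices)

    def profit(p_own, p_rival, cost=1.0):
        # Linear differentiated demand: own price cuts demand, rival's raises it.
        demand = max(0.0, 2.0 - 2.0 * p_own + p_rival)
        return (p_own - cost) * demand

    # Each agent conditions on last period's (own, rival) prices: Q[own, rival, action].
    Q = [np.zeros((n_p, n_p, n_p)) for _ in range(2)]
    alpha, gamma = 0.1, 0.9
    state = (0, 0)  # indices of last-period prices

    for t in range(200_000):
        eps = max(0.01, np.exp(-t / 20_000))  # slowly decaying exploration
        acts = [
            int(rng.integers(n_p)) if rng.random() < eps
            else int(np.argmax(Q[i][state[i], state[1 - i]]))
            for i in range(2)
        ]
        for i in range(2):
            reward = profit(prices[acts[i]], prices[acts[1 - i]])
            best_next = Q[i][acts[i], acts[1 - i]].max()
            Q[i][state[i], state[1 - i], acts[i]] += alpha * (
                reward + gamma * best_next - Q[i][state[i], state[1 - i], acts[i]]
            )
        state = (acts[0], acts[1])

    # For this demand curve the one-shot Nash price is ~1.33; joint-profit price is 1.50.
    print(f"long-run prices: {prices[state[0]]:.2f}, {prices[state[1]]:.2f}")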

The CMA contends that market forces alone will not be sufficient to address these harms. This is because the algorithms are typically opaque to outsiders, so consumers and customers are unable to discipline the firms using them. The businesses most likely to be deploying algorithms are also likely to play key roles in the economy, further strengthening the case for intervention.

Competition & Markets Authority identifies four potential AI interventions

First is simply to provide guidance, backed by the ‘bully pulpit’ of the regulator. The Competition & Markets Authority endorses the various ethical frameworks that have been developed for the use of AI, which place a strong emphasis on transparency and explainability. The challenge is to drive these ethical frameworks into AI development.

Second is the continued enforcement of existing laws. Regulatory processes already exist to identify and remedy harms arising from breaches of consumer or competition law. Potential remedies include compelling the operator of the offending algorithm to disclose information, to conduct ongoing monitoring and risk assessments, and/or to make changes to the algorithmic system.

Third is the need for regular monitoring. While the CMA endorsed the use of regulatory “sandboxes”, in which an algorithm can be operated in a test environment, it also cautioned that one-off audits can quickly become outdated because algorithms may evolve rapidly as datasets are updated or models retrained.
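
As an illustration of what ongoing monitoring might look like in practice, the sketch below re-runs a simple distribution-drift test against a baseline of the algorithm’s output scores captured at audit time. The data, thresholds and schedule are hypothetical; the point is that such a check is cheap enough to repeat, unlike a one-off audit.

    # Hypothetical drift check: compare audit-time outputs with live outputs.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    # Stand-in for scores recorded when the algorithm was originally audited.
    audit_scores = rng.normal(loc=0.50, scale=0.10, size=10_000)

    # Stand-in for scores from the same system months later, after retraining.
    live_scores = rng.normal(loc=0.58, scale=0.12, size=10_000)

    stat, p_value = ks_2samp(audit_scores, live_scores)
    if p_value < 0.01:
        print(f"Outputs have drifted since the audit (KS statistic {stat:.3f}).")
    else:
        print("No significant drift detected since the last audit.")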

Fourth, the CMA argues for greater resources to enhance its digital capacities, including by recruiting data scientists and engineers. This is a common request from regulators throughout the world. The CMA also emphasised that algorithms have global reach and national regulators need to work together.

In summing up, the Competition & Markets Authority notes the ever-growing importance of algorithms in the way markets and businesses operate. The CMA is careful to emphasise that algorithms mostly produce benefits for consumers and increased profits for business. While there are considerable risks of harm, the CMA is not advocating a new set of regulatory powers; instead, it relies on governance and transparency, backed by its current competition and consumer powers.


Read more: Algorithms: How they can reduce competition and harm consumers

""

""