In January 2021, the UK competition regulator, the Competition & Markets Authority (CMA), invited feedback on a research paper on the potential impact of the use of algorithms from a competition and consumer perspective. This is part of building up the knowledge base of the new Digital Markets Unit (DMU), set up within the CMA to oversee the UK’s new regulatory regime for digital markets.

The CMA recently released a summary of responses to its consultation.

The responses identified three types of algorithms causing the most concern to respondents:

Recommender Systems

These are systems which aim to predict users' interests and likes. Respondents expressed concern that companies deploying these systems could have substantial influence over customers, particularly dominant firms.
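The basic mechanism can be illustrated with a toy sketch (hypothetical data and scoring rule): a recommender infers a user's likely interests from the overlap between their likes and those of other users.

```python
# Toy illustration (hypothetical data): a recommender that predicts a user's
# interests from the overlap of their likes with other users' likes.
def recommend(user_likes, other_users):
    scores = {}
    for likes in other_users:
        overlap = len(user_likes & likes)          # how similar is this user?
        for item in likes - user_likes:            # items our user hasn't seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"news", "sport"},
                [{"news", "sport", "gadgets"}, {"cooking"}]))
# 'gadgets' ranks first: it is liked by the most similar user
```

The concern raised by respondents follows directly: whoever controls the scoring rule controls what users are steered towards.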

Ranking Algorithms

These algorithms specify product positions in listings. The CMA’s paper had expressed concerns around the issue of restriction of access to customers through these algorithms.

Some respondents noted that there was a lack of transparency about how platforms' algorithms worked, the likely effects of changes made to them, and the lack of any warning given to publishers before changes were made. An example given was comparison-shopping websites: competing service providers may have a reduced incentive to innovate in order to compete for user attention and loyalty if ranking algorithms favour a platform's own shopping comparison service. This could then reduce consumer choice and lead to increased costs if a lack of competition meant there was no downward pressure on prices.
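The self-preferencing concern can be made concrete with a minimal sketch (all names and weights are hypothetical): a ranking function that quietly adds an undisclosed bonus to first-party listings.

```python
# Toy illustration (hypothetical): a ranking score that quietly boosts
# the platform's own comparison service, pushing rivals down the list.
def rank_listings(listings, own_service_boost=0.2):
    """Sort listings by relevance, with a hidden bonus for first-party results."""
    def score(listing):
        base = listing["relevance"]  # e.g. from query matching
        # Self-preferencing: first-party listings get an undisclosed bonus.
        return base + (own_service_boost if listing["first_party"] else 0.0)
    return sorted(listings, key=score, reverse=True)

listings = [
    {"name": "RivalCompare",     "relevance": 0.9, "first_party": False},
    {"name": "PlatformShopping", "relevance": 0.8, "first_party": True},
]
print([l["name"] for l in rank_listings(listings)])
# PlatformShopping (0.8 + 0.2 = 1.0) outranks the more relevant RivalCompare (0.9)
```

Because the boost is internal to the scoring function, neither publishers nor users can observe it from the results alone, which is precisely the transparency gap respondents described.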

Pricing Algorithms

These algorithms set individual prices for products that a user sees. There was debate over ‘good’ and ‘bad’ outcomes of personalised pricing.

Some respondents thought that “personalised prices for individuals would reduce consumer welfare by offering the individual the maximum price that individual is willing to pay, thereby leaving that individual with no consumer surplus.” This is a polite way of saying that an algorithm may, based on personalised factors unrelated to their use of a service, decide that a customer fits a profile of ‘unlikely to shop around’. Other respondents thought pricing algorithms could promote economic efficiency by rewarding good customer behaviour, such as risk or financial management.
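The surplus-extraction point can be sketched in a few lines (the profile signal and markup figures are assumptions for illustration only): a pricing rule that moves the quoted price towards a customer's estimated maximum willingness to pay when they are profiled as unlikely to compare prices.

```python
# Toy illustration (hypothetical profile and numbers): personalised pricing
# that charges more to customers profiled as unlikely to shop around,
# capturing more of their consumer surplus.
def personalised_price(base_price, shop_around_score):
    """shop_around_score in [0, 1]: 0 = never compares prices, 1 = always does."""
    # The less likely the customer is to compare prices, the closer the
    # quoted price moves to their estimated willingness to pay.
    markup = 0.30 * (1.0 - shop_around_score)  # up to a 30% markup (assumed)
    return round(base_price * (1 + markup), 2)

print(personalised_price(100.0, 1.0))  # savvy shopper pays the base price: 100.0
print(personalised_price(100.0, 0.0))  # 'unlikely to shop around' pays: 130.0
```

Note that the inputs here have nothing to do with risk or cost of supply, which is what distinguishes the 'bad' outcome respondents described from efficiency-based pricing such as risk-adjusted insurance premiums.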

While some argued for a ban on pricing algorithms (at least in non-financial settings), others suggested a restriction on the inputs into pricing algorithms.

While some respondents thought that the risks of AIs colluding on pricing were no greater than in the ‘real world’ (i.e. by humans), there was a general concern that pricing algorithms could learn to collude autonomously, and that tacit collusion would be able to occur in a concealed way with dominant firms profiting the most.
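How collusion could emerge without any agreement can be illustrated with a deliberately simplified sketch (the pricing rule is hypothetical): two algorithms that each learn to match, rather than undercut, the rival's price will ratchet the market price upwards and hold it there.

```python
# Toy illustration (hypothetical): two undercut-averse pricing algorithms that,
# without any communication or agreement, converge on the higher price simply
# by each matching the rival whenever the rival charges more.
def next_price(my_price, rival_price):
    # A simple learned rule: follow the rival up, never follow them down.
    return max(my_price, rival_price)

a, b = 10.0, 12.0
for _ in range(3):
    a, b = next_price(a, b), next_price(b, a)
print(a, b)  # both settle at 12.0: a collusion-like outcome with no 'agreement'
```

No human ever instructed either algorithm to collude, which is why respondents questioned how the legal concept of an 'agreement' maps onto this behaviour.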

Collusion between algorithms raises interesting legal questions about how to adapt current collusion tests, which are built around humans interacting:

On tacit collusion, respondents noted that the definition of an ‘agreement’ would need to be considered in an algorithmic context, as would how to differentiate treatment of algorithmic and human interactions under the law. In addition, respondents noted that regulators would have to consider whether the designers of algorithms intentionally design them to learn to collude and to consider the role of intent in and of itself.

Beyond pricing, personalisation by AI also raised other concerns:

…respondents highlighted the negative effects of personalisation, particularly on the marketplace of ideas. They noted that if consumers do not know what is available to other people due to a lack of transparency, they are not able to freely choose the information they see.

The CMA’s invitation to ‘dob-in’

The CMA asked respondents to comment on specific target areas and companies which the CMA should investigate in relation to competition concerns with algorithms.

Several respondents raised concerns about the lack of transparency around the data used to optimise prices for new customers in general insurance markets, and the inability of consumers to opt out of their data being used.

Others also suggested that the CMA investigate YouTube’s demonetisation of content, such as the company's banning of certain words without clear justification. There were also claims by some respondents about Amazon's practices of permitting counterfeit products to be sold by third-party vendors and only taking action in one jurisdiction (rather than all jurisdictions in which Amazon operates) and using its insider knowledge of third-party vendor profits and margins to create cloned own-brand products.

Investigation and compliance techniques

Respondents said that, given the unique challenges of algorithms, the CMA needed to gather datasets over time and use these in future inspections and investigations, thereby building up the infrastructure required for effective oversight.

Some of the ideas put forward were:

  • The Markup’s Citizen Browser, a custom web browser through which a nationally representative panel of paid users shares real-time data from their social media accounts with The Markup, forming statistically valid samples of a population to understand how algorithms operate.
  • Transparency could also be encouraged through open registers of algorithmic systems, such as the Amsterdam Algorithmic Register, in which the city discloses information about algorithms it uses.
  • Respondents noted that guidance and clarity were needed around how competing companies could show they had followed various guidelines, to ensure they were not undercut by other companies that took shortcuts. This could be addressed by requiring disclosure of standardised information about how algorithms are developed or trained, as well as checks for known biases or equalities issues.
  • Some respondents considered that regulators could define clear benchmarks of algorithmic accountability that would act as a minimum level of self-policing firms are required to do: i.e. a ‘safe’ list of lawful and acceptable algorithmic systems, alongside a ‘no go list’ of use cases that are clearly unfair or anti-competitive.

Where to from here?

It is unclear whether this feedback will help provide the basis for a full-scale investigation by the DMU into Big Tech companies and their use of algorithms, or if the concerns raised by respondents will be generally addressed through the suggested policies from the consultation paper.

In press interviews following the release of the research paper, chief executive of the CMA, Andrea Coscelli, said that the regulator plans to mount several probes into internet giants including Google and Amazon in the coming months. Coscelli said that the CMA was looking not just to replicate probes already in existence elsewhere, but to seek out areas that had not yet been investigated.

Post-Brexit, the CMA is reportedly planning to mirror a continuing investigation of Amazon by the EU. However, Brussels is reported to be struggling to gather sufficient evidence that Amazon’s algorithm boosted its own products over rivals, partly because it had difficulty understanding how the company's algorithms functioned.


Read more: Algorithms: how they can reduce competition and harm consumers
