As discourse continues to emerge about the intersection between AI and public policy, it is becoming evident that regulators will need a suite of new tools and approaches to ensure that people are appropriately protected from harm. We have previously argued that it would be naïve to think that competition law can regulate disrupted industries without itself being disrupted.
There is concern that the increased personalisation made possible through highly tuned algorithms (which need not rely on the personal information traditionally protected by privacy laws) may negatively distort competition. A group of researchers from the Oxford Internet Institute led by Johann Laux has proposed a new tool that adapts the Herfindahl-Hirschman Index (HHI), a cornerstone of competition analysis, into a novel metric which they have titled the ‘Concentration-after-Personalisation Index’ (CAPI). They suggest that the CAPI could be used by regulators to discover, and consequently dilute, highly concentrated pockets of personalisation that could be harming consumer welfare.
What is personalisation, and what is the issue with it?
Firms gather and process large amounts of consumer data, which is then parsed by AI, with the ultimate aim of making their interactions with consumers highly personalised, such as by showing the user advertisements for products in which they may be particularly interested, or by tailoring prices to the individual customer.
While acknowledging that consumers may benefit from a better matching of ads with their preferences, Laux et al argue that “[t]oday’s ad-technology can predict consumer behaviour and biases beyond what consumers themselves can be expected to know or understand.”
There are a variety of harms that have, more generally, been argued to arise from personalisation, such as a reduction in personal privacy, or the emotional manipulation of users. Of concern to Laux et al are situations where personalisation makes welfare more likely to shift from consumers to producers than it otherwise would.
According to the article, this can occur in two ways. First, with regard to personalised pricing, firms may be able to place consumers in small, homogeneous segments and charge them much closer to their ‘willingness to pay’, thus reducing consumer surplus. Second, with regard to personalised advertising, Laux et al state that in a highly targeted pocket where a user is only shown advertisements from one seller of a particular product, the relevant product market may effectively resemble a monopoly from the consumer’s perspective, as the consumer is not aware of alternative suppliers. It follows that a highly concentrated pocket containing only a few firms may similarly resemble a more concentrated market to consumers.
The CAPI: reimagining an old metric for the digital age
The HHI is a time-tested mainstay of a competition regulator’s arsenal, used to assess the degree of concentration in a relevant market by squaring the market share of each competing firm and summing the results. The index ranges from 0 to 10,000: the higher the HHI, the more concentrated the market is said to be (a market with an HHI of 10,000 would be an absolute monopoly).
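The calculation is simple enough to sketch in a few lines of code (our own illustration, not drawn from the article). With shares expressed as percentages, the index runs from near 0 up to 10,000:

```python
# Illustrative sketch: computing the Herfindahl-Hirschman Index (HHI)
# from percentage market shares. Squaring each share and summing gives
# an index between 0 and 10,000.

def hhi(shares_pct):
    """HHI: sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([100]))             # absolute monopoly -> 10000
print(hhi([25, 25, 25, 25]))  # four equal firms  -> 2500
```

Note how four equally sized firms already bring the index down to 2,500, reflecting the intuition that more, smaller competitors mean a less concentrated market.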
So, why doesn’t the HHI work for AdTech? According to Laux et al, the HHI looks at the issue through the ‘wrong end of the telescope’ (emphasis added):
“In the extreme case, an individual consumer may be targeted by one seller with one product or service only. For such a ‘targeting pocket’ to occur, it would not be necessary for a single firm to have gained significant market power…. The HHI would therefore not detect concentration in the market, and we would not speak of an actual ‘monopoly’.”
Laux et al have modified the HHI into the CAPI, which measures the diversity of advertisements experienced by a consumer with the ultimate aim of detecting highly concentrated pockets. The econometric underpinnings of the model are relatively complex, so the following overview will only involve a cursory explanation of the model.
The CAPI involves a two-step process that calculates an ‘HHI within an HHI’. It first treats each individual consumer as a distinct ‘market’, assessing the concentration of advertising experienced by each individual, and then assesses those individual markets in aggregate. Importantly, Laux et al do not appear to be saying that each consumer is, in fact or as a legal concept, a ‘market of one’ for the purposes of competition policy; rather, this is how they make the HHI methodology account for the relative concentration of advertisements as they appear to consumers. Hence, the CAPI spins the telescope around: from suppliers pumping out advertisements for a similar product, to what each consumer actually sees.
The difference between the HHI and the CAPI is drawn out if we consider a case where three mattress suppliers each advertise to approximately one third of the population (and all three together advertise to the entire population). The HHI would essentially divide the shares of advertising evenly among the firms, with each having an approximate 33% market share, even if each consumer only ever sees ads from one of the three sellers due to highly personalised adverts. The CAPI, however, adjusts for personalisation: if consumers are only shown adverts from one seller, the CAPI would show a much higher level of concentration, reflecting the limited effective competition from the consumer’s perspective.
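The ‘HHI within an HHI’ idea can be sketched as follows. This is our own simplification for illustration, not the authors’ published econometric specification: each consumer’s advert exposure is treated as a tiny market, a per-consumer HHI is computed, and those are then averaged across the population.

```python
# Illustrative sketch of the 'HHI within an HHI' idea (our simplification,
# not the authors' exact CAPI specification).

def hhi(shares_pct):
    """Sum of squared percentage shares (0 to 10,000)."""
    return sum(s ** 2 for s in shares_pct)

def capi_sketch(exposures):
    """exposures: one dict per consumer mapping firm -> share (%) of the
    adverts that consumer sees. Returns the mean per-consumer HHI."""
    per_consumer = [hhi(e.values()) for e in exposures]
    return sum(per_consumer) / len(per_consumer)

# Three mattress firms each hold ~33% of all adverts, so the market-level
# HHI is ~3,333 -- but targeting is perfect: every consumer sees only one firm.
population = [{"A": 100}] * 33 + [{"B": 100}] * 33 + [{"C": 100}] * 34

print(hhi([33.3, 33.3, 33.4]))  # market-level view: ~3,333, looks competitive
print(capi_sketch(population))  # consumer-level view: 10000.0, a 'monopoly'
```

The same advertising data yields a moderate market-level index but a maximal consumer-level one, which is precisely the ‘targeting pocket’ the CAPI is designed to surface.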
A worked example
A worked example used in the article also helps to illustrate how the CAPI works, and how it differs from HHI.
Imagine a three-stage scenario, again involving the online advertisement of mattresses. In stage one, there is only one mattress store, Dreamy, which advertises randomly (i.e. not using targeted advertising) to 50% of the population. In stage two, a competitor, MyMattress, enters and, instead of random advertising, uses some personalisation in an attempt to specifically target half the total population, such as women. In stage three, another competitor, SleepTight, enters and utilises an extreme targeting strategy, targeting only a specific tenth of the population: consumers highly likely to purchase its mattresses. The results show that, while the HHI shows a marked decrease in concentration as more firms enter, the CAPI decreases to a lesser extent, reflecting that some individuals within the population may only see advertisements from one or two companies and may therefore experience a much more concentrated market.
Laux et al do not support a total ban on personalisation because there are consumer benefits to be had. They also express concern that partial bans outlawing the use of particular criteria for personalisation, such as the EU’s Unfair Commercial Practices Directive, which lists ‘mental or physical infirmity, age or credulity’ as factors of vulnerability, are under-inclusive: “[i]n as much as personalisation techniques can predict psychological states and cognitive biases, every consumer is the potential subject of economic exploitation in the market, for example by paying higher prices.” While there have been calls to expand the legal definition of consumer ‘vulnerability’, Laux et al argue that the law will always be playing catch-up because “as the inferential analytics behind personalisation is a fast-moving field of technology, there is always a residual risk of being under-inclusive when redefining legal terms.”
So, they propose a new anti-trust remedy which regulators could mandate to address overly concentrated ‘pockets’ identified by the CAPI: adding noise to the targeting algorithm, thereby randomly diversifying the adverts to which an average individual consumer is exposed:
“If consumers receive a certain amount of non-targeted, hence noisy, adverts, it allows them to see a wider range of offers. They could thus become aware of the existence, quality and prices of products and services which advertisers would not pay a premium to supply them with based on their consumer profile. It would also allow them to see adverts that other consumer groups are being served with and thus increase public oversight.”
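The mechanics of the remedy can be sketched simply (a hypothetical illustration of ours, not the authors’ implementation): with some probability — the noise level — the targeted advert is replaced by one drawn at random from all advertisers.

```python
import random

# Hypothetical sketch of 'noisy targeting' (our own illustration): with
# probability `noise`, the personalised advert is replaced by one chosen
# uniformly at random from all advertisers in the market.

def serve_advert(targeted_firm, all_firms, noise, rng=random):
    """Return the firm whose advert this consumer sees on one impression."""
    if rng.random() < noise:
        return rng.choice(all_firms)  # non-targeted, 'noisy' advert
    return targeted_firm              # personalised advert, as before

firms = ["Dreamy", "MyMattress", "SleepTight"]
random.seed(0)

# A consumer trapped in a SleepTight 'targeting pocket': over many
# impressions at 40% noise, adverts from all three firms get through.
seen = {serve_advert("SleepTight", firms, noise=0.4) for _ in range(1000)}
print(sorted(seen))
```

At 0% noise the consumer remains in the pocket; at 100% noise targeting disappears entirely, which matches the article’s observation that fully noisy targeted advertisers simply revert to random advertising.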
Laux et al acknowledge that a qualitative judgment needs to be made about how much noise to introduce. The ‘cost’ of adding noise is likely to be a loss of clicks compared to a scenario without noisy targeting, a cost which escalates as more noisy advertisements are added. They also acknowledge that it could backfire: “if advertising budgets and publishers’ space for adverts remain the same, noisy targeting could also increase prices in auctions as advertisers now need to place more ads to use up their budgets.”
In the above example of Dreamy, MyMattress and SleepTight, Laux et al model the impact on the click-through rates of the three firms as increasing levels of noisy advertisements are introduced (charted in the article):
At 100% noise, there is no personalised advertising. Dreamy, the incumbent, was already advertising randomly to the whole population. MyMattress was targeting 50% of the population, so it now randomly advertises to 50% of the population. SleepTight was advertising to a targeted 10%, so it now advertises to a random 10%. At this very noisy end of the regulatory intervention, the click-through rates of the targeted advertisers, MyMattress and SleepTight, should revert to that of the non-targeted advertiser, Dreamy.
Laux et al say that this modelling “does not show a linear deterioration with increasing noise – the curve flattens out around the value of 40%.” They conclude that adding 40% noise would be a proportionate remedy to a highly targeted pocket because it achieves a 96% reduction in the variance of the group-level CAPI at the ‘price’ of click-through losses of 14% and 29% respectively, which they argue “appears well-balanced, given the risks associated with certain consumers being trapped in a ‘targeting pocket’.”
Some of the claims made by Laux et al are bold, to say the least, and would require further probing by researchers and policy-making bodies before being adopted – the premise alone, that increased personalisation of online advertising distorts competition and reduces consumer welfare, is likely to stoke controversy among academic economists. Online advertising is only one aspect of a consumer’s interactions with markets more broadly, and consumers would generally become aware of alternative suppliers through other means. Laux et al counter that advertising for some products, like online games, occurs almost entirely online, so that without online advertisement the existence of particular products may be virtually undiscoverable to consumers.
Furthermore, there are several limitations to the methodology and remedy proposed by Laux et al. For instance, the utility of the CAPI, like that of the HHI, depends on the market being properly defined – its workability is premised on the assumption that the relevant product and geographic markets to which the advertisements pertain can be neatly delineated. It is also an open question how regulators would access the necessary data, let alone have the ability or resources to parse and influence it.
That said, it is generally well accepted that the AI revolution poses a new range of issues, and while the exact solution proposed by Laux et al may not be a silver bullet, their article demonstrates the type of creative, novel thinking that will likely be required to meet these issues, and it should be commended.
Authors: Owen Fischbein and Peter Waters