Recent UK research sheds light on sophisticated online targeting of vulnerable groups for high-cost credit (e.g. payday loans).
Online targeting gets a bad rap – but without it we would be lost online. Automated systems now make decisions about a significant proportion of the information people see online, and online targeting is part of many companies’ core business models. It can help people track their health and finances, and it can conveniently filter information to save consumers time. A Centre for Data Ethics and Innovation report found that consumers do not want online targeting stopped altogether, but do want higher standards of accountability, transparency and meaningful control.
Targeted advertising has been with us for a long time. In the offline world, targeting was based on the likely readers of particular print media or viewers of particular TV shows, rather than personal information.
Online targeting platforms iteratively collect data of ever greater breadth and depth, enabling them to analyse, infer and influence people’s online behaviour more accurately – they target YOU. Offline, the process of ‘buying’ (e.g. completing a printed credit card application and mailing it in) can itself be a form of consumer protection – it is a more reflective process, with time to change your mind. Online targeting can instead amplify consumers’ impulsive responses – their “first order preferences”. For example, experiments have shown that when online targeting platforms infer psychological traits (such as extraversion) from social media data, consumers can be 40% more likely to click on ads and 50% more likely to buy.
The UK research showed just how blunt and overwhelming these tools can be in the high-cost lending market:
- Over the past two years there have been 13 billion adverts or other pieces of digital material for “payday loans”.
- Debt consolidation brokers use targeted keywords such as “gambling debt”, “wiping debt”, “all credit types welcome” to attract vulnerable consumers with poor credit scores to view and apply for high-cost loans.
- One provider conducted 430 separate ad-tech campaigns on one day.
- Online forms that misleadingly look like simple enquiries are in fact loan application processes.
- The ‘small print’ terms and conditions are usually positioned below the onscreen ‘fold line’.
- The ‘cowboys are being out-cowboyed’ – vulnerable consumers risk unknowingly dealing with unauthorised ‘cloned’ firms, trusting fake social media accounts on Twitter or Reddit, and falling victim to fraud when applying for credit.
But not necessarily the ugly…
Governments, regulators and stakeholders have begun to explore potential solutions to heighten scrutiny and transparency of online targeting and to promote user empowerment. Some of the remedies proposed in the UK are:
- An Online Targeting Code of Practice to put some boundaries around the targeting techniques.
- Empowering the regulator to appoint independent third-party auditors to monitor compliance on an ongoing basis.
- Requiring online advertising platforms to maintain advertising archives for a specified period of time, detailing how they target consumers – especially for advertising related to age-restricted products, credit and housing, which carries wider societal harm implications. There is existing precedent for such an approach: Facebook unveiled an “ad library” in 2018 in response to criticism over its lack of transparency in using online targeting in political campaigns.
- Creating data intermediaries – data representatives – to promote user empowerment. These third-party intermediaries would advocate for the interests of individual users, create standardised user controls, and could be set up as trusts with fiduciary responsibilities to their members’ interests. It is questionable, however, whether such an approach would work effectively in high-cost lending markets, where vulnerable groups do not necessarily recognise that they are being targeted online.
Clearly, the global trend in ad-tech regulation is towards an increasingly interventionist approach. But what will that regulation look like? The UK’s approach is to set upfront ‘guard rails’ that platforms can apply in their day-to-day business, in preference to a vague ‘fairness’ standard coupled with substantial enforcement powers.