16/08/2022

While the UK promotes itself as a global leader in AI, a recent study by the Alan Turing Institute found most regulators badly unprepared for “the growing set of challenges and opportunities posed by the emerging tidal wave of AI innovation.”

The report sees AI as presenting a two-sided challenge to regulators:

  • Regulation of AI: ensuring that regulatory regimes are “fit for AI” is key to preventing AI-related harms, but also to promoting AI innovation by providing regulatory certainty and building public trust.
  • AI for regulation: identifying opportunities to use AI to make the ways in which regulators pursue their missions more effective and more efficient.

The current lack of AI readiness

The researchers set out to build a survey tool to track the AI readiness of all UK regulators, but they soon ran into the problem that there was no single list of regulators. They had to build their own list, which ranged from large, well-resourced bodies like the Competition and Markets Authority, the Civil Aviation Authority and the Bank of England, to smaller, more mission-specific regulators such as the Human Fertilisation and Embryology Authority and the Gangmasters and Labour Abuse Authority, through to more quaintly British agencies such as the Farriers Registration Council.

Through a combination of in-depth interviews with a representative range of large, medium and small regulators and piloting of the survey tool, the researchers drew the following conclusions:

  • regulators – whether small, medium, or large – consistently identified significant gaps in AI readiness across their systems, organisations and individual staff members.
  • in particular, perceptions of a lack of organisational readiness figured most prominently – what the researchers called ‘absorptive capacity’: the organisational ability to draw upon a strong knowledge and skills base about AI, to use and assimilate new knowledge related to AI into existing practices and capabilities, and to have accessible and established mechanisms for sharing and disseminating knowledge about AI throughout the organisation.
  • the regulators themselves largely put this down to an absence of strong senior leadership. Individuals in leadership roles were seen to require technical and socio-technical upskilling to bolster their cognitive participation and to cultivate change readiness.
  • regulators saw an urgent need to develop strong mechanisms for inter-organisational cooperation between sectoral regulators to promote collective learning, resource pooling, and skills development.

The challenge of ‘Regulation of AI’

The report framed the challenge of ‘regulation of AI’ this way: “the possibilities of using AI seem limitless, challenging regulators with limited resources to effectively oversee a vast landscape of use cases.”

This plays out in the UK regulatory environment in the following ways (with echoes for Australia).

First, in the AI regulatory equivalent of Brexit, the report noted that the UK and the EU are taking divergent approaches to institutional arrangements to regulate AI:

“While the EU has proposed harmonised rules for AI regulation, the UK’s current approach is to regulate AI technologies and services through existing regulators. This means that more and more vertical regulators are being put under pressure to understand where and how AI is being used within their remits and to anticipate the various consequences and risks associated with this. Such an unprecedented demand for technical know-how and horizontal problem-solving can pose significant difficulties for regulators, particularly those who have not traditionally engaged with new and rapidly evolving technologies.”

Second, this ‘alphabet soup of regulators’ faces cross-jurisdictional challenges unlike any previous technological change because:

“Companies using AI often function across traditional sectoral boundaries, and uses of AI may have impacts which fall within the remits of more than one regulatory body. Therefore, regulators must collaborate to ensure consistent, complementary, and effective regulation.”

Third, as a result, there is an obvious risk of gaps or inconsistencies in regulatory approaches due to the siloed nature of the regulatory landscape. This is more than regulators simply not communicating with each other about how they are regulating AI; they are not even working from the same baselines because:

  • regulators do not have a common language around AI:

“Although there are obviously differences between different regulators in terms of scope and remit, there must be some common requirement for non-specialists to understand [AI], to get people speaking the same language if nothing else, because that’s one of the major problems at the moment, people are talking about the same thing but in different ways.”

  • while individual regulators have specific expertise relating to their particular sectors, they may not be well-equipped to anticipate and identify the various risks of AI technologies for the very reason that these risks cut across traditional sectoral boundaries:

“Currently risks and potential or actual harms are all-too-often characterised as unique, individual risks. This overlooks the system-wide and structural nature of risks posed by AI innovation.”

Fourth, traditional regulatory mechanisms of auditing, enforcement, and oversight can easily be outmatched by the complexity and speed of AI-enabled behaviour and the sheer amount of data that flows through high-traffic digital platforms. This is a challenge even for the best resourced and most sophisticated regulators, like the CMA; most small and medium regulators are simply overwhelmed.

Lastly, the researchers were of the view that, “in identifying and addressing risks relating to AI, it is important to take an anticipatory approach in order to ensure that regulatory responses are fit for purpose not only in relation to current applications of AI, but also to future uses.” However, this more ex ante approach requires regulators to have a depth of technical know-how and foresight about AI, which again is challenging to achieve in a siloed, fragmented regulatory landscape.

The opportunity of ‘AI for regulation’

The researchers found that while there is growing interest in ‘AI for Regulation’ across the regulatory landscape, most regulators do not yet have substantial capabilities in this area. As discussed below, this is not just about improving how regulators work, but goes to their credibility: if you don’t know how to use AI, how can you regulate it?

There are some notable exceptions amongst UK regulators in using ‘AI for regulation’. The CMA’s DaTA unit has developed machine learning and natural language processing tools to identify possible breaches of consumer law on digital platforms. In June 2022, the CMA held a data analytics conference, at which the ACCC Chair, Gina Cass-Gottlieb, spoke. Risk scoring of regulated individuals and entities is used as a ‘pro-active tool’ in the UK for driving instructors, aged care homes, GPs, financial advisers, and restaurant hygiene.
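
The report does not describe how these risk-scoring tools are actually built, but the general idea is straightforward: train a supervised model on the outcomes of past inspections and use the predicted probabilities to decide where scarce inspection resources go first. The following is a minimal, purely hypothetical sketch of that pattern in Python; every field name and data point is invented for illustration and bears no relation to any real regulator’s system.

```python
# Hypothetical sketch of inspection risk scoring: fit a simple classifier on
# past inspection outcomes, then rank regulated entities by predicted risk.
# All column names and values are invented for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical inspections: basic features about each entity plus whether a breach was found
history = pd.DataFrame({
    "years_registered": [12, 1, 7, 3, 20, 2],
    "prior_complaints": [0, 4, 1, 3, 0, 5],
    "late_filings":     [0, 2, 1, 1, 0, 3],
    "breach_found":     [0, 1, 0, 1, 0, 1],   # outcome of the past inspection
})

features = ["years_registered", "prior_complaints", "late_filings"]
model = LogisticRegression().fit(history[features], history["breach_found"])

# Entities awaiting inspection: score and rank them so inspectors visit the riskiest first
pending = pd.DataFrame({
    "entity":           ["A", "B", "C"],
    "years_registered": [15, 2, 5],
    "prior_complaints": [0, 6, 2],
    "late_filings":     [0, 4, 1],
})
pending["risk_score"] = model.predict_proba(pending[features])[:, 1]
print(pending.sort_values("risk_score", ascending=False)[["entity", "risk_score"]])
```

In practice, even a model this simple raises exactly the governance questions the report is concerned with (bias in the historical inspection data, explainability of the score, human oversight of the ranking), which is part of why ‘AI for regulation’ and ‘regulation of AI’ are hard to separate.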

The report identified some other possible ‘AI for regulation’ uses:

  • Modelling to inform rulemaking or regulatory approval. The use of ML and previously unused forms of data can result in models whose improved accuracy, reliability, or granularity enables improved decisions in such cases: e.g. assessing the toxicity of chemical compounds.
  • Modelling and simulation for scenario analysis. In cases where regulators need to understand potential market dynamics or other aspects of future scenarios, novel approaches to modelling and simulation enabled by AI can provide innovative insights. In merger analysis, framing the reasonably likely or possible outcomes that should form the counterfactual is a particularly hard issue (though how the ACCC or a litigating party would convince a Federal Court judge that AI is better at it than the judge is not immediately apparent).
  • Analysing document content. AI tools can also perform more sophisticated forms of analysis on the content of documents. Intellectual property agencies could use AI tools to search for similarities between the content of patent or trademark applications and existing patents or trademarks to determine the merit of applications or to categorise their content (a minimal sketch of this kind of similarity screening appears after this list).
  • Monitoring regulated behaviour. Where regulators have access to observational data concerning the conduct of regulated entities, AI tools can be used to monitor behaviour and detect instances of non-compliance or misconduct: for example, instances of illegitimate “company phoenixing” (based on company registry data and network analysis). The Danish competition regulator has developed a tool to detect bid rigging in public procurement contracts.
  • Monitoring indirect sources of information. Taking a leaf out of the social media companies’ playbook, AI tools can be designed to derive insights and detect relevant signals in such sources: e.g. a food safety authority screening social media posts for signs of food poisoning incidents related to individual restaurants or financial regulators using AI tools to monitor public sentiment in relation to individual supervised firms.
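
To make the document-analysis use case above concrete, here is a minimal, hypothetical sketch of how an IP agency might screen a new application against existing registrations using TF-IDF text similarity. The corpus, threshold and field contents are invented for illustration; a real examination system would obviously need to weigh far more than raw textual overlap.

```python
# Hypothetical sketch of similarity screening for IP applications: compare the
# text of a new application against existing registrations and flag close matches
# for a human examiner. The corpus and threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_marks = [
    "sparkling mineral water sold in recyclable glass bottles",
    "cloud accounting software for small construction firms",
    "organic dog food made from free-range chicken",
]
new_application = "still and sparkling water in returnable glass bottles"

# Fit TF-IDF on the existing register, then project the new application into the same space
vectoriser = TfidfVectorizer(stop_words="english")
register_matrix = vectoriser.fit_transform(existing_marks)
application_vector = vectoriser.transform([new_application])

# Cosine similarity against every registered mark; anything above the threshold is flagged
scores = cosine_similarity(application_vector, register_matrix)[0]
for mark, score in sorted(zip(existing_marks, scores), key=lambda pair: -pair[1]):
    flag = "REVIEW" if score > 0.3 else "ok"
    print(f"{score:.2f}  {flag}  {mark}")
```

The same pattern (vectorise, compare, flag for human review) underlies several of the monitoring use cases above: the AI narrows the haystack, while the regulatory judgement stays with the regulator.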

What makes a regulator ‘fit for AI’?

The report, in language which is a little ‘management consulting-speak’, identified three key building blocks of AI capability within a regulator:

Innovation-values fit. Regulators must make efforts to develop and strengthen the values, beliefs, and purposes within their regulatory missions that can underwrite a pro-innovation stance on AI adoption and an openness to the accompanying changes in policies and practices. In other words, regulating AI may require a change in the substantive policies and principles which guide a regulator’s decision making: for example, last week we discussed whether competition law needs to be digitally disrupted.

Innovation-needs fit. The successful uptake of disruptive innovation and policy change is affected by the degree to which their characteristics align with the administrative and practice needs of users and the service needs of individuals affected by their implementation. Coming back to regulators’ credibility in regulating AI, regulators need to ‘practice what they preach’ by becoming users of AI themselves:

“For regulatory bodies to maintain integrity and public credibility, it is arguably necessary for there to be a deeper join-up and coherence between a body’s external-facing AI-related policy stances and its internal practices around the use of AI. This is especially true in light of unavoidable questions of governance and good practice the answers to which are not determined by the letter of any applicable laws and regulatory rules.”

Innovation-knowledge fit. Regulators must make efforts to upskill their workforce from top to bottom of their organisations. Such comprehensive upskilling efforts should involve professional development and training to expand technical knowledge, but they should also involve a socio-technical and ethics component that builds awareness of the social, moral, and policy stakes of AI.

How to solve the ‘alphabet soup’ problem

Much as the recent ANU report on a new digital regulatory framework for Australia concluded, UK regulators were unanimous in their view that the answer was not a single economy-wide AI regulator. They also considered that a co-ordinating or resource unit within the executive government was not appropriate because of the potential threat to regulatory independence.

On the other hand, regulators also considered that the current UK inter-regulatory consultation groups on digital regulatory issues fell short of what is needed. By way of Australian comparison, earlier this year the ACCC, the Australian Communications and Media Authority, the Office of the Australian Information Commissioner, and the Office of the eSafety Commissioner formed the Digital Platform Regulators Forum.

However, this report suggests that the silos between regulators would not be broken down unless the collaborative efforts were ‘impactful’, rather than just being talk shops.

The potential for a neutral intermediary role to be fulfilled by an independent research organisation was favoured by UK regulators. Academic institutions were considered to represent a “safe space” in which regulators could engage in open discussions, which would be valuable for informing and refining thinking.

The report recommended the establishment of an AI and Regulation Common Capacity Hub that regulators could draw on. The report gave the example of PEReN, a French national competency centre that provides data science capacity for regulatory bodies in relation to the regulation of digital platforms.

For all the compelling description in the Turing report of the depth of the regulatory challenges and opportunities of AI, its recommended outcomes seem modestly incremental. While the Turing report has much in common with the recommendations of the ANU report, including a resourced co-ordination body, the ANU report paints a bigger picture because it recognises that:

  • given the potential reach of AI applications, an AI regulatory framework needs to involve executive government agencies as well as independent regulators, although, in the ANU model, with some institutional separation from regulators to maintain their independence from the executive government; and
  • given the transformative impact on society and the economy, the highest levels of Government, through the federal cabinet, need to be formally involved in the framework. While AI gives rise to issues legitimately falling within the remit of regulators, there also needs to be recognition that there are larger policy, social and political implications of AI beyond that remit which are more properly addressed in the political process.

 

Read more: The Alan Turing Institute | Common Regulatory Capacity for AI