The UK Labour Government has announced its intention to abandon the previous government’s light-handed AI regulatory approach and introduce “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

As its contribution to this rethink, a recent Ada Lovelace Institute report (Smakman and Davies) draws lessons from pharmaceutical, financial services and climate regulation.

Lesson 1:

AI regulation will require well-resourced independent institutions with adequate statutory power to operate effectively, strategically and with flexibility over the long term.

The Lovelace paper torpedoes the rationale underlying the previous UK Government’s ‘no AI legislation yet’ approach:

  • The previous PM Rishi Sunak argued we cannot “write laws that make sense for something we don’t yet fully understand”. The Lovelace paper flips Sunak’s argument on its head: gathering evidence can itself require regulation (for example, to require AI developers to report) – so waiting for evidence before acting is not the optimal approach. It also points out that there is never a perfect, off-the-shelf regulatory model, and that regulation inevitably develops over time, sometimes in response to major harms, such as drug approval processes in response to thalidomide and financial services regulation in response to the Global Financial Crisis. More modern forms of regulation, such as climate mitigation, also show it is possible to go beyond simply responding to past crises and to design regulation from the outset to be iterative and flexible.

  • The previous UK Government was concerned AI regulation would disadvantage the UK in the global competition for AI development. Again, the Lovelace paper points out that the opposite has been the case in other regulated sectors. The UK was the first country to enact comprehensive climate mitigation legislation, establishing a baseline which other countries followed, while in contrast, the UK’s deregulation of financial services, aimed at making London the world’s largest financial centre, contributed to multiple financial crises. In any event, major US states and the EU are now implementing tough AI regulation, and given the UK is a comparatively small AI services market, “it is unlikely that there will be much benefit for the UK in setting out a framework for AI regulation that is significantly weaker than that set by its peers”.

The Lovelace paper’s key takeaways from the compared sectors for a new UK AI regulatory framework are:

  • Emulate the innovative approach of climate change regulation in balancing certainty and flexibility: set clear high-level goals, underpin them with practical targets, establish a monitoring infrastructure to track progress against those targets, and empower regulators to respond to the evidence by changing the targets without having to wait for legislative change to catch up.

  • Safeguard the independence of AI regulators from political interference because, as happened with climate, the current consensus around acting on AI safety will ebb and flow.

  • Lean more heavily on experts by establishing a body like the Climate Change Committee, composed of experts who provide parliament and the public with a disinterested assessment of government policy and progress towards its regulatory goals.

  • Learn from the regulatory capture risks faced by pharmaceutical and financial regulators in overseeing firms that are ‘too big to fail’. The slashing of the UK drug regulator’s funding has left it vulnerable to pressure from drug companies to accelerate regulatory review so that treatments become available sooner. The Lovelace paper acknowledges the challenge of addressing the ‘revolving door’ between regulators and the regulated, given the high salaries tech firms are prepared to pay for talent. Industry levies may provide a new funding source but could also increase industry leverage over regulators.

Lesson 2:

Building and maintaining confidence in critical services and technologies requires the implementation of assurance mechanisms that can demonstrate they are safe, reliable and trustworthy.

The Lovelace paper argues that, while AI regulators need a full armoury of ex ante and ex post powers, their public safety mission will be best promoted by strong ex ante requirements to prevent harms from AI occurring in the first place. Pharmaceutical regulation requires clinical trials of drugs and financial regulation applies a ‘fit and proper’ test to firms before they can enter the market and to their senior employees before they can be hired.

The experience in these other sectors shows that pre-clearance thresholds require ‘robust metrics’ independently defined and assessed by the regulator – no ‘marking your own homework’.

However, the Lovelace paper recognises that tough pre-clearance conditions can advantage large, well-resourced incumbents over new entrants, contributing to market concentration. The lesson from the pharmaceutical sector is that AI regulation should treat economic considerations (such as market structure and prevailing business models) as levers affecting company behaviour, alongside regulatory requirements, rather than as a separate or siloed policy area.

However, it is unclear from the Lovelace paper whether this would mean applying pre-clearance criteria differently to incumbent AI providers and new entrants, or lowering thresholds that might otherwise apply for safety reasons. It is also unclear how this squares with the Lovelace paper’s view that safety and innovation objectives should not be combined, as discussed next.

Lesson 3:

Sectoral regulators can be less effective if their objectives conflict with the goal of ensuring technologies, products and services are safe, effective and trustworthy.

The Lovelace paper argues that a mix of safety and innovation missions places a regulator in the undesirable position of having to prioritise between objectives, choosing between the interests of different stakeholders and ultimately making value-based, largely subjective decisions about how to trade off the conflicting objectives in individual cases.

While the UK drugs regulator’s primary objective is public safety, its objectives were expanded in 2021 to include encouraging innovation, which some industry observers believe has resulted in the regulator focusing more on pharmaceutical industry needs than on patient safety, especially in the context of accelerated approval pathways.

Lesson 4:

To help mitigate the risk of institutions becoming unduly influenced by particular interests, mature governance regimes should include an ecosystem of independent institutions that can hold each other accountable and act as effective checks and balances.

The UK drug regulator’s role is complemented by a research body, the National Institute for Health and Care Research, and a ministerial policy advisory body, the Commission on Human Medicines:

“[w]hile these bodies are all part of a unified ecosystem, acting towards shared aims, [t]his distribution of power and accountability helps to ensure the resilience and integrity of the overall system even when individual parts are vulnerable to pressure from external stakeholders.”

The AI regulatory framework should also utilise emerging approaches to empowering ordinary citizens and civil society organisations to counterbalance the immense power of industry. For example, the UK financial regulators have established a Financial Services Consumer Panel, a Practitioner Panel and a Smaller Business Practitioner Panel.

Lesson 5:

Post-market monitoring measures can help ensure risks of emerging technologies and sectors are better understood, prevented and mitigated.

The Lovelace paper says there “is a strong case for post-market monitoring of AI systems because their performance and behaviour can change with new data”.

The approach to post-release drug monitoring illustrates some pitfalls to avoid:

  • Regulators may be tempted to address risk by allowing public release on condition that additional investigation or other work is undertaken by the drug company, but an EU study shows only 47% of such conditions were satisfied within the required timeframe.

  • Voluntary reporting of harmful incidents is weak, with around 90% of adverse drug reactions going unreported in the UK.

Financial regulation and climate change mitigation provide more useful models for monitoring systemic risk in AI systems post-release:

  • Financial institutions are stress tested for systemic risk using standardised metrics subject to independent oversight.

  • Emission reductions are measured economy-wide and by key sector, overseen by an independent expert committee that reports to parliament, with the legislated consequence that the government must take additional measures when targets are not met.

Post-release risk monitoring and mitigation will be enhanced if AI providers and governments have stronger accountability as found in the compared sectors:

  • The experience in pharmaceuticals shows that tort and contract law can be inadequate to address the harm individuals suffer because of the difficulty of establishing fault, and this is likely to be the same with AI. One response in pharmaceuticals has been to establish no-fault compensation schemes, such as for blood contaminated with HIV and/or HCV during medical treatments.

  • Under climate change regulation, the government has a legal duty to act if targets are not being met, and private citizens have brought legal proceedings to enforce that duty, sometimes leading to revisions in the government’s net zero strategy.

  • The financial sector has a Financial Ombudsman Service; the Lovelace paper suggests that “a similar function for consumer complaints about AI systems may provide governments and regulators with a useful source of information about where AI-enabled harms are occurring”.

  • Financial services executives are subject to an individual accountability regime which requires them to act with integrity, to cooperate with regulators and share relevant information with them, and to pay due regard to consumer interests, backed by significant penalties.

Conclusion

While a bit of a smorgasbord, the Lovelace paper draws some useful ideas from other regulated sectors.

While the paper’s caution against vesting the same regulator with both AI safety and innovation objectives is well made, the AI safety side of the regulatory ledger seems to be loading up, but what are we doing on innovation? AI seems to present smaller AI service markets (and Australia is much smaller than the UK) with opportunities for innovation which did not exist with earlier, more vertically integrated digital technologies. Are we doing enough to exploit these opportunities?