05/06/2024

The UK Government recently published its response to feedback on its 2023 White Paper on an AI regulatory framework for the UK, setting a post-Brexit course for a regulatory model very different from the EU’s recently adopted AI Act. The Ada Lovelace Institute described the UK Government’s proposed approach in politely oppositional terms as follows: 

“The government should be given credit for evolving and strengthening its initially light-touch approach to AI regulation in response to the emergence of general-purpose AI systems… However, much more needs to be done to ensure that AI works in the best interests of the diverse publics who use these technologies. We are concerned that the government’s approach to AI regulation is ‘all eyes, no hands’: it has equipped itself with significant horizon-scanning capabilities to anticipate and monitor AI risks, but it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact.”

A Conservative peer has introduced proposed legislation in the House of Lords to establish a statutory body, the AI Authority, to address this perceived gap in AI-specific regulatory powers.

Where is the UK Government coming from?

The UK Government’s approach to AI is set against the background of the 'alphabet soup' approach the UK has traditionally taken to regulation, with a plethora of sector-specific regulators such as Ofcom for communications, Ofwat for water, and Ofgem for electricity and gas. This contrasts with Australia, where a single regulator, the ACCC, has both cross-sectoral and sectoral functions in a regulatory version of 'Boy Swallows Universe'.

The UK Government’s proposed approach to AI governance is to rely on these decentralised, sectoral regulators, bolstered by 'central' functions that provide support, coordination, and coherence in AI decision-making across their sectors. The UK Government says that, consistent with its “commitment to lead the international conversation on AI governance [by demonstrating] the value of our pragmatic, proportionate regulatory approach”, no new legislation is contemplated (at least initially) because:

“New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances.”

As its starting point, the UK Government recognised that there are compelling policy reasons to act with urgency in putting an AI regulatory framework in place to:

  • drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty. The UK Government recognised that "some AI risks arise across, or in the gaps between, existing regulatory remits… [i]ndustry feedback was that conflicting or uncoordinated requirements from regulators create unnecessary burdens and that regulatory gaps may leave risks unmitigated, harming public trust and slowing AI adoption".
  • increase public trust in AI by addressing risks and protecting fundamental values. The UK Government recognised that this required much more than addressing cyber risk and misinformation because "not all AI risks arise from the deliberate action of bad actors… [s]ome AI risks can emerge as an unintended consequence or from a lack of appropriate controls to ensure responsible AI use".
  • strengthen the UK’s position as a global leader in AI, retaining its place as the third-ranked destination for AI investment.

The UK Government took a sideswipe at the EU’s AI Act – not only for its more heavy-handed regulatory approach but also for the whole notion of a 'risk-based' approach to categorising AI applied broadly across AI models and the economy:

“We will not assign rules or risk levels to entire sectors or technologies. Instead, we will regulate based on the outcomes AI is likely to generate in particular applications. For example, it would not be proportionate or effective to classify all applications of AI in critical infrastructure as high risk. Some uses of AI in critical infrastructure, like the identification of superficial scratches on machinery, can be relatively low risk.”

The UK Government describes its approach as "context-based", allowing a weighing of "the risks of using AI against the costs of missing opportunities to do so, [including] … that AI risk assessments should include the failure to exploit AI capabilities". Its thesis is that this more granular analysis is better undertaken by the sectoral regulators, drawing on their industry expertise, than by a more remote, centralised AI-specific regulator, which could end up overly focused on AI risk at the cost of innovation and opportunity.

The proposed UK regulatory regime

Recognising there is no consensus on "what is AI", the UK Government proposes to define AI by reference to two characteristics:

  • The 'adaptivity' of AI can make it difficult to explain the intent or logic of the system’s outcomes:
    • AI systems are 'trained' – once or continually – and operate by inferring patterns and connections in data which are often not easily discernible to humans.
    • Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.
  • The 'autonomy' of AI can make it difficult to assign responsibility for outcomes:
    • Some AI systems can make decisions without the express intent or ongoing control of a human.

Existing sectoral regulators will be expected to apply 5 values-focused cross-sectoral principles in their work, which the UK Government says it has strengthened in response to feedback:

  • Principle 1 - safety, security, and robustness: while noting that "[s]afety will be a core consideration for some regulators [such as in health services] and more marginal for others", all regulators must still undertake, and periodically repeat, a specific risk assessment of AI within their jurisdiction. 'Risk' should cover not only cyber-security risks and misinformation but also whether the AI model functions as intended and described. Risk mitigation likewise focuses not only on design and development but on the whole lifecycle, and regulators may need to issue guidance requiring lifecycle actors to regularly carry out evaluations of AI models they have built or are using.
  • Principle 2 - appropriate transparency and explainability: sectoral regulators should ensure they have access to sufficient information to make decisions about individual AI models, including direct access to the data inputs used for training. People impacted by AI should also have access to enough information about the use of AI and how it works to be able to exercise their rights, including through 'nutrition' labelling. That said, the UK Government recognised that this was a 'work in progress':
    • “The logic and decision-making in AI systems cannot always be meaningfully explained in a way that is intelligible to humans, although in many settings this poses no substantial risk. It is also true that in some cases, a decision made by AI may perform no worse on explainability than a comparable decision made by a human. Future developments of the technology may pose additional challenges to achieving explainability.”
  • Principle 3 - fairness: the UK Government noted that while 'fairness' is a general principle already embedded in many laws administered by the sectoral regulators, they will need to go the next step of providing specific guidance, including case examples, on how the fairness principle will apply to the AI models they oversee. Sectoral regulators will also need to take into account how fairness is defined outside their remit, such as in non-discrimination law.
  • Principle 4 - accountability and governance: the UK Government observed that AI governance for businesses using AI is a major, complex challenge precisely "because AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way that has not been explicitly programmed or foreseen". Sectoral regulators should provide guidance on the responsibilities of AI life cycle actors to demonstrate proper accountability and governance, which will probably require regulators to specify a combination of tools and approaches, including impact assessments, adapting emerging industry standards, and reporting to the regulator.
  • Principle 5 - contestability and redress: users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm. While the non-statutory approach taken by the UK Government means that there will not be any new appeal rights in respect of AI-informed or AI-based decisions, the UK Government says it expects sectoral regulators to clarify "existing routes to contestability and redress, and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate".

In response to criticisms about the lack of binding force in its proposal, the UK Government proposes, in an unhurried fashion, to legislate a duty on regulators to take these 5 principles into account "[f]ollowing a period of non-statutory implementation, and when parliamentary time allows".

To help 'all boats rise with the tide', the UK Government proposes a range of 'central functions' to educate, support, and monitor the AI activities of the sectoral agencies.

The UK Government says that the proposed centralised monitoring functions ('the eyes', as the Ada Lovelace Institute describes them) are "at the heart of our iterative approach". The monitoring functions are to:

  • promote consistent application of the 5 principles across regulators, although the UK Government notes that "[s]ome variation across regulators' approaches to implementation is to be expected and encouraged, given the context-based approach that we are taking".
  • identify barriers sectoral regulators are facing in implementing the principles, which could be a lack of power requiring legislative change or a lack of capability requiring training and resources.
  • identify issues that fall into the gaps between sectoral regulators' remits or that do not have an obvious regulatory home.

There would also be 'cross-cutting' central functions to mitigate regulatory fragmentation:

  • building sandboxes to test compliance with the requirements of more than one sectoral regulator.
  • maintaining a cross-sectoral register of risks.
  • helping AI innovators "to navigate regulatory complexity", essentially providing an escalation point where individual sectoral regulators are acting inconsistently or are troublesome.

Finally, the central functions would undertake 'horizon scanning' to monitor emerging trends and opportunities in AI development to ensure that the framework can respond to them effectively.

In response to criticism that the central functions would be toothless, the UK Government has said that it will put more resources into those functions, such as the AI Safety Institute. But other than the modest adjustment of eventually legislating the 5 principles, the UK Government remains firmly committed to what it describes as a "deliberately agile and iterative", non-statutory approach.

The House of Lords approach

The Artificial Intelligence (Regulation) Bill [HL] is a private member’s bill proposed by Lord Holmes of Richmond. Essentially, it takes the UK Government’s proposed decentralised approach but puts an independent statutory authority at the centre of the web (to 'provide the missing hands', as the Ada Lovelace Institute would put it).

The Bill:

  • encodes the UK Government’s 5 principles in statute but also adds some interesting additional values:
    • AI must meet the needs of those from lower socio-economic groups, older people, and disabled people; and
    • AI must generate data that are findable, accessible, interoperable, and reusable.
  • gives the AI Authority the central functions in the UK Government’s proposal but also includes setting up and administering an accreditation scheme for independent AI auditors.
  • requires AI developers to supply the AI Authority with information about the third-party data used in training and an assurance that IP consents have been obtained.
  • requires developers to provide "clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance".
  • requires any business that develops, deploys or uses AI to allow independent third parties accredited by the AI Authority to audit its processes and systems.
  • provides for fines and penalties for non-compliance.

Read more: A pro-innovation approach to AI regulation

""