10/05/2022

The EU is proposing to adopt one of the world’s first economy-wide legislative frameworks governing AI development (the AI Act or AIA). The AIA will cover not only AI developed in the EU but also AI developed anywhere in the world which is used in the EU. As it did with its data protection law, the GDPR, the EU is explicitly positioning the AIA to become the global model for regulating AI.

However, a recent paper by the Ada Lovelace Institute cautions that although the AIA is “an excellent starting point for a holistic approach to AI regulation…there is no reason why the rest of the globe should unquestioningly follow an ambitious, yet flawed, regime held in place by the constraints…[of the EU’s policy and legislative processes]”.

The paper lays out four criticisms of the AIA and four ways to fix them.

The four problems

First, the Ada paper argues that the AIA is misshapen as a result of its misbegotten parentage in EU product safety laws. The legislative power of the EU vis-à-vis its Member States is confined to a specific set of areas, and the EU used the ‘hook’ of legislating on EU-wide product safety issues for the AIA. The result is that, as the Ada paper says:

“[the AIA] largely conceives of AI ‘providers’ as the equivalent of the manufacturers of real-world products like dishwashers or toys. For these kinds of products, it is indubitably the initial manufacturer who is the person who knows best how to make the product safe.”

Regulating AI like a ‘dishwasher’ does not work because:

  • AI is a dynamic, not a static, product: AI continues to evolve after it leaves the developer’s control, through the data it is fed by the user, what it learns from that data, and adjustments made by the human employees in the loop;
  • AI often is not produced by a single entity. This changes the question of who is in scope of legal obligations, and who should be accountable, for different parts of the AI lifecycle. Increasingly, specialist or smaller AI developers use generic or standardised inputs from large providers, such as Google and Amazon;
  • AI can be put to a range of very different uses: e.g. a facial recognition AI can be used for security work in prisons or airports, or to surveil shoppers for targeted advertising;
  • AI may not be a standalone appliance, but can be part of a much bigger platform or system.

This all points to downstream users needing to share some responsibility with the manufacturer, the reverse of the position under product safety law: people who buy and use a dishwasher need to be protected against the manufacturer, not made partly responsible with the manufacturer for how the dishwasher functions.

While the AIA tries to tinker with the product safety model of manufacturer responsibility, the Ada paper says “the Act fails to take on the work, which is admittedly difficult, of determining what the distribution of sole and joint responsibility should be contextually throughout the AI lifecycle, to protect the fundamental rights of end users most practically and completely.”

Second, the Ada paper says the AIA gives no rights to end users, which at first might seem paradoxical given its consumer product safety parentage. But product safety law treats end users as “objects which are impacted” and to be protected, not as the subjects of rights they can proactively assert, which is more the model in human rights or data protection laws. This ‘don’t worry, we know best how to protect you’ approach means that:

“[the AIA] does not consult users at the very start when providers of ‘high risk’ AI have to certify that they meet various fundamental rights requirements, even though the users will suffer potential impacts; does not give users a chance to make points when unelected industry-dominated technical bodies turn democratically made rules into the standards that actually tell companies making AI how to build it; and, most importantly, does not allow users to challenge or complain about AI systems down the line when they do go wrong and infringe their rights.”

Third, in a particularly stinging rebuke, the Ada paper says “[t]he alleged ‘risk-based’ nature of the Act is illusory and arbitrary”.

The way the AIA works is that AI designated as ‘high risk’ must meet technical and regulatory requirements before the system can be brought to market, including checking data sets for bias, using prescribed data governance processes, ensuring the ability to verify and trace back outputs throughout the system’s life cycle, and incorporating acceptable levels of transparency and explainability of the AI.

There is a hodge-podge definition of ‘high risk’ AI which covers AI used as a safety component of a product and AI covered by one of 19 specified pieces of EU single market harmonisation legislation (e.g. aviation, cars, medical devices), with a laundry list of deemed categories tacked onto the definition, including the administration of justice and asylum seeker processing.

The Ada paper says of this definitional quagmire:

"These lists are not justified by externally reviewable criteria, and thus can only be regarded as political compromises at one point in time – leaving it difficult-to-impossible to challenge the legal validity of AI systems in principle rather on point of detail…. if it is uncertain why certain systems are on the red or ‘high-risk’ lists now, it will be difficult-to-impossible to argue that new systems should be added in future."

The sting in this criticism is that AIA lacks the very transparency and explainability which it demands of AI.

Fourth, the Ada paper argues that the AIA is not ambitious enough at assessing and seeing off the risks caused by AI. While high risk AI requires an ‘essential requirements’ certification before use, there are no ex ante requirements against which to assess AI outside the high risk category, which the Ada paper says will cover most of the AI consumers encounter on a daily basis (such as search engines). The assessment requirements for high risk AI also do not sufficiently address human rights issues.

To the extent the AIA does deal with human rights, it focuses on individual rights and ignores disadvantaged groups. The Ada paper says:

“Much scholarship in the human rights domain has argued that concentrating only on individual rights – as in the conventional ECHR human rights structure – leaves crucial gaps in relation to common and minority interests, and allows structural discrimination to persist and grow. Individual rights tend to empower those who are already most empowered to exercise their rights and fail to support marginalised and socio-economically impacted communities."

The four fixes

First, the Ada paper recommends revision of the AIA to provide that providers and deployers (i.e. business users) of general-purpose AI should share responsibility for assessing its conformity with fundamental rights and with the essential safety requirements, without the need to prove that the deployer has made a ‘substantial modification’.

The Ada paper acknowledges that “a much more nuanced appraisal must be made of what duties should lie where at what point in time, and who is empowered either legally or by practical control, power or access to data and models, to make changes.” But the Ada paper says these kinds of judgments are already being made under the GDPR in relation to the roles of data controllers.

Second, those most affected by the impacts of high-risk AI systems – both individuals and groups – should have input at the time those systems are certified and in the industry standards-setting process. They should also have the right to complain about any AI to a national regulator or an EU-wide AI ombudsman, with provision for representative actions, as under the GDPR.

Third, there should be a clearer principles framework for the definition of ‘high risk’ AI and, more broadly, for other classes of AI which attract lesser sets of obligations under the AIA. Article 7 of the AIA sets out some criteria, and the Ada paper suggests these be used as a starting point – although it offers little guidance as to how the broad language could be remodelled to bring more clarity and certainty than the ‘laundry list’ approach it criticises.

Fourth, the current certification process should be renovated into a more robust ex ante impact assessment process. This should consider group and societal values as well as fundamental rights and environmental impacts, include mechanisms for user group feedback, and be made public to encourage accuracy, enable scrutiny and provide templates for other providers.

Were it not for the constraints of the EU’s legislative framework (which limit EU rulemaking to AI that presents product safety threats), the Ada paper says this amended framework should be applied on a layered basis across the full range of AI, for example through categorisation as ‘prohibited/high/limited/minimal risk’ AI.

Read more: Expert opinion: Regulating AI in Europe

""