29/05/2023

With the European Parliament close to passing the proposed Artificial Intelligence Act (AI Act), attention is turning to its implementation. The Ada Lovelace Institute recently released a discussion paper on civil society participation in the development of the AI standards contemplated by the AI Act, which will be crucial to ‘filling in the details’.

While the European Commission’s avowed approach is for the AI Act to set the global benchmark for AI regulation, the Lovelace discussion paper cautions policymakers outside the EU about the feasibility of using the product safety law model to regulate AI. The discussion paper’s criticisms of the AI Act are telling, but there is also a risk of ‘throwing out the baby with the bath water’, to which we will return at the end.

The clash of cultures between policy wonks and computer nerds

The AI Act represents the European Union’s attempt to set – for itself and globally – a comprehensive, single framework to regulate AI across the economy. At the core of the AI Act is the protection of fundamental rights of individuals and groups with the ultimate aim of ensuring AI ‘increas[es] human well-being’.

The AI Act takes the approach of framing broad principles and concepts for how AI is to be designed and used. But as the Lovelace discussion paper points out:

“Ambiguous instructions for software design can ‘conflict deeply with [. . .] [a] computer scientist’s mindset’, which relies on precision and clarity. This may make it difficult for AI providers – people or entities who develop AI or have AI developed for use under their name or trademark – to interpret and operationalise essential requirements, resulting in insufficient protections for fundamental rights and other public interests.”

The AI Act assumes that the detail can be filled in by standards and codes developed by technical standards bodies. But the Lovelace discussion paper says this approach gives rise to the following challenges:

“This is seemingly based on the assumption that standards development bodies are equipped to grapple with questions about human rights and other public interests implicated by the AI Act. However, standards development bodies typically rely on employees of large technology companies for their outputs and see minimal participation by civil society organisations and other stakeholders. This situation also creates the possibility that decisions will be made in companies’ best interests, even when they conflict with the public interest.”

‘If you are heading to Dublin, I wouldn’t start here’

A big part of the problem is that the European Commission has tried to shoehorn the AI Act into the model used for European product safety laws, which is the ‘constitutional hook’ for legislating on an EU-wide basis rather than leaving AI regulation to the individual Member States.

The New Legislative Framework (or NLF) which applies to product safety laws limits legislation to defining “the results to be attained, or the hazards to be dealt with, [without] specify[ing] the technical solutions for doing so”. A legal presumption of conformity with the broad legislative requirements arises where a provider follows an industry-developed standard that the Commission has officially cited. If no standard has been developed, providers can devise their own technical solution ‘in accordance with general engineering or scientific knowledge laid down in engineering and scientific literature’.

The Lovelace discussion paper has a harsh assessment of the AI Act:

  • ‘While the AI Act is structured as an NLF law, it diverges from EU institutions’ characterisations of the NLF in several consequential ways…. Essential requirements in the AI Act are ambiguous, potentially leaving them open to interpretation by [standards organisations]’. The Lovelace discussion paper gives examples: the overall level of risk to fundamental rights and health and safety following a risk mitigation process must be ‘acceptable’; training datasets must be assembled using ‘relevant design choices’; and high-risk systems must exhibit an ‘appropriate level of accuracy, robustness and cybersecurity’.
  • The main guidance provided by the AI Act to standards-making bodies is that standards should reflect the ‘state of the art’. However, the Lovelace discussion paper points out the circularity of this: the reason AI regulation is said to be needed is that there is no consensus around how AI should be designed and behave.
  • More fundamentally, the Lovelace discussion paper questions whether a product safety model is appropriate for regulating AI to protect fundamental rights. It is one thing for the European Parliament to specify black-and-white safety tolerances, such as the maximum exposure time for hazardous chemicals, but “human rights law is far less amenable to quantification, and far more open to interpretation, than safety standards.”

Making the best of a flawed approach

The Lovelace discussion paper urges EU policymakers to “explore strategies to boost civil society participation, while also exploring institutional innovations to fill the regulatory gap.” It makes a number of recommendations:

  • The range of and balance between stakeholders involved in the standards-making committees should be expanded beyond the traditional approach. The AI Act requires consultation with the ‘usual suspects’ involved in making standards about consumer goods, including groups representing consumer rights, workers’ rights, environmental interests and SMEs. Stakeholders should be expanded to include civil society representatives with expertise in human rights law and public policy. There also should be a balance between commercial interests and other stakeholders on standards committees: “[w]hereas a lone [consumer] representative may be unable to influence a working group dominated by industry representatives who have voting rights, a coalition of civil society representatives may be more successful.”
  • Civil society groups face high barriers to meaningful participation in standards-making processes. The Lovelace discussion paper recommends that the funding which the AI Act provides for stakeholder participation should be expanded to cover the wider set of stakeholders it recommends. Otherwise, there is a risk that the concerns of the funded civil society groups, such as the environmental problems of energy consumption by AI, will overwhelm other civil society concerns, such as the role of AI in education.
  • The EU also should establish a civil society hub to support non-commercial inputs to AI standards-making. The hub could provide technical expertise to enable those without technical backgrounds to better understand standards’ contents. This could be along the lines of the AI Standards Hub set up by the Alan Turing Institute.

The AI Act empowers the European Commission, if it considers industry standards do not adequately protect fundamental rights, to make its own common specifications instead. While this could provide a more structured, open process for civil society input, the Lovelace paper also cautions that the Commission process involves decision-making by political appointees and civil servants, and that the review of common specifications by Member States is usually conducted by non-elected officials from trade and finance ministries.

Conclusion

Our take-outs from the Lovelace discussion paper are:

  • like much legislation today dealing with the regulation of complex, dynamic areas, the AI Act frames its requirements in very broad terms, and then places the burden squarely on AI developers and businesses to demonstrate that their AI complies with these broadly framed principles, with potentially significant consequences for ‘getting it wrong’ (or wrong in the regulator’s ex post analysis).
  • yet at the same time, the legislative process is unlikely to be able to keep pace with AI development. The AI Act itself determines the extent of developer responsibility by categorising AI systems into risk levels based on their function (e.g. AI systems used to influence voters in political campaigns are ‘high risk’). But new generative AI can be used across the whole economy, and often for purposes beyond the contemplation of the developers. Inevitably, for regulation to keep up, some more flexible model combining a legislative framework with a more agile delegated rule-making process seems necessary.
  • this then comes to the larger point being made by the Lovelace discussion paper. Given the fundamental nature of the rights to be protected and the likely ubiquitous and invasive role of AI in our lives, a higher level of democratic control is needed than delegating rule-making to industry, bureaucracy or a regulatory agency operating at a distance from the democratic process. While we can reliably leave the definition of objective standards about exploding toys to bureaucrats and engineers, regulating AI is about who we are and who we will become.

Lastly, turning to the question of whether the rest of the world should follow the EU’s lead, the AI Act, for all its flaws, will be hard to ignore for a very practical reason. Most AI will be developed and deployed to operate across national borders: obviously by Big Tech, but the same is also true of start-ups and smaller developers in the AI ecosystem. The AI governance task facing boards and management is complex and uncertain enough without wildly different regulatory regimes to stay on top of. The AI Act, like the GDPR before it, is likely at least to sketch the outlines of AI laws outside the EU, especially in smaller economies.

Read more: Discussion paper: Inclusive AI governance