11/06/2021

The adoption of AI around the world continues to increase, with the AI industry estimated to contribute AU$22.17 trillion to the global economy by 2030. AI’s potential to benefit society is immense; however, its use and deployment, particularly in certain sectors (e.g. healthcare, transport), carries a high level of risk. Notwithstanding this risk, regulation of the development and use of AI has to date been slow, relying on voluntary ethics guidelines rather than laws (for example, the Australian Government’s AI Ethics Framework). Effective and safe use of AI will undoubtedly require a coordinated global effort, and the first step towards a mandatory legal framework for AI in Europe has arrived.

On 21 April 2021, the European Commission released its proposal for 'harmonised rules on artificial intelligence' (Proposal), the first proposed mandatory legal framework for AI globally. If made into law, compliance with its requirements for the development and use of the types of AI systems it covers will be compulsory.

This article outlines the key features of the Proposal, with a particular focus on the risk-based approach of the proposed regulatory scheme, and considers how the Proposal may impact Australia's current stance on regulating AI. 

Who does the Proposal apply to?

The Proposal targets the ‘provider’ of an AI system, being the natural or legal person who takes responsibility for placing an AI system on the market (regardless of whether that person developed the system). The Proposal applies to providers irrespective of whether they are established within the EU or in another country, so long as the output produced by the AI system is used in the EU. Any distributor, importer, third party or user will be considered a provider if they place an AI system on the market or put it into service under their own name or trademark, or if they modify the intended purpose of, or make a substantial modification to, an AI system already on the market.

The Proposal also applies to the ‘user’ of an AI system, being a natural or legal person, public authority, agency or other body under whose authority the AI system is operated, where the user is located in the EU or the output produced by the system is used in the EU.

Importers, distributors and manufacturers of AI systems also need to comply with certain requirements under the Proposal.

What types of AI systems does the Proposal regulate?

All AI systems fall within the scope of the Proposal, with ‘AI system’ defined broadly to mean software that is developed with machine learning approaches, logic- and knowledge-based approaches, or statistical approaches, and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environment it interacts with.

The Proposal takes a risk-based approach, imposing different requirements on uses of AI according to whether they create an unacceptable risk, a high risk, or a low or minimal risk to health, safety and fundamental human rights.

AI systems that are high-risk will be specifically regulated, while AI systems whose use is considered unacceptable will be prohibited.

For non-high-risk AI systems, the Proposal establishes a framework for codes of conduct, intended to encourage providers of such systems to voluntarily comply with the requirements that apply to providers of high-risk AI systems. Codes of conduct may also include voluntary commitments by providers in relation to, for example, environmental sustainability and diversity of development teams.
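To make the tiered structure concrete, the short Python sketch below models the Proposal's risk categories as a simple classification. It is purely illustrative: the tier names mirror the Proposal's risk-based approach as described in this article, but the classification logic, the example use cases and every identifier in the code are our own assumptions, not anything the Proposal prescribes (the actual classification is a legal test, not a lookup table).

```python
from enum import Enum

# Illustrative sketch only: tier names follow the Proposal's risk-based
# approach, but this mapping is an assumption, not the legal test.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "mandatory requirements and conformity assessment"
    LIMITED = "specific transparency obligations"
    MINIMAL = "voluntary codes of conduct encouraged"

# Hypothetical intended uses, loosely based on examples in this article.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(intended_use: str) -> str:
    """Return the (illustrative) regulatory consequence for an intended use."""
    tier = EXAMPLE_USES.get(intended_use, RiskTier.MINIMAL)
    return f"{intended_use}: {tier.name} risk -> {tier.value}"

for use in EXAMPLE_USES:
    print(obligations_for(use))
```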

Prohibited AI systems

The Proposal expressly prohibits certain AI practices that pose an unacceptable risk, including AI systems that deploy subliminal techniques to influence people, or that exploit the vulnerabilities of specific groups (for example, due to age or disability), in order to materially distort their behaviour.

It also prohibits AI systems used by public authorities to create a ‘social score’ of individuals based on known or predicted personal or personality characteristics, where this score can lead to detrimental treatment of individuals unrelated to the context in which data was originally collected, or disproportionate to their social behaviour.

Finally, the use of ‘real-time’ biometric identification systems in publicly accessible spaces (including ‘facial recognition’) is generally prohibited, with exceptions for law enforcement pursuing suspected criminals, searching for victims of crime (including missing children), or preventing a threat to life or a terrorist attack.

What is a “high-risk” AI system?

The Proposal generally classifies AI systems as “high-risk” if, in light of their intended use, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence.

If an AI system’s intended use falls into any of the following categories, it will be considered “high risk”: 

  • Biometric identification and categorisation of individuals;
  • Management and operation of critical infrastructure - including AI systems used as safety components in managing and operating road traffic and supply of water, gas, heating and electricity;
  • Education – including AI systems used to determine access of persons to educational or vocational training, or to grade exams;
  • Employment – including AI systems used in recruitment and promotion decisions;
  • Access to essential private and public services – including AI systems used to evaluate credit scores or the creditworthiness of individuals, or to establish priority in the receipt of government benefits or in the dispatch of emergency services;
  • Law enforcement - including AI systems intended to be used by law enforcement authorities to detect ‘deep fakes’, to evaluate the risk of a person offending or reoffending, or to determine the reliability of evidence;
  • Migration and border control - including AI systems used to verify the authenticity of documents, or to assess the risk (including security risk) posed by persons intending to enter a territory; and
  • Administration of justice - AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

Additionally, the Proposal distinguishes between stand-alone AI systems and AI systems that are safety components of products (such as AI systems used in cars or medical devices). An AI system will be “high-risk” if it is intended to be used as a safety component of a product, or is itself a product, covered by existing EU product safety legislation, where that product is required to undergo a “conformity assessment” by an independent third party under that legislation.

For the stand-alone AI systems set out in the list above, a new compliance and enforcement system will be established, which, according to the Explanatory Memorandum to the Proposal, is set to include a comprehensive ex-ante conformity assessment through internal checks, combined with strong ex-post enforcement. This is seen as an appropriate solution for regulating these types of AI systems, given the early phase of regulatory intervention and the fact that the AI sector is novel and auditing expertise in this area is still developing.

AI systems developed or used exclusively for military purposes are specifically excluded from the scope of the Proposal, as this comes within the remit of other EU policies and treaties.

What are the key requirements of “high-risk” AI systems?

The Proposal includes key requirements for providers of “high-risk” AI systems, such as:

  • Data Quality: As the consistency and accuracy of outcomes generated by an AI system are determined by the quality of the data it receives, the Proposal requires training, validation and testing data sets to be subject to appropriate data governance and management practices, and to be relevant, representative, free of errors and complete, taking into account the characteristics of the AI system’s intended use case.
  • Transparency: As the opacity of certain AI systems makes them incomprehensible to individuals, specific information and instructions must accompany high-risk AI systems, designed to ensure users can appropriately utilise AI systems and understand their outputs. Such information should include the characteristics, capabilities and limitations of the AI system, including its intended purpose, its level of accuracy and the risks associated with its misuse.
  • Record Keeping: High-risk AI systems must be designed and developed so that they can automatically record events (logs) while the system is operating, allowing providers to monitor the operation of the system throughout its lifecycle and to assess situations that may result in the AI system presenting a risk or leading to a substantial modification of its operation (a minimal logging sketch follows this list).
  • Technical Documentation, Conformity and Registration: Providers must draw up technical documentation, demonstrate the high-risk AI system’s conformity with regulatory requirements and register the AI system in an EU database before putting the AI system into service.
  • Human Oversight: High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by humans while the system is in use. Humans should be able to:
    • fully understand the capacities and limitations of the system and monitor its operation, so that unexpected performance can be detected and addressed;
    • correctly interpret the system’s output;
    • remain aware of the tendency to rely or over-rely on the output produced by the system;
    • decide not to use the system or otherwise disregard, override or reverse the output of the system; and
    • intervene in the operation of the system or interrupt and stop its operation.
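As a purely illustrative example of the record-keeping requirement above, the sketch below shows one way a provider might automatically log events during a high-risk AI system's operation. The Proposal does not mandate any particular logging format, library or field names, so everything here (the field names, the event types, the `log_event` helper) is an assumption, chosen only to show the idea of structured, timestamped records that can support later ex-post review.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the Proposal requires automatic event logging over a
# high-risk system's lifecycle but does not prescribe a format; the
# structure below is an assumption.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_event(system_id: str, event_type: str, details: dict) -> None:
    """Record a timestamped, structured event for later auditing."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "prediction", "human_override", "error"
        "details": details,
    }
    logger.info(json.dumps(record))

# Hypothetical example: logging a prediction and a human override of it.
log_event("credit-model-v2", "prediction", {"applicant": "A-1041", "score": 0.31})
log_event("credit-model-v2", "human_override", {"applicant": "A-1041", "by": "loan officer"})
```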

Transparency obligations for certain AI systems

The Proposal subjects certain AI systems (whether high-risk or not) to specific transparency obligations. Transparency obligations will apply where AI systems interact with humans, are used to detect emotions or determine association with (social) categories based on biometric data, or generate or manipulate content (‘deep fakes’).  

Specifically, the Proposal requires individuals to be notified when they are ‘interacting with an AI system’, for example chatbots, or when they are exposed to an emotion recognition system or a biometric categorisation system. The Proposal also specifically requires ‘deep fakes’ (where an AI system has been used to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic) to be labelled as artificially created or manipulated.
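By way of illustration only, the notification and labelling obligations could be satisfied with mechanisms as simple as the following sketch. The Proposal specifies the obligations rather than their implementation, so the disclosure wording, the label text and the helper functions below are all assumptions.

```python
# Illustrative sketch: the Proposal requires that people be told they are
# interacting with an AI system and that 'deep fakes' be labelled, but it
# leaves the wording and mechanism to the provider.

AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def start_chat_session(send) -> None:
    """Open a chat session with the disclosure shown before any interaction."""
    send(AI_DISCLOSURE)

def label_generated_media(caption: str) -> str:
    """Prepend an artificial-content label to generated or manipulated media."""
    return f"[Artificially generated or manipulated content] {caption}"

start_chat_session(print)
print(label_generated_media("Mayor announces new policy"))
```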

Is mandatory regulation of AI the right approach?

According to the EU Commission in the explanatory memorandum to the Proposal, AI should be a “tool for people and a force for good in society with the ultimate aim of increasing human wellbeing”, and robust regulation is needed to safeguard this vision. While the regulation of AI systems must be adequate to ensure safety, the EU Commission also intends for the Proposal to be “innovation-friendly, future-proof and resilient to disruption”.

The Proposal seeks to encourage Member States to create regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight, before the systems are placed on the market or put into service. The Proposal particularly promotes innovation of small-scale providers and users, for example by requiring Member States to give small-scale AI providers priority access to AI regulatory sandboxes, organise awareness-raising activities about the application of the Proposal, and establish dedicated channels of communication with small-scale providers and users. These measures are intended to give small-scale providers and users more time to ensure compatibility between their AI systems or operations and the Proposal.

Some critics have raised concerns about the Proposal, citing both its potential to limit the development of AI and its continued allowance of ‘problematic’ AI practices. Certain NGOs, such as Access Now and the Civil Liberties Union for Europe, believe the Proposal provides too much leeway for AI to be used in ways that pose risks to human rights; for example, its allowance of biometric surveillance for border control and police enforcement. The Proposal’s focus on, and regulation of, a limited range of AI uses is also viewed by some groups as problematic, providing too much scope for companies to self-regulate.

On the other hand, industry groups, such as the Center for Data Innovation, have warned that the regulation could cripple potential growth for the EU’s nascent AI industry, causing it to fall behind the US and China. In 2020, the White House warned Europe to avoid overregulating AI to prevent rivals in less restrictive jurisdictions from gaining a competitive advantage. Google also previously cautioned the EU Commission against a new AI framework, similarly citing its adverse effects on innovation and competitiveness for European small and medium businesses. At the same time, the Computer & Communications Industry Association, while expressing support for the risk-based approach adopted in the Proposal, cautioned against “unnecessary red tape” and stated that “regulation alone will not make the EU a leader in AI”.

As AI develops and increasingly impacts our daily lives, a delicate regulatory balance needs to be struck to foster innovation without compromising the safety and rights of individuals. Requirements such as human oversight in the Proposal clearly prioritise safety in the operation of high-risk AI systems. Retaining human control over AI systems could limit the potential uses of AI in the future, but the balance here is very fine, and it is understandable that a cautious, safety-first approach is being proposed.

Why this matters in Australia

We note that the Proposal is still at a very early stage. To become EU law, it needs to pass through the EU Parliament and the EU Council, which can take years. However, its introduction marks a significant moment on the global stage in the approach to AI regulation.

Here in Australia, the Government is currently ‘exploring what the focus of Australian AI policy should be in the future’. In May 2021, the Australian Government announced a $124.1 million investment in AI initiatives as part of Australia’s ‘digital economy’ strategy. This follows the Discussion Paper released by the Australian Government Department of Industry, Science, Energy and Resources in October 2020, calling for views on an ‘AI Action Plan for all Australians’. Notably, the AI Action Plan focuses on industry investment in and adoption of AI in Australia; it does not concern AI regulation.

Currently, the extent of any regulation surrounding AI in Australia is limited to the voluntary AI Ethics Framework, which comprises eight AI Ethics Principles and is still being developed to include further guidance for organisations applying those principles. The framework is currently being piloted with a small number of organisations. Given that Australia is still consulting on and testing the AI Ethics Framework, a mandatory code akin to the Proposal is likely still quite far off.

This makes the EU’s Proposal all the more relevant for Australia. The Proposal has the potential to apply to private and public sector organisations around the world wherever the relevant AI system produces an output that is used in the EU. Given that potential global reach, and that the Proposal is the ‘first of its kind’ in the AI space, it could shape the development and regulation of AI systems in modern society, including in Australia. The Proposal also represents an important development for stakeholders in the global AI industry, including regulators, developers, manufacturers and businesses seeking to deploy AI systems, particularly in high-risk sectors. Additionally, the EU’s position as a global leader in privacy reform (e.g. the GDPR) suggests these new developments are likely to influence the regulatory stances taken on these issues around the world, including in Australia.

The cross-border nature of AI means consistent and coordinated regulation is ideal for encouraging broader investment in and uptake of AI technology, and fully realising the benefits that AI can provide to society.

""