13/06/2023

"Update: The EU Parliament passed its update to the draft EU AI Act on 14 June 2023. The Act will now be subject to Trialogue negotiations between the EU Parliament, the EU Council and European Commission

The Australian Government has released a Discussion Paper (Supporting responsible AI: discussion paper) seeking industry feedback to inform the government’s approach to regulating AI (whether through mandatory or voluntary mechanisms) to help support responsible AI practices and mitigate potential risks.

It is great to see this level of federal government engagement. However, the Discussion Paper illustrates the size of the task ahead.

Among the questions posed by the Discussion Paper are: whether there are potential risks from AI that are not covered by Australia’s existing regulatory framework; whether there is support for a risk-based approach to regulating potential AI risks, or whether there is a better approach; and whether any high-risk AI applications should be banned completely.

These are big questions, and they come in the context of increased public concern around the potential of AI, particularly in light of recent publicity around generative AI such as ChatGPT and other large language models (LLMs) and multimodal foundation models (MFMs), together with the diverging approaches to regulating AI being explored by jurisdictions around the world. In the European Union, the EU Parliament is set to vote later this month on its amendments to the draft EU AI Act, a new, centralised AI regulation with specific requirements and assurance processes that apply to AI systems under a risk-based approach. Similarly, a new, general, risk-based approach has been proposed in Canada.

Conversely, the UK announced earlier this year its proposal for a “pro-innovation, light touch” approach, leveraging existing regulators to regulate the use of AI in their sectors, supported by a central coordination and oversight function. Other jurisdictions have proposed use-case-specific regulation, such as the US Food and Drug Administration in relation to AI in medical devices, or the New York City bias audit law, which focuses on algorithmic bias in recruitment. Mandatory vs non-mandatory, centralised vs sector-based, general vs application-specific – these diverging approaches raise the question of how Australia should approach the regulation of AI, and what influence international approaches will have.

Implicit in this is the need to carefully and thoughtfully place the needle on the spectrum between promoting innovation and protecting against harm. If we get this wrong, then Australia’s innovation and productivity agenda is at risk, which is an increasing concern as we move towards a decarbonised world.

Australia’s existing AI regulatory landscape

First, while Australia has voluntary, high-level ethical AI principles (modelled on the OECD’s AI Principles) and no AI-specific regulation, it would be wrong to suggest that Australia has no laws that regulate AI. The regulatory approach in Australia has been technology-neutral, and there is a multitude of technology-neutral laws of general application which are increasingly being viewed through an AI lens. These laws play a role in regulating essentially all stages of the AI lifecycle, from data, design and development through to deployment. They include privacy laws; cyber and data protection laws; anti-discrimination laws; tort law; product liability laws; consumer and competition laws; copyright and other intellectual property laws; and criminal laws, to name a few, together with duties of confidentiality and obligations imposed through contract.

Further, there are laws that apply to particular sectors or organisations and their officers, such as corporations laws and directors’ duties; financial services laws; and administrative and other laws applying to government entities; as well as laws that apply to particular AI use cases, such as surveillance laws, health and medical device laws and various mandatory product standards.

Consequently, Australia already has a reasonably robust regulatory framework that applies to AI, and regulators are increasingly turning to this existing regulatory toolkit to govern and regulate AI.

Notwithstanding that comment, the arrival of AI is challenging this existing legal framework in two main ways:

  • AI, with its capabilities and shortcomings, is inherently different from other technologies, and brings to the fore the potential for new harms and risks that we previously did not need to regulate; and
  • the exponential speed of AI development, and the step change that we are in the midst of, mean that organisations currently lack the capacity, capability, guidance and tools to safely and confidently deploy AI or to manage their existing regulatory obligations in the AI context.

What is so unique about AI?

There are a few key aspects of AI which mean that:

  • our technology-neutral approach to laws and regulation, which seeks to address the outcome rather than the means by which that outcome is achieved; and
  • our existing approach to organisational governance of technology projects and associated risk management and quality assurance, 

are exposed to some potential gaps, and traditional legal principles are being challenged in new ways.

For example:

  • The dynamic and self-learning capabilities of certain AI systems mean that AI systems may act in unintended and unprogrammed ways, creating difficulties with error identification and operational risk management, and making it more difficult to assign legal liability and accountability in some contexts within our legal system.
  • The level of sophistication and capability of some AI systems makes use cases that were previously impossible or impractical to deploy, such as mass surveillance and social scoring, a real possibility. We did not need to think too deeply about regulating some of these issues previously, but now they warrant consideration.
  • While we have discrimination laws that regulate AI, these only cover certain protected attributes. The risk of AI systems entrenching and amplifying bias within our society through the use of biased training data is a new issue that requires new thinking, not least because this bias can be based on attributes that are not currently regulated. We are not suggesting that discrimination laws need to be completely recut and that wheel reinvented, but we agree with the Discussion Paper that we need to turn our collective minds to whether widespread use of AI will allow society to progressively evolve to remove inappropriate bias, and indeed how we can use AI to accelerate this evolution, rather than the opposite.
  • The often poorly understood tendency of generative AI to hallucinate is an obvious risk that has already tripped up many organisations, and a clear example of the failings of existing risk and quality assurance frameworks and of the lack of organisational capacity and capability to govern AI use.
  • The use of AI to implement social engineering and influence human decision-making, or to take over activities traditionally performed by humans, raises questions around principles of human autonomy and self-agency and the impact that AI may have on our society.

These issues can be exacerbated and amplified by the opacity of machine learning systems: the inability of humans to understand, explain and challenge how an AI model reaches its outputs can undermine legal principles such as transparency, explainability and contestability in decision-making. And the speed and scale at which AI operates mean that the risks of AI are often systemic rather than localised.

Further, and as noted above, the rate at which AI has developed, and continues to develop, means that the traditional lag in legal and regulatory development, and in organisational capability and capacity, is more pronounced. This has affected both public trust in AI and business confidence in investing in and using AI.

This is a real problem, because AI has the ability to bring huge benefits to society and the economy. We need to create a governance and regulatory framework, and a broader economy-wide capability, that allows organisations to confidently and safely harness the opportunities of AI.

How to approach regulating AI

A focus of the Discussion Paper is what additional governance mechanisms (whether regulation, standards, tools, frameworks, principles or business practices) are needed to support the responsible development and use of AI.

As highlighted in the Discussion Paper, the range of contexts and purposes for which AI can be used may often necessitate context-specific responses to regulating AI. As was recently discussed in a paper on the UK’s proposed approach, a key advantage of a sector-led approach is that the risks associated with AI systems, and their resulting impact, often depend on the AI technique used and/or the context in which it is deployed. For example, the type and level of risk and harm associated with a machine vision system used in cancer diagnosis are different from those of a machine vision system used in logistics. A sector-based approach gives existing regulators the flexibility to address and manage the impact of the use of AI in their sector, including in response to new technological developments and specific AI use cases, applying their contextual expertise to consider any gaps in existing sector frameworks and impose the appropriate level of scrutiny.

One example of this kind of gap analysis by Australian regulators is the work done by the National Transport Commission in relation to autonomous vehicles, where the NTC found more than 700 barriers to the deployment of autonomous vehicles in Australian laws, which are designed around vehicles having a human driver.

However, a sector-led approach could also lead to less oversight of general-purpose AI that impacts multiple sectors, or to inconsistencies in approach that create regulatory complexity and inefficiency, particularly where AI solutions are used across sectors.

Does this lead to the conclusion that we need centralised, AI-specific regulation, on the basis that it could be seen to be more efficient and comprehensive? We are not so sure. As has been evident from the slow pace of negotiation of the draft EU AI Act, there are difficulties in defining AI, in assigning and future-proofing general risk categories, and therefore in determining which AI systems should be regulated or banned, and in what circumstances.

There is also the risk that too much focus on the technology means the laws need to be constantly updated to address new AI techniques and applications as they are developed. You can see this in the way the draft EU AI Act has recently been updated to specifically address foundation models and general-purpose AI. Creating AI-specific regulation may also create duplication and overlap with existing general and sector-specific regulation, resulting in a complicated compliance burden for organisations, which may ultimately lead to less compliance and ineffective regulation, whilst also stifling innovation. And this is before you grapple with the question of why a certain act is only an issue if the outcome is generated using AI, and not through a manual or human-based process.

While a risk-based approach to AI is sensible, in the main we think this can largely be achieved without general AI-specific regulation, which risks being unnecessarily rigid and inflexible and cutting across existing laws. Rather, we see a sector-led approach, similar to that being proposed in the UK and, in part, in the EU with respect to regulated products, as key: existing regulators are charged with giving organisations within their respective fields guidance to assist them to implement responsible and ethical AI practices and to mitigate key risks, without creating an unnecessary additional regulatory burden that may lead to lower compliance or hamper innovation.

Key focus areas

We think it is important to recognise that the majority of organisations involved in AI development and use are well intentioned and want to do the right thing, but lack the practical capacity, capability and guidance to ensure AI is implemented responsibly and within the bounds of their existing legal frameworks. When it comes to regulating AI, we think the focus should be on preventing or mitigating these inadvertent and unintentional, but harmful, outcomes. This could be achieved by assisting organisations to:

  • develop capacity and capability regarding AI;
  • understand and comply with existing regulatory frameworks in the context of AI, through issuing regulatory guidance; and
  • implement processes that lead to responsible AI use, such as through assurance frameworks or technical standards.

In parallel, there is a need to:

  • consider what laws may be required to prevent ill-intentioned actors from leveraging AI models for inappropriate or intentionally harmful purposes, or to mitigate the harm when they do. As posed by the Discussion Paper, there is a need for a risk-based gap analysis here, followed by thoughtful consideration of whether and how those risks and gaps should be addressed;
  • ensure that where harmful outcomes do occur, the law is able to respond adequately, so that there is appropriate accountability across the AI supply chain and redress or rectification is available. Again, there is gap analysis work to be done; and
  • cross-check our existing set of technology-neutral laws to ensure that there are no loopholes or cracks that allow AI implementations to escape the regulatory intent. Yes, another gap analysis.

For example, a potential gap of a general nature relates to our product liability laws and what is known as the ‘state of the art’ or ‘development risk’ defence. Under the Australian Consumer Law, a manufacturer of an AI system is strictly liable to compensate consumers for personal injury and property damage caused by a ‘safety defect’ in the AI system, that is, where the goods are not as safe as persons are entitled to expect (Australian Consumer Law, Part 3-5). In an AI context, this could include a defect in the design, model, source data or manufacturing of the AI system, or a failure to adequately test the system, including to address issues such as bias or to make it sufficiently secure against cybersecurity attacks. However, under the development risk defence, a manufacturer can disclaim liability if it establishes that the state of scientific or technical knowledge at the time the goods were supplied was such that the manufacturer was unable to discover the defect. In other jurisdictions, such as the EU, the defence has been called into question in the context of AI, as it could be invoked by a manufacturer of an AI system where the AI outputs that caused the harm were unpredicted or resulted from the AI system’s self-learning.

Ultimately, engaging with these types of issues requires thoughtful and nuanced analysis and sectoral and industry expertise, involving multiple disciplines, including policy, technology, legal and civil society. Regulators themselves need the budget, capacity and capability to undertake this exercise, and industry needs to support this.

We also see the benefit of introducing centralised mechanisms to ensure consistency and coordination in approach across different sectors, including in response to general-purpose AI. These regulatory functions can be partnered with sandboxes and assurance frameworks, which can assist businesses in the governance and mitigation of AI risks in the design, development, testing and implementation of AI. For example, the NSW Government has introduced the NSW Government AI Assurance Framework to assist NSW government agencies (and suppliers to NSW Government agencies) to design, build and use AI-enabled products and solutions appropriately. International standards organisations, such as NIST (see, for example, the NIST AI Risk Management Framework) and ISO (see, for example, ISO/IEC 23894:2023), are also developing standards and frameworks to assist organisations with AI risk management. In this regard, risks associated with AI could be regulated in a similar manner to privacy or cybersecurity, where industry-based approaches such as technical standards or third-party certification and audits are used to support outcomes-based regulation.

Ultimately, however, whatever approach is adopted in Australia will have to factor in, and have some alignment with, international approaches, given the global nature of AI and the extra-territorial application of other proposed regimes, such as those in the EU and Canada.

Next steps

Here at G+T we are working with organisations to help them understand their legal and ethical obligations when it comes to AI development and use, and to implement appropriate governance processes to identify and mitigate potential risks. Contact our Technology and Digital lawyers for consultation on the regulation of AI in Australia.

Consultation on the Discussion Paper is open until 26 July 2023. This is an important opportunity for industry to help shape the future of AI regulation in Australia.

 

""