10/02/2020

The question of how to control artificial intelligence has haunted us for thousands of years. From early Greek myths of artificial women to the birth of modern science fiction with Mary Shelley’s Frankenstein, we’ve always been fascinated by stories exploring the inherent tension between innovation and control, as well as the ever-present question of ethics. And as AI inexorably turns from fiction to reality, it’s a challenge we increasingly need to face.

There’s no question that we need to regulate AI. Even big tech companies are calling for regulation: the recent World Economic Forum in Davos saw Google’s CEO Sundar Pichai supporting “sensible regulation”. And while some see these statements as self-serving, it’s equally true that few would argue with Pichai’s statement in the Financial Times: “[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”

It’s a potent question, and given we’ve been grappling with it since the era of ancient Greece, it’s tempting to look at the breakneck development of AI on the one hand and the snail’s pace of the law on the other and despair. How can it ever catch up? How can we use laws written and developed for the 19th and 20th centuries to harness AI that’s already making it impossible to distinguish between what’s real and what’s a deepfake, or algorithms that are besting cardiologists at detecting heart disease?

Yet if there’s another lesson to be learnt from science fiction, it’s that no matter how advanced our technology gets, humanity still finds itself facing the same challenges. Indeed, in the Human Rights and Technology Discussion Paper launched by Australia’s Human Rights Commission late last year, commissioner Edward Santow remarked: “Sometimes it’s said that the world of new technology is unregulated space; that we need to dream up entirely new rules for this era… [t]he challenge is that AI can cause old problems – like unlawful discrimination – to appear in new forms.”

Are new rules needed?

Unlawful discrimination is a powerful example of how adapting to AI doesn’t necessarily mean rewriting our laws. At a federal level, a web of six different pieces of legislation makes it unlawful to discriminate – either directly or indirectly through a rule or requirement – against people in areas such as employment because of factors such as age, race, disability, and gender. Importantly, the legislation is drafted broadly, which means it has the potential to cover common examples of AI discrimination, such as predictive systems designed to assist judges in sentencing displaying “an unnerving propensity for racial discrimination”, or Amazon belatedly discovering that its recruitment algorithm downgraded resumes that included the word ‘women’s’.

Similarly, the recent landmark case of Australian Competition and Consumer Commission v Trivago N.V. [2020] FCA 16 (ACCC v Trivago) shows that existing misleading and deceptive conduct laws can apply to AI, provided there is enough transparency as to how the algorithm works. The case saw Trivago’s claims that it found the cheapest rates for hotel rooms challenged, with the Federal Court finding those claims were misleading and deceptive. The judgment was based on evidence given by computer scientists that, while the algorithm did factor in the lowest price when recommending hotel rooms, price was just one of many inputs into the recommendation, with the result that the cheapest hotel room appeared in Trivago’s top position in only 33.2% of its listings.
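
To see why a “cheapest rates” claim can mislead even when price genuinely feeds into the ranking, consider the following minimal sketch in Python. It is a hypothetical illustration only – the weights, inputs and numbers are invented, not Trivago’s actual algorithm – but it shows how, when other inputs carry more weight than price, the cheapest offer will often fail to reach the top position:

# Hypothetical multi-factor ranking, loosely illustrating the kind of
# system described in ACCC v Trivago. All weights and inputs here are
# invented for illustration; this is not Trivago's actual algorithm.
def rank_offers(offers):
    """Score each offer on several inputs; price is only one of them."""
    def score(offer):
        return (
            0.3 * (1 / offer["price"])      # cheaper scores higher, but...
            + 0.5 * offer["fee_paid"]       # ...other inputs can dominate
            + 0.2 * offer["click_through"]
        )
    return sorted(offers, key=score, reverse=True)

offers = [
    {"site": "A", "price": 100, "fee_paid": 0.9, "click_through": 0.6},
    {"site": "B", "price": 80,  "fee_paid": 0.2, "click_through": 0.4},  # cheapest
]

# Site A tops the list even though site B is cheaper, mirroring why the
# Court found the "cheapest rates" claim misleading.
print([o["site"] for o in rank_offers(offers)])  # prints ['A', 'B']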

In a more complex example, there has been much discussion of the implications of AI for legal liability and business risk. These discussions highlight issues such as how to judge AI against the standards of a ‘reasonable (human) person’, a key part of establishing negligence, and how the current product liability framework only covers defects that exist at the time the product is sold – a framework that must evolve to cover AI products that are constantly learning and being updated. Yet even in these cases, it’s clear that there’s no need to throw away all the laws developed up to this point and start afresh to properly regulate AI. Indeed, the landmark recommendations from the European Union’s Expert Group on Liability and New Technologies were to amend existing legislation and legal concepts such as duty of care, rather than to start from scratch.

What these stories of biased AI, misleading and deceptive conduct and reviews of liability show us is that the foundational legal framework is there to reactively regulate AI when something goes wrong. Yes, some of these laws may need amendments to bring them up to date or to make them technology neutral, but they already exist, and we don’t need to “dream up entirely new rules”, as Edward Santow put it. The real challenge lies in proactive and preventative control: how to implement and enforce regulation that prevents AI from going wrong in the first place.

Governments facing the challenge

Governments are already forming their own positions. In January this year, the White House released a set of guidelines for how it wants federal agencies to approach regulating AI. The guidelines are so broad they’ve been accused of being laissez-faire, and the document openly warns against over-regulation stifling innovation. Singapore launched its National AI Strategy in November last year, and its Model AI Governance Framework (a second version of which was released last month) specifically eschews laying down rules on the principles of ethical AI or discussing legal liability, in favour of assisting companies to responsibly develop and use AI and “demonstrate reasonable efforts” to align their policies with relevant data protection laws. Closer to home, the Australian government is developing an AI Ethics Framework and has released a set of eight overarching principles to help guide companies in designing and using AI. So far, these examples share a hesitancy to lay down hard laws regulating the development and use of AI, for fear of over-regulating and stifling innovation.

On the other end of the spectrum, in Europe, EU Commission President Ursula von der Leyen has promised to develop AI legislation similar to the EU’s sweeping General Data Protection Regulation (GDPR) on privacy. Given that the GDPR is widely considered the world’s strongest data protection law, this suggests the EU is planning a very different approach from that of the US. That said, a recently leaked draft of an EU white paper suggests the EU is still considering a less comprehensive, risk-based approach that may include targeted amendments to existing legislation and a voluntary labelling framework for AI developers.

Nothing new under the sun

Whichever path the EU or other countries take as they move toward implementing firmer proactive regulation of AI, it’s important to recognise that the challenge of regulating to protect communities without stifling innovation isn’t a new one: it’s just another form of the obstacle every government or judge faces when drafting a new law or considering a case. Yes, we’ll need to consider new questions, such as which areas of AI to regulate (just the development of the software? Or all the way down to how companies collect the data sets they use to train AI?). But while we shouldn’t understate how rapidly AI is developing and how much it will impact our lives, it’s also not helpful to be so paralysed by the extent of that change that we fail to recognise two things: we have an existing legal foundation to work from, and we’ve faced these challenges before each time we’ve regulated anything, from manufacturing to food standards.

There’s no doubt that artificial intelligence will change the way we live. What it won’t do, however, is require a paradigm shift in existing laws or fundamentally change the challenges we’ve always faced with regulation. And in that, we can draw once more on ancient Greek mythology in the story of Icarus: provided lawmakers don’t stray too close to the sea of over-regulation and get their wings wet, or fly too high into the sun of boundless innovation and melt their wings altogether, we will navigate a path to the regulation we need to appropriately govern AI while also reaping its benefits.

Authors: Melissa Fai and Erica Chan
