11/07/2018

Artificial Intelligence (AI) is already transforming the way businesses operate and the way we work. Industries from financial services to education are using AI to gain competitive advantage, drive efficiencies, improve service delivery, create or enhance products, and solve long-standing problems within their industries.

While AI can take many forms, including chatbots and driverless cars, at its core, it refers to technology that mimics characteristics of human intelligence in performing tasks. This includes machine learning software that learns from the data it receives and keeps refining its outputs over time.
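As a concrete (and deliberately simplified) illustration of that feedback loop, the sketch below, written in Python using the scikit-learn library with entirely synthetic data, trains a classifier incrementally so that its predictions are refined as each new batch of data arrives:

```python
# A minimal sketch of 'learning from data over time': an online classifier
# refined with each new batch. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

for batch in range(5):
    # Each batch of new 'experience': 2 features and a binary outcome.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    # partial_fit updates the existing model rather than retraining from scratch.
    model.partial_fit(X, y, classes=np.array([0, 1]))
    print(f"after batch {batch + 1}: accuracy on latest batch {model.score(X, y):.2f}")
```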

But the increased adoption of AI raises some critical business and legal questions, including how we assess business risk and liability. After all, what happens when AI is involved in an accident or causes damage? Who should be responsible, or how should responsibility be shared, among the following parties: the business, the manufacturer, the retailer, the AI software developer, the consumer or the person controlling it, the different data providers, or someone else (such as the AI system itself)?

How AI will affect liability

One of the challenges of assessing business liability and risk when AI fails is that courts assess liability and damages based on legal precedent. This means that AI-based systems will inevitably be judged by applying legal concepts and assumptions built around human involvement, and case law that predates the technology. For example, common law claims of negligence involve traditional human concepts of fault, negligence, knowledge, causation, reasonableness and foreseeability. So what are some of the issues that arise when human judgment is replaced with an AI program?

The first challenge arises from one of AI's key benefits: predictive analytics, the ability of certain AI software to analyse vast quantities of data and make predictions based on that data. The sheer scale of the data sets AI can process, compared with what humans can, means that arguably far more things are now ‘reasonably foreseeable’ to the growing number of companies that use AI to make strategic decisions. Potentially, this dramatically increases the scope of what a company may be liable for.
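To give a rough sense of the scale at play, the following sketch (Python; the records and the scoring rule are synthetic stand-ins for a real trained model) scores a million records in moments, surfacing far more individual risks than any manual review could:

```python
# Illustrative only: a toy risk model scoring far more records than a human
# reviewer ever could, flagging incidents that are arguably now 'foreseeable'.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature matrix: 1,000,000 customer records, 3 risk factors.
records = rng.random((1_000_000, 3))

# A simple weighted score standing in for a trained predictive model.
weights = np.array([0.5, 0.3, 0.2])
risk_scores = records @ weights

flagged = np.flatnonzero(risk_scores > 0.9)
print(f"{flagged.size} records flagged as high risk out of {records.shape[0]}")
```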

The second challenge is that the appropriate standard of ‘reasonable foreseeability’ will become even harder for humans to judge due to the nature of AI. In the past, ‘reasonable foreseeability’ was judged against the objective standard of the ‘reasonable (human) person’. However, the increasing use of AI promises to change this standard to what a company in the same industry, with similar experience, expertise and technology, would reasonably foresee. This raises two problems. Firstly, predictive analytics relies heavily on the breadth and size of data sets too large for humans to process, which means it will be difficult for humans to judge what is ‘reasonably foreseeable’ for a given piece of AI software. Secondly, AI predictions depend entirely on the data the software receives. This means that, unless two companies obtain exactly the same AI software and feed it exactly the same data, even competitors with the same AI technology and the same markets may be acting on wildly different information.
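A minimal sketch of that second point, assuming two hypothetical companies training the same model class on different (synthetic) data drawn from the same market, shows how easily their predictions can diverge for an identical new case:

```python
# Two 'competitors' train the identical model class on different data;
# their predictions diverge. Synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_data(bias):
    X = rng.normal(size=(500, 2))
    # Each company's data reflects its own customers (a different 'bias').
    y = (X[:, 0] + bias * X[:, 1] > 0).astype(int)
    return X, y

company_a = LogisticRegression().fit(*make_data(bias=1.0))
company_b = LogisticRegression().fit(*make_data(bias=-1.0))

new_case = np.array([[0.5, 0.8]])
print("Company A predicts:", company_a.predict(new_case)[0])
print("Company B predicts:", company_b.predict(new_case)[0])
```

In this toy setup each model reflects only the data it was given, so the same question yields opposite answers: the software is identical, but the information each company acts on is not.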

Last, but certainly not least, these factors combine to broaden the net of who may be legally liable. For many years, companies have been able to reduce their liability by arguing that other parties contributed to the loss or harm suffered. With the introduction of AI, this scope could broaden to include not only the supplier of the AI software, but also the many different providers of data and connectivity. Since most companies use a mix of data collected internally and data sourced from third parties, and supply chains are already complex, the question of who bears legal liability becomes more complicated still.

What next?

Businesses have always had to look ahead and account for risks and potential liability. The introduction of AI merely changes the way we need to think about what may be reasonably foreseeable and the type and scope of risks to guard against.

It is not yet clear how courts will take these challenges into account when determining a company’s liability. It may be that regulatory regimes will emerge in Australia and overseas to address such issues. Either way, with companies integrating AI into nearly every facet of their business, from the manufacturing of goods and the provision of services to customer-facing operations, it is not a question of if AI will become a factor in allegations of negligence, product liability or professional liability, but when.

Written by Albert Yuen and lawyer Erica Chan
