16/06/2020

This article first appeared in the Australian Financial Review on 15 June 2020.

Artificial Intelligence’s medical potential has rarely felt more important than now. Across the world, data-driven technology is helping us respond to COVID-19. A software start-up flagged Wuhan’s increased pneumonia cases nine days before the WHO did. Tools in the pipeline include a machine learning app that compares Facebook posts with expert disease descriptions to pinpoint outbreak locations, and software that generates vast numbers of COVID-19 drug prototypes for scientists to evaluate. Australia does not want to miss out on the benefits of data-driven healthcare and clinical practice.

Yet as AI cements its importance to our world’s healthcare, the best way to regulate it is still unclear. Shortly before the pandemic hit, Australia’s proposed regulatory solution was to shoe-horn AI into the Therapeutic Goods Administration (TGA) regime for medical devices. This decision may prove to be a wrong turn if Australia is to fully realise the benefits of medical AI, including as a global innovator in a COVID-reset digital economy.

Medical AI possesses three traits that make it difficult to fit within medical device regulation.

AI does not measure. AI infers.

Many medical devices measure the body’s internal functioning, such as blood pressure, and clinicians use that output to inform their decision-making. Regulatory approval is based on accredited laboratories running tests to verify the medical device’s reliability and accuracy. Because machines now often embed software, these laboratories evaluate software reliability as well as mechanical reliability.

This is quite different from evaluating AI applications. AI is probabilistic, producing inferences rather than exact calculations. The approval process needs to evaluate the quality of the AI’s statistical inferences. But medical AI’s potential risk or benefit also depends heavily on the human decisions that happen around its output.

Therefore, an assessment of medical AI needs to apply a broader lens beyond the AI itself, evaluating how medical knowledge, protocols and decisions are likely to interpret and translate AI outputs into patient outcomes. This looks more like regulating the practice of medicine than lab testing individual medical devices in isolation.

AI/ML learns on the job.

In the growing category of machine learning AI, the output becomes more accurate the more the tool is used. AI/ML refines its inferences, ideally swamping statistical errors in the underlying dataset through sheer scale.

This continual improvement is especially challenging for regulators since there is potentially a changed tool to assess whenever data is entered. But it would be a mistake to put it in the too-hard basket by turning that learning function off. Machine learning is what helps AI augment our abilities in ways we alone cannot achieve.

Finally, AI develops at an ever-quickening pace. Regulation that cannot keep up is a cost to society: it denies people the improved prevention and treatment that medical innovation can bring. Australia’s proposed approval process for medical software takes approximately 12 months (and $100,000) per product.

What does this mean for how we should regulate medical AI?

The TGA’s proposals for regulating medical AI have prompted strong pushback from the technology’s developers and suppliers. They point to existing product liability and tort laws, which already entrench reliability, fitness for stated purpose and product conformity into any claims made about the AI, and into the reasonable expectations of its users.

Internationally, there are other approaches. The US Food & Drug Administration excludes certain ‘clinical decision support’ software from registration, software which Australia’s proposed amendments will capture. The FDA is also developing an expedited process for medical AI from “trusted” developers. But even these changes are still a variant of traditional medical device regulation, and they struggle to address AI’s differences.

Regulators need a different set of regulatory tools and, importantly, a different mindset to deal with AI. The ethical, governance and consumer challenges of apps that learn are similar across regulators, whether it is financial regulators keeping up with fintech apps, transport regulators approving self-driving cars, the ACCC investigating whether competitors’ AIs are colluding, or medical regulators weighing an AI’s risks and benefits to public health. Regulators, usually with limited AI expertise, cannot be left to fend for themselves on AI.

Another option is to fill the missing piece: AI expertise that understands how to modify regulatory processes to suit the technology. The UK provides a possible model. The UK Government established the Centre for Data Ethics and Innovation (CDEI), an independent, cross-sector expert body whose advice the Government must consider, to develop data-related policies and technical solutions that help regulators, including the UK’s TGA equivalent, reframe their AI processes and thinking.

In Australia, the Government has previously tasked the CSIRO to develop guidance on how to design, develop and use AI in a way that meets community expectations. But, as the UK has done, we need an ongoing, institutionalised and properly resourced national framework for facilitating and governing AI, one that can learn just as the apps themselves do.

To seize opportunities in how we use technology to diagnose, prevent and treat disease, we may need to reset the path set by the pre-COVID legislative changes, which directed the TGA to regulate medical AI as a medical device. We need a broader solution across the economy to ensure that all regulatory boats rise with the AI tide.

We would like to thank Peter Leonard for his insightful and generous contribution.

""