We have all probably Googled our suspected symptoms. Even your doctor may have used Google during a consultation to confirm a diagnosis or to show you images of how an injury or disease typically progresses. This is just a taste of what is to come.
AI is transforming the medical world. Methods for treating illness have gone from physical, measurement-focused products used mainly by health professionals to a universe of digital, inference-generating software that consumers happily use on their own. From smartwatches monitoring insulin levels to search engines giving all the answers you want (or not) to your ailments, these programmed products are challenging the frameworks we have in place to regulate “medical devices”.
Australia’s Therapeutic Goods Administration (TGA) is currently consulting on which software-based products should be captured under last year’s amendments to the Therapeutic Goods Act 1989 (Cth) (Act). The amendments establish more specific classification rules for programmed or programmable devices (i.e. devices that incorporate software) and software that is in itself a medical device (i.e. SaMD).
What did the Act change?
Previously, the Act classified most medical devices incorporating software under the largely self-regulated Class I. The Act also did not account for harm caused by a medical device informing someone, as opposed to a medical device physically interacting with someone. The Act now addresses this by re-classifying software into four categories:
- software used to screen or diagnose a disease/condition;
- software used to specify or recommend treatment/intervention;
- software used to provide therapy through providing information; and
- software used to provide information for monitoring a disease/condition.
The exact Class for each type of software ranges from Class I to Class III, depending on:
- how great a risk the relevant disease/condition poses to public health;
- whether the disease/condition could lead to a person’s death or severe deterioration; and/or
- whether a health professional uses the software.
So, for example, an app that screens for skin cancer by checking your skin for signs of disease and delivering instant results on your phone will likely be classified as Class III.
While the majority of Class I medical devices are self-certified with automatic processing, higher Classes must obtain conformity assessment certification from an independent body or the TGA. This certification process imposes requirements on the device developer covering controls around how the medical device is designed and constructed, keeping and maintaining records, and managing complaints and recalls.
The Act also updates the essential principles for software, providing a more detailed list of safety and performance benchmarks. One of the new software-specific principles requires that, for any programmed or programmable device or SaMD, the data influencing the device’s performance be representative, of sufficient quality, maintained to ensure integrity, and managed to reduce bias.
How does this compare to overseas?
There are two ‘moving parts’ in regulation of SaMD:
- The threshold definition of which health-related apps are treated as medical devices and which are not – there are many well-being apps which collect health information, and you do not necessarily want a simple fitness tracker caught by the regime;
- The extent of regulation which applies once an app is treated as a medical device. This is a more complex exercise than in the traditional physical device world because, with AI, apps can do more than record and display results – they can predict, recommend and even diagnose. To add to the complexity, whereas traditional medical devices were the tools of health professionals, apps can be directed to, or used by, members of the general public without the medical training to interpret, query or reject the AI outcomes.
There is a divide between the EU and US approaches to regulation of SaMD, both on the threshold issue and, even more starkly, on the approach to regulating SaMD once brought within the medical device regulatory regime.
The EU has a more prescriptive regulatory approach, intended to protect consumers by being prescriptive about safety, while in the US the Food and Drug Administration (FDA) has more discretion in determining what software to regulate, intended to promote consumer benefit through innovation.
EU: the EU was planning to implement its new Medical Devices Regulation in May 2020 so that its regulation:
- would capture software intended to predict or prognose the risk of a disease more broadly for the general public (even if not assisting with an individual diagnosis or treatment);
- would no longer default all software to a self-regulated class I; and
- would have extra requirements on software intended for use by “lay persons” rather than health professionals, for example requiring such software to reduce, as far as possible, risk of error in interpreting results.
However, because of COVID-19, the European Commission has postponed the Regulation to May 2021.
US: Software is regulated as a medical device if either:
- the software is intended to acquire, process, or analyse medical information, which the FDA interprets as whether the software can analyse “physiological signals” for medical purposes such as diagnosis or therapeutic decision making; or
- the FDA identifies that using the software is reasonably likely to have serious adverse health consequences.
Several software categories are excluded from regulation, including software that provides medical recommendations to health professionals where the professional could have reached that recommendation on their own.
In response to concern that the bureaucratic approval process can impede innovation, the US regulatory framework is set to change, shifting further from the EU approach. The FDA is consulting on a premarket review process for AI and Machine Learning software modifications. In essence, the process will provide pre-approval for apps based on the “culture of quality and organizational excellence” - or ‘trustworthiness’ and track record - of the developer. It will include an assessment of the developer’s commitment to transparency and real-world performance monitoring, and would allow the FDA to periodically review app updates.
On a spectrum between the EU’s and the US’s approaches, Australia’s new regulatory approach is more comparable to the EU’s incoming regulation, although the Australian regime specifically varies classifications depending on whether a health professional or a consumer uses the software (which is more similar to the US approach).
But with the EU’s COVID-prompted delay in implementing its new rules, Australia will be (at least temporarily) moving ahead of the EU. As we discuss below, on the substantive rules, Australia may be going even further than the new EU rules will.
On what is the TGA consulting?
Because of industry confusion about which software is considered a medical device, the Government has asked the TGA to consult on measures to clarify which software-based products the Act now captures (the threshold definitional issue). The TGA’s consultation questions focus on which software should not be regulated, asking stakeholders:
- What software should be exempted or excluded from regulation and why?
- Would any existing software regulation, or certain software characteristics, negate the need for regulation of that software in Australia?
- Which overseas approaches could inform Australia’s approach?
The TGA’s consultation paper provides little guidance on its own preliminary thinking on these issues.
At the heart of the legislative changes for regulation of SaMD are some subtle but important shifts in the regulatory onus. The Act previously defaulted to classifying most medical devices with software under the largely self-regulated Class I. As the TGA’s consultation paper points out, the new Australian approach may result in SaMD being classified in higher categories than will be the case under the delayed new EU rules. That is, Australia will have a default classification above Class I for software posing a public health risk, while the EU’s framework only considers public health risk in exceptional circumstances. As the consultation paper says:
“The EU rules are silent on public health risk and devices that provide therapy through the provision of information so these default to Class I in the EU. Depending on the intended purpose of the software, these devices may be Class I or higher in Australia.”
The TGA’s consultation paper implies that any risks of over-regulation can be addressed through the Minister’s powers to exempt, or through regulations to exclude, SaMD. However, the paper gives little guidance on the “carve-out principles” that will sit behind these exemptions or exclusions. In the past, these tools have rarely been used. Aside from some COVID-related products, fewer than 20 types of products have been excluded, and the only exemptions in force for medical devices relate to how a product is sold (for example, whether it is custom-made, or not intended for commercial supply) rather than the product’s characteristics or purpose. There is also the added risk of a lack of co-ordination and consistency between the TGA, which is responsible for the primary regulation, and the Minister, who holds the power over exemptions and exclusions.
Given the diversity of software-based medical apps, the speed with which apps are being developed and the ease of downloading apps from out-of-jurisdiction providers, regulatory flexibility needs to be central to Australia’s approach to regulating SaMD. The FDA, for example, has a highly developed principles-based framework around when and how to exercise its exemptions discretion.
The TGA’s consultation is open until 13 May 2020. Click here if you want to have your say.
It is interesting to read the TGA’s consultation paper alongside the FDA’s consultation paper on reviewing AI and Machine Learning in SaMD.
Authors: Peter Waters and Anna Belgiorno-Nettis