07/05/2021

Common mental health disorders are rising globally: the World Health Organisation estimates that between 2005 and 2015 the total number of people living with depression increased by 18.4%, and the number living with anxiety disorders by 14.9%. A recent paper sponsored by the Alan Turing Institute considers the health, ethical and practical issues involved in using AI to detect, diagnose and treat mental illness in settings outside the formal health sector, such as by employers in the workplace or by finance providers to identify vulnerable borrowers.

So-called 'digital psychiatry' can have advantages over the white-coated human variety. A UK study found that the use of virtual human interviewers increased the disclosure of mental health symptoms among active-duty service members, who may otherwise be unwilling to seek support due to perceived stigmatisation.

But there are also risks with digital psychiatry. Self-disclosure of suicidal thoughts in an app or social media post can serve a therapeutic purpose for some patients, but an ill-timed or misjudged automated intervention by an AI scanning those posts could cause unintended harm by failing to respect a patient’s perceived boundaries of privacy.

Use of digital psychiatry by employers

Studies have shown how unobtrusive sensors (e.g. computer logging, facial expressions, posture and physiology) can be used to collect employees’ behavioural data and to train automatic classifiers that infer anxiety or stress.
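
Purely as an illustration of the general technique (and not a reproduction of any study’s actual pipeline), the sketch below trains a simple classifier on synthetic 'behavioural sensor' features; the feature names, data and thresholds are all invented for demonstration.

```python
# Illustrative sketch only: a simple "stress" classifier trained on synthetic
# behavioural features. No real employee data or specific study pipeline is implied.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500

# Synthetic feature matrix: one row per observation window of an employee's day.
X = np.column_stack([
    rng.normal(200, 40, n),   # keystrokes per minute (computer logging)
    rng.normal(0.5, 0.2, n),  # negative facial-expression score
    rng.normal(0.3, 0.1, n),  # posture variance (fidgeting/slumping proxy)
    rng.normal(70, 10, n),    # mean heart rate (physiology)
])
# Synthetic label: "stressed" windows loosely tied to expression and heart rate.
y = ((X[:, 1] > 0.6) & (X[:, 3] > 72)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A real system would involve far richer features and validation, but the ingredients are the same: continuous behavioural data in, a risk label out.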

But an employer needs to tread carefully. Continuous monitoring could be construed as surveillance and ‘policing’ for compliance, and employees may perceive a risk that the collected data will be used not just to promote their well-being but also to check that they are abiding by the ‘rules’ or being ‘efficient enough’ (the AI equivalent of key-card monitoring of toilet access).

It is also possible that these new opportunities are establishing new duties of care and responsibilities to intervene (e.g. on detecting an employee who is suffering from high levels of occupational stress and anxiety). This does not mean that digital psychiatry tools should not be used in the workplace because the employer may end up ‘finding out too much’. Rather, it will be hard for an employer to discharge its duty of care to a vulnerable employee without ethical and legal frameworks that provide the necessary guidance and support. Those processes need to be there in the modern workplace, whether digital psychiatry tools are used or not.

Use of digital psychiatry in the finance industry

Mental health disorders can lead to serious financial difficulties and further exacerbate an individual’s level of suffering.

Financial regulators are moving to impose a duty of care on financial service providers towards vulnerable consumers. This duty tends to encourage proactive intervention, rather than just reactive support, for customers who choose to disclose information about their mental health.

The study comments that, in the search for proactive tools, financial services companies will increasingly look at whether AI can be used to detect problematic behavioural patterns in transaction data before difficulties arise. For example, the UK’s Financial Conduct Authority (FCA) notes that one firm uses speech analytics software to parse calls for “triggers or clues to vulnerability, such as mention of illness, treatment, diagnosis, depression”.
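
The FCA does not describe how that firm’s software works, but purely as a toy illustration of the general idea, a naive keyword-based ‘trigger’ flag over call transcripts might look something like this (the trigger list and transcripts are invented):

```python
# Toy illustration of keyword-based "trigger" flagging over call transcripts.
# Real speech-analytics tools work on audio and are far more sophisticated.
import re

VULNERABILITY_TRIGGERS = [
    "illness", "treatment", "diagnosis", "depression",
    "can't cope", "bereavement", "hospital",
]

def flag_vulnerability(transcript: str) -> list[str]:
    """Return the trigger phrases found in a call transcript, if any."""
    text = transcript.lower()
    return [t for t in VULNERABILITY_TRIGGERS if re.search(re.escape(t), text)]

calls = {
    "call-001": "I've missed payments since my depression diagnosis last year.",
    "call-002": "I'd like to update my direct debit details, please.",
}

for call_id, transcript in calls.items():
    hits = flag_vulnerability(transcript)
    if hits:
        print(f"{call_id}: route to specialist support (matched: {', '.join(hits)})")
```

Even this toy version shows how a single phrase in a call could route a customer to specialist support.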

Use of digital psychiatry on social media platforms

Because many people who are suicidal are unknown to healthcare professionals, some have begun to ask what role social media can play in detecting suicidal ideation and delivering targeted prevention. The study notes that the ability to operate at scale is what makes digital psychiatry so enticing for deployment on social media platforms.

The study commented that:

While there are many positives to social media companies using their size and influence for good in this way, there are also serious ethical concerns related to the use of digital psychiatry by social media platforms that need urgent consideration. For example, there is a lack of transparency regarding how risk assessment tools are developed and operate, leading to diminished trust in the ability of social media platforms to use such tools in an ethical and safe manner.

The online wellness industry

While wellness apps have traditionally focused on physical wellbeing (your 10,000 steps per day), apps are now also being marketed as capable of assessing your ‘mood’.

Much of this is unregulated:

an app that claims to prevent, diagnose or treat a specific disease is likely to be considered a medical device and to attract regulatory scrutiny, whereas one that promises to 'boost mood' or provide 'coaching' might not.

Beyond the risks of self-diagnosis, the study also considered that the lack of regulation of mental health wellness apps is enabling a boom in unlicensed therapists.

The risks of digital psychiatry

The study concludes that the use of digital psychiatry outside of formal healthcare raises a number of possible risks.

First, there is the risk of ‘epidemiological inflation’: well-intentioned tools that seek to identify mental health issues could contribute to a rise in the reported prevalence of those issues within the population.

Second, there is a risk that digital psychiatry apps can be viewed as ‘silver bullets’, capable of cutting through existing socioeconomic complexities and creating entirely new methods of care. The study says that, given the nuance and complexity of mental health, digital psychiatry apps need to be used within clear ethical, legal and professional frameworks. While these frameworks are well established in formal healthcare systems, they may fail to translate outside their original domain, i.e. into the workplace or into loan assessment processes.

Third, mental health is not a ‘statistical exercise’:

[t]he diagnosis of a mental illness... reflects a judgement, albeit often imperfect, that individuals have become different from their ‘normal’ selves in some fairly recognizable way.

The study concludes that, in the design and use of digital psychiatry apps, it is crucial that the judgement as to whether individuals have become “different from their ‘normal’ selves” remains the responsibility of a human agent.

Fourth, there are significant privacy and autonomy issues involved. Outside a clinical setting, what right does an employer or service provider have to intervene without a person requesting assistance? The study comments:

If digital psychiatry can deliver earlier screening and improved access, it is certainly worth further attention, as it may improve the sort of ‘just-in-time’ interventions that could save lives through more efficient prioritizing of resources, or help reduce unnecessary distress in sectors like financial services or education. While this may justify the use of risk assessment tools, it is important to stress that their use may also lead to some loss of privacy for individual users, due to necessary data collection.


Read more - Digital Psychiatry: Risks and Opportunities for Public Health and Well-Being

""

""