09/01/2023

The human brain is the body’s very own supercomputer. It contains approximately 100 billion neurons and is estimated to process around 11 million bits of information each second. Neurotechnology is a rapidly developing field of technology that is attempting to harness that powerful and complex organ. With that growth in “mind control” comes a narrowing of the border between humans and machines.

What is neurotechnology and what are brain-computer interfaces?

Neurotechnology refers to any technology that provides greater insight into, or control over, the activity of the brain or nervous system. One example of currently available neurotechnology is the brain-computer interface (BCI). BCIs vary widely in form, use and level of invasiveness but, in short, they establish a direct communication channel between a brain and an external device, such as a computer, prosthetic limb or mobility system. In some cases, they can be used to rehabilitate patients with debilitating physical conditions, such as muscular disorders and paralysis, by re-establishing neural connections. In 2021, the global BCI market was valued at US$1.52 billion, and it is anticipated to grow at an annual rate of around 17% over the next few years.

While the potential for this technology is vast, there are also complex ethical, medical and legal issues that still need to be considered. In this article, we unpack some of the key legal and ethical issues that will need to be addressed with the rise of this technology.

How have developments in AI and machine learning affected BCI technology?

BCIs are not new (the first human trials were conducted in the 1990s), but in recent years there has been a growing interplay between BCIs and new technologies such as machine learning and artificial intelligence (AI). One of the main issues with BCIs is that they generally require training from the user before they can be successfully used. This is where AI and machine learning offer great potential.

Most studies into the effectiveness of BCIs have focused on the recovery of motor or physical ability – using brain signals to communicate with, manipulate and control devices such as artificial limbs, cochlear implants, communication aids or other external machines.

However, the real growth lies in the use of AI to extend BCIs beyond motor control into cognitive ability. Although this technology is still in its infancy, notable neurotechnology developers have recently announced that they intend to begin, or in some instances have already begun, human trials of invasive BCIs incorporating AI cognitive training functions.

Technologies such as AI and machine learning mean that extensive volumes of neural data from a user can now be decoded and interpreted in real time to make predictions about future behaviour. For example, data such as pulse amplitudes and durations, stimulation frequencies, device energy consumption, density information and the electrical properties of neural tissue can now be fed to AI algorithms, which identify useful patterns in the data and translate them into functional outcomes.
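To make that decoding step concrete, the sketch below shows, in Python using scikit-learn, how hand-crafted signal features of the kind listed above might be fed to a standard classifier that predicts an intended action. The feature set, labels and synthetic data are purely illustrative assumptions; real BCI pipelines are considerably more elaborate.

```python
# A minimal, hypothetical sketch of a BCI decoding step: hand-crafted
# neural features are fed to an off-the-shelf classifier that predicts
# an intended action. Feature names, labels and synthetic data are
# illustrative assumptions, not a real BCI pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data: each row is one time window of recorded
# features (e.g. pulse amplitude, pulse duration, stimulation frequency,
# spike density). A real system would stream these from the implant.
n = 500
X = rng.normal(size=(n, 4))
# Toy rule standing in for real neural structure: the "intent" depends
# on a combination of amplitude and density.
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 0 = rest, 1 = move

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# "Real-time" prediction on a new window of features.
new_window = rng.normal(size=(1, 4))
print("predicted intent:", "move" if model.predict(new_window)[0] else "rest")
```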

In some instances, BCIs using AI technology are even capable of writing to the brain through direct stimulation of particular areas. For example, a BCI may detect that a user feels depressed and then autonomously stimulate the part of the brain responsible for positive thoughts to improve the user’s mood.
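The “write” pathway can be sketched as a closed loop: the device estimates an affective state from decoded features and, if a threshold is crossed, autonomously triggers stimulation. Everything in the sketch below – the names, the threshold, the StimulationCommand structure – is a hypothetical assumption for illustration only.

```python
# A schematic, hypothetical closed loop for the "write" pathway described
# above. All names, thresholds and values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StimulationCommand:
    target_region: str   # brain area to stimulate
    amplitude_ma: float  # stimulation amplitude in milliamps
    duration_ms: int     # pulse duration in milliseconds

def estimate_mood(decoded_features: dict) -> float:
    """Stand-in for a learned affect model; returns a score in [-1, 1]."""
    return decoded_features.get("valence", 0.0)

def closed_loop_step(decoded_features: dict) -> Optional[StimulationCommand]:
    """One read-decide-write iteration of the hypothetical closed loop."""
    mood = estimate_mood(decoded_features)
    if mood < -0.5:  # the device judges the user's mood to be low
        return StimulationCommand("reward_circuit", amplitude_ma=1.0, duration_ms=200)
    return None  # no autonomous intervention

print(closed_loop_step({"valence": -0.8}))  # low mood: stimulation triggered
print(closed_loop_step({"valence": 0.2}))   # neutral mood: no action
```

The point of the sketch is that the decision to intervene sits with the algorithm, not the user – which is precisely where the legal questions below begin.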

What are some of the legal implications of brain-computer interfaces utilising AI and machine learning?

As technology with this capacity no longer exists exclusively in the realm of science fiction, it is timely to consider some of the legal implications that could arise from the application of AI to human cognition.

Mental autonomy and agency

BCI technology raises questions about the boundaries of mental autonomy and agency, which poses a significant challenge to the way our legal system currently operates. In a BCI / AI world, abilities that might once have been considered specific to human intelligence, such as reasoning, deduction and learning from past experience, could become merged with the abilities of AI.

By way of example, in our system of criminal law, many offences require proof of both physical elements and fault elements in order to attribute criminal responsibility to an individual. Fault or mental elements (“mens rea”) refer to an individual’s state of mind, including intention, knowledge, recklessness and negligence. In a BCI / AI world, it may become difficult to determine whether a choice of action, or even a perception, was the consequence of an algorithm or of the individual themselves. Dr Allan McCay, Dr Nicole Vincent and Dr Thomas Nadelhoffer, of Sydney Law School, UTS and the College of Charleston (US) respectively, consider in their book Neurointerventions and the Law: Regulating Human Mental Capacity the question that arises if a person with a BCI commits a crime: what would the ‘criminal act’ be, and how could that act be solely or appropriately attributed to the individual?

Capacity, competence and informed consent

A related issue is that of legal capacity and the ability to give consent or provide evidence. Once a device is implanted in a person’s brain, navigating and delineating the limits of the BCI’s influence on the user may be difficult. For example, can the giving of consent be challenged when it is obtained from an individual via a BCI that is controlled by an AI algorithm? If consent was duly obtained prior to implantation, is it possible (and how easy or difficult might it be) to revoke that consent as the BCI’s machine learning abilities become more sophisticated? How will an individual’s capacity to give consent be evaluated for the purposes of establishing legal competency? If an individual would otherwise be assessed as suffering from a mental impairment that limits their ability to provide informed consent, can this be overcome through use of a BCI? In effect, how much legal reliance should be placed on the “neurotech voice”, and can it be trusted to truly reflect the user’s intent?

Evidence

In a similar vein, what is the evidentiary value of data collected by a BCI? Can it be used as a quasi “lie detector” to show intent or to establish movements and actions? The reliability of such evidence would have to be established, and questions would arise about how it is interpreted. The debate itself is not new: the admissibility of lie-detector evidence and brain scans has been contested for some time. The interplay with any right to silence would also need to be considered.

Privacy

BCIs pose a significant risk to individuals’ privacy. Some neurotechnology developers have likened BCI devices to “a Fitbit in your skull”. The volume of data collected in real time from an implanted BCI will grow continuously over time. In other contexts where data is collected, an individual can often exercise control over the categories and quantity of data they disclose to a third party, or opt out of certain data collection processes. A BCI device, by contrast, holds the potential to collect personal data directly, within specific (albeit sophisticated) limits, without an individual’s knowledge.

The growing proliferation of health information (which is “sensitive information” under the Privacy Act 1988 (Cth)), and its use and exploitation by businesses, is hotly debated, and there is some uncertainty as to whether this form of neurological data would be sufficiently protected under current Australian privacy legislation.

Algorithmic bias

The use of AI and machine learning in BCIs raises legal considerations of algorithmic bias. Algorithms already harness large volumes of training data to influence people’s decision-making. Bias in algorithms can emanate from incomplete, unequal or unrepresentative training data and, if left unchecked, may lead to AI and machine learning systems being imbued with gender, racial and other biases in application. In the context of BCIs, this could result in biased algorithms detrimentally influencing a person’s cognitive functions, as the sketch below illustrates. For more on this discussion, see: A Human Rights Approach to New Technology.
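The mechanism can be shown with a toy experiment: a decoder trained on data dominated by one group of users generalises poorly to an underrepresented group whose signals differ. Everything here – the groups, the synthetic signal shift, the model – is an illustrative assumption, not a claim about any real BCI dataset.

```python
# A toy, hypothetical illustration of algorithmic bias from
# unrepresentative training data: a decoder trained mostly on one
# group of users performs far worse on an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Same underlying intent rule for both groups, but a
    # group-specific offset in the recorded signal.
    X = rng.normal(size=(n, 3)) + shift
    y = (X[:, 0] - shift > 0).astype(int)
    return X, y

# Training data dominated by group A (shift=0); group B (shift=2) is rare.
Xa, ya = make_group(950, 0.0)
Xb, yb = make_group(50, 2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets for each group.
Xa_test, ya_test = make_group(500, 0.0)
Xb_test, yb_test = make_group(500, 2.0)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

On this synthetic data the majority group is decoded accurately while accuracy for the underrepresented group falls toward chance – a benign classification error here, but a far more serious matter when the output feeds back into a person’s cognition.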

Freedom of thought and human rights

The use of AI raises human rights issues generally, but nowhere more acutely than in the context of BCIs. The ability to control both physical and cognitive abilities creates enormous potential for the abuse of power. Freedom of thought, freedom of conscience and freedom of movement are generally accepted as among the most basic human rights. Although they are enshrined in instruments such as the Charter of Fundamental Rights of the European Union, that is not universally the case. The potential for misuse of BCIs by the State (or other actors) is already prompting calls for clearer protection of those liberties.

What’s next for brain-computer interfaces?

Neurotechnology brings enormous potential, not least the capacity to transform the lives of people with disabilities. Given the rapid pace of advances, BCIs are likely to become a part of daily life over time. The substantial benefits of the technology, however, will need to be balanced against its risks. Technology developers, researchers, policymakers and regulators will need to come together to ensure that limits and safeguards are put in place without stifling the enormous opportunity the technology presents.

Authors: Lesley Sutton, Stephanie Essey, Sophie Bogard, Molly Allen