Imagine posting on social media through thought alone, or controlling a drone by way of a brain-computer interface. Humans are already able to control the behaviour of cockroaches: a person wearing a brain-reading device sends wireless transmissions to a brain-stimulating device attached to the cockroach’s brain. So, what’s next?
A recent landmark report by Dr Allan McCay commissioned by the UK Law Society highlights current and hypothesised future uses of neurotechnology, the potential risks and the surrounding ethical and legal issues.
What is neurotechnology?
Neurotechnology is the scientific field that combines and connects electronic devices with the human nervous system, allowing them to read from and/or write to the brain.
Some neurotechnologies are invasive, requiring surgery to implant a device just under the scalp or deeper within the brain. There are also non-invasive methods external to the body, such as headsets, wristbands or helmets, but on the whole, recording electrodes yield more precise and specific readouts when placed deep inside the brain, close to the nerve cells.
In addition to electrodes which collect ‘readouts’ from the brain (e.g. to control the behaviour of those cockroaches!), there are also electrodes implanted into the brain to excite or inhibit specific nuclei, areas or fibre bundles using electric current. This makes it possible to suppress or ameliorate some symptoms of specific brain diseases, such as Parkinson’s disease, in patients for whom medication is ineffective.
In the future, the ‘read-out’ electrodes and the ‘stimulation’ electrodes could also be integrated:
“..the idea is to make electrical stimulation dependent on the actual brain activity. The latter is recorded online, thus informing the controller and allowing it to apply the stimulation in the right moment. This closes the loop: Stimulation modifies activity, activity influences stimulation. Since control must be exerted promptly and precisely, this task is typically assigned to a dedicated signal processor….The joint operation of metal and silicon in the brain, possibly enhanced by implantable wireless technology, will enable completely new applications in the future that go far beyond current possibilities.”
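The closed-loop idea described in that quotation – stimulation modifies activity, activity influences stimulation – can be sketched in a few lines. This is a toy illustration only: the signal values, threshold, gain and feedback model are invented for the example, not drawn from any real device.

```python
# Toy sketch of a closed-loop neurostimulation controller.
# All names, thresholds and signals are hypothetical.

def closed_loop_step(recorded_activity: float,
                     threshold: float = 1.0,
                     gain: float = 0.5) -> float:
    """Return a stimulation amplitude for the latest recorded activity.

    Stimulation is applied only when activity exceeds a threshold,
    and scales with how far above the threshold it is.
    """
    excess = recorded_activity - threshold
    return gain * excess if excess > 0 else 0.0

def run_loop(activity_trace):
    """Simulate the loop: each stimulation value feeds back by damping
    the next activity sample (a stand-in for 'stimulation modifies
    activity, activity influences stimulation')."""
    stim = 0.0
    outputs = []
    for sample in activity_trace:
        damped = sample - stim           # stimulation modifies activity
        stim = closed_loop_step(damped)  # activity informs stimulation
        outputs.append((damped, stim))
    return outputs
```

In a real device this logic would run on a dedicated signal processor, as the quotation notes, because the stimulation decision must be made promptly on each recorded sample.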
Neurotechnology – current and hypothesised future uses
The potential uses of neurotechnology are endless. Although some ideas seem to be based on fictional characters from movies or comic books, companies like Neuralink (co-founded by Elon Musk) are already designing a microchip to be inserted into the brain to monitor and record neural activity, and/or act to influence it. Neurotechnology could one day be used to:
- identify the precursors to disorders such as epilepsy, Alzheimer’s disease, blindness, anxiety, depression and insomnia, and stimulate the brain to avert symptoms.
- scan the brains of those experiencing suicidal ideation in the hope of predicting a suicide attempt and successfully averting it.
- read neural activity associated with mental acts (such as moving a computer cursor or kicking a football) and translate it into an equivalent command, like typing on a keyboard, to enable people experiencing paralysis (such as locked-in syndrome) to communicate.
- enhance military personnel to create ‘super soldiers’ who would have enhanced cognitive or emotional capabilities, for example the ability to control a weapon by way of neural activity.
- monitor a video game player’s brain state to work out whether they are bored and adjust the degree of difficulty of the video game in response to neural activity (as envisaged by Gabe Newell, Valve’s co-founder and president).
- decode neural activity associated with images and display them on a computer screen, or even upload the images onto social media.
- enhance workers’ mental capabilities to compete with the increasing capabilities of artificial intelligence systems.
- monitor the attention and stress levels of employees, and even transform the legal industry from the billable hour to the billable unit of attention, where lawyers bill clients based on attention-monitoring capabilities.
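Several of the uses listed above are, at heart, simple control loops over a decoded brain state. As a toy illustration of the video-game example, a difficulty controller might look like the sketch below – the engagement score, thresholds, step size and level range are all hypothetical, not anything Valve has described.

```python
# Toy sketch: adjust game difficulty from a crude decoded "engagement"
# score in [0, 1]. All thresholds and ranges are invented for illustration.

def adjust_difficulty(current_level: int, engagement: float) -> int:
    """Raise difficulty when the player looks bored (low engagement),
    lower it when they look overwhelmed (very high engagement/stress),
    clamped to levels 1-10."""
    if engagement < 0.3:        # bored: make the game harder
        current_level += 1
    elif engagement > 0.8:      # stressed: ease off
        current_level -= 1
    return max(1, min(10, current_level))
```

The interesting part is not the arithmetic but the input: unlike a keyboard, the engagement signal is produced involuntarily, which is precisely what raises the privacy questions discussed later.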
Neurotechnology holds even greater promise – and potentially menace – when combined with other technologies such as AI-driven robotics. Most users find prosthetic limbs difficult to control, partly because neuroscientists do not know how to accurately decode the signals the brain sends to nerves to control muscles. Machine learning can help: by wearing a data glove on the uninjured hand, a user can train a model to learn how their brain instructs muscles; that model can then decode the brain signals recorded by the implant to control the prosthetic arm.
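As a rough illustration of that training idea, the sketch below fits a linear mapping from a single synthetic neural feature to a glove-measured joint angle, then applies it to a new reading. Real decoders use many electrode channels and far richer models; every variable name and number here is invented for the example.

```python
# Toy sketch: train a linear decoder on glove-labelled data, then use it
# to decode a new neural reading. All data is synthetic and illustrative.

def fit_linear_decoder(neural, glove):
    """Ordinary least squares for glove ~ w * neural + b."""
    n = len(neural)
    mx = sum(neural) / n
    my = sum(glove) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(neural, glove))
    var = sum((x - mx) ** 2 for x in neural)
    w = cov / var
    b = my - w * mx
    return w, b

def decode(w, b, neural_sample):
    """Apply the trained mapping to a new neural reading,
    e.g. to drive a prosthetic joint."""
    return w * neural_sample + b

# Training phase: the data glove on the uninjured hand supplies the labels.
neural_features = [0.0, 1.0, 2.0, 3.0]      # e.g. firing rates (synthetic)
glove_angles    = [10.0, 20.0, 30.0, 40.0]  # joint angle in degrees (synthetic)
w, b = fit_linear_decoder(neural_features, glove_angles)
```

Once trained, the glove is no longer needed: the decoder runs on signals from the implant alone, which is what makes the approach useful for an injured limb.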
How complex is this issue?
Despite the excitement surrounding neurotechnologies, it is difficult to quantify the complexity that greater uptake of such technology would bring to society.
Ethical challenges to what it means to be human
It is probably a well-accepted proposition that neurotechnological interventions would be ethically unacceptable if they put a person’s very humanity at risk. In the movie Ghost in the Shell, an errant robotics lab establishes a secret project to develop an artificial body, or "shell", that can integrate a human brain rather than an AI.
But the ethical issues are much more complex than the cyborg movie genre suggests because, while the patient remains a ‘person’ after an implant, the neurotechnology may change their personality or behaviour:
“…a patient.., after having undergone [a brain implant], entered a state of euphoria such that his family could no longer recognize him as the one they knew before. The patient himself, however, felt very happy in his condition; not only were the negative symptoms of Parkinson's Disease suppressed to a large extent, but also did he just feel “happier” as a result of his stimulation-induced mania. When a decision had to be made as to whether he should be admitted to a mental institution as he could no longer live on his own, several dilemmas became apparent: In which “state” should a person be asked for his informed consent on a treatment? Should the patient be consulted before or after the stimulation in cases like the one reported above? Which of the patient's “states” qualifies him or her as “self-responsible”? But we must also take into account the family environment and the health care system: How much “alienation” must relatives accept? Should society cover the costs for hospitalization?”
Shaking the foundations of our legal system
The foundational pillars of criminal law are actus reus (criminal conduct) and mens rea (guilty mind): a guilty mind is not enough nor is a criminal act by itself.
In his report, Dr McCay notes that earlier concerns “that neuroscience is going to revolutionise criminal law by demonstrating that no-one has free will… have not eventuated” – that is, at least not yet, neurotechnology does not require a revision of criminal law theory about mens rea. Rather, Dr McCay sees the challenge of neurotechnology as the reverse: the guilty intent sits separately in the human mind, but the act is executed by the technology interfacing with the mind. Dr McCay gives the following example about revenge porn:
“a person might commit an intimate image abuse offence might be by uploading intimate images onto social media knowing that the person depicted in the images does not consent to the upload. The upload might be instigated by way of a hand controlling a mouse or trackpad, or issuing a voice command to a system such as Siri – it is noteworthy that all of these conventional ways of interacting with the virtual world involve the defendant using their system of musculature….[What if instead] a defendant [commits] the intimate image abuse offence by way of brain-computer interface…what conduct constitutes the actus reus where an offender controls a cursor by way of mental acts (such as imagining handwaves) rather than using their system of musculature, for example by using their hands to type text and move a mouse on a mousepad, as is the case in more conventional forms offending? Perhaps the law might say that the mental act of imagining the handwave is the conduct constituting the actus reus, but that could be regarded as a major step in the history of criminal law as it seems to blur an important distinction that law has thus far attempted to maintain - the distinction between the guilty mind and criminal conduct.”
To add even more complexity, Dr McCay notes that neurotechnology, like any advanced technology, carries the inevitable risk of hacking. If a hacker gains unauthorised access to data from the brain and misuses it, the person whose neuro-implant was hacked would probably not be regarded, under existing principles of criminal law, as having committed a crime through the unauthorised use of the hacked data. But what if a ‘stimulation’ implant were hacked, causing it to stimulate the brain so that the person hurts someone, acts impulsively or physically responds to a hallucination? As Dr McCay states:
“Whilst the criminalisation of hacking is not novel, the idea of hacking brains seems to have a different quality and may well require the creation of new offences… the law would have to consider how this form of hacking did or did not fit into the scope of defences such as insanity or automatism or alternatively how it fitted into existing forms of mitigation at sentencing.”
With the connection between human and machine closer than ever, it is important to highlight some of the potential risks involved with neurotechnology. The purpose of considering these risks is not to spark a “neuro-techlash” (i.e., negative sentiment towards neurotechnology companies), but rather to understand what challenges will need to be addressed as neurotechnology becomes entrenched in society.
- Privacy, data and surveillance – ‘mental privacy’ could be very different to ‘out of body’ privacy: “[m]ost brain data generated by the body’s nervous system is unconsciously created and outside a person’s control. Therefore, it is plausible that a person would unknowingly or unintentionally reveal brain data while under surveillance.” Organisations that obtain brain data could surveil the behaviour and mental states of consumers, and may even be able to predict or influence people’s behaviour.
- Individual autonomy – implanted microchips expose people to being manipulated, which could be viewed as a threat to individual autonomy and the democratic system. Dr McCay canvasses whether the State might be tempted to use neurotechnological solutions in sentencing: e.g. lower sex drives of those who are convicted of sexual offending to prevent re-offending.
- Societal division and discrimination – there may be a division between people who are neurotechnologically enhanced with greater cognitive skills and those who are not. This could not only create a societal divide, but also raise issues of equity in accessing technologies, device safety and concerns about algorithmic bias.
- Agency and identity – the line between a person’s identity and the neurotechnological device will blur. A person’s identity could be exploited without permission for self-serving purposes through the technology. If a person has an implanted neurotechnological device, are their behaviour and actions attributable to the individual or to the device?
Dr McCay considers that the challenges of neurotechnology mean that “the current human rights framework - one that has its origins in the aftermath of World War 2 - is no longer fit for purpose.”