You wake up in the morning and think about having bagels for breakfast. As soon as you visualise the bagels in your head, your mobile sends you a notification: ‘craving detected: there’s a local cafe selling your favourite salmon bagel for $10’. You opt to have it delivered for an extra fee and, within a few minutes, you are enjoying your bagel. All of this has happened so quickly thanks to a little chip inserted in your brain and the data it has collected about you.
It may sound like science fiction, but this is just one of the countless applications that neurotechnology could bring us in the future (or even, possibly, now). How we regulate these technologies is a question many countries, international organisations and think tanks are grappling with.
What is Neurotechnology and what is the problem?
Neurotechnology, at its core, seeks to leverage the vast capabilities of the human brain. It aims to understand how the brain operates so that, in time, it can influence the brain’s functions. With recent advancements in neuroscience and rapid improvements in artificial intelligence, it is increasingly possible to ‘brain-read’ one’s thoughts, influence decision-making, and decipher and control what was previously a wholly private and autonomous domain. Improving our understanding of the brain, whilst harnessing its interplay with artificial intelligence, has the potential to cognitively shape the direction of our species.
On the one hand, this opens the door to using the technology beneficially, so as to maximise human wellbeing. Looking at the global burden of disease, there is incredible potential for neurotechnology to restore function or enhance human capabilities – in fact, this is already happening with existing medical applications.
On the other hand, those in favour of slowing down to carefully regulate neurotechnology worry about the ethical, moral and societal consequences of a lack of oversight. They argue that advances in neurotechnology require parallel advances in ‘Neurorights’. That is, a human rights framework specifically aimed at protecting our brains from the potential misuse or abuse of neurotechnology – whether in relation to the data collected, privacy, safety or otherwise (see ‘Getting Your Head Around Artificial Intelligence’ on this topic).
Where is Neurotechnology at now?
A real-life instance of neurotechnology is the viral demonstration by Elon Musk’s neurotechnology company, Neuralink. Here, a coin-sized chip called ‘Link’ was surgically implanted into a nine-year-old macaque monkey called Pager. By connecting thousands of micro-threads from the chip to the neurons responsible for controlling motion, Pager successfully played a game of Pong with its mind. Musk has heralded that Neuralink will make the paralysed walk, the blind see, replay memories and eventually turn people into cyborgs – a prospect that is exciting and, at the same time, terrifying.
However, the U.S. Food and Drug Administration (FDA) believes these claims are premature. Recently, the FDA rejected Neuralink’s application to conduct human trials of its brain implant technology. The FDA cited dozens of concerns, primarily relating to whether human testing could proceed safely, including: incomplete information on the impact on the human body, the potential for implants to migrate to other parts of the brain, how the device could be removed without damaging brain tissue, risks of overheating, and a host of other issues. At the same time, Elon Musk himself, together with the Future of Life Institute, called for a pause in artificial intelligence research to jointly develop and implement a set of shared safety protocols for advanced AI design and development.
The challenge from a regulatory perspective is striking the right balance between the current and future applications of these technologies and protecting individuals’ rights, freedoms and safety.
The case for regulation – Chile’s constitutional amendment
In 2021, Chile became the first country to guarantee express protection of Neurorights at the constitutional level.
Following the unanimous approval of the Senate, Article 19 of the Political Constitution of the Republic of Chile was modified to recognise a new fundamental human right and guarantee:
“Scientific and technological development will be at the service of people and will be carried out with respect for life and physical and mental integrity. The law will regulate the requirements, conditions and restrictions for its use by people, and must especially protect brain activity, as well as the information from it;”
In addition to the constitutional change, a “neuro-protection” bill seeks to lay the groundwork for protecting the physical and psychological integrity of people through the protection of privacy, neural data and autonomy. It involves a novel interpretation of mental privacy that is intimately linked with notions of identity and individuality. It proposes to treat neural data as akin to organic tissue. This has interesting effects as the buying and selling of neural data will presumably be prohibited in the same way as the buying and selling of human organs. In doing so, it would flip the default privacy and consent regime, requiring that companies invite users to opt-in for neural data collection rather than opt-out.
Additionally, all neurotech devices will be subject to the same regulations as medical devices, even if they are intended for consumer wellness or entertainment. This would help close the loophole many have complained about in the US, where certain neurotechnology companies have sold neurotech directly to consumers so as to be regulated under less onerous consumer electronics regimes instead of by the FDA.
While many support the constitutional change, others say Chile is setting a problematic example for the world, and that its rushed regulations have not been properly thought through. Concepts such as ‘mental integrity’ and ‘neural data’ need to be clarified, critics say, because a broad definition could include data that is already being collected or outside what we want to be regulated. Others say that regulation at this stage can stifle innovation and delay access to some technology that is urgently needed.
The case for a soft law approach – The Spanish and UK Model
A different approach has been the soft-law model adopted by countries such as Spain and the United Kingdom. These are toolkits that set substantive expectations for governance and use, without legal enforceability by the government.
Spain, for example, has recently adopted the Digital Rights Charter. Whilst non-binding, the Charter serves as a reference framework to guarantee the rights of citizens in the ‘new digital reality’. It contains a section titled ‘Digital rights in the use of neurotechnologies’ and foreshadows a future regulatory framework that could cover parameters relating to identity, self-determination, sovereignty, freedom and data obtained from or related to cerebral processes.
Before jumping into regulatory intervention, Spain has noted it wishes to better understand the ethical, moral, legal and commercial questions surrounding neurotechnology. Recognising this, the Spanish Government has invested €200 million and launched the National Neurotechnology Centre. The centre aims to serve as a leading body and collaborative platform with a triple focus on scientific, medical and business development and research. Neurotechnology is expected to be one of the pillars of Spain’s scientific and economic development and Spain wants to ensure it is informed before it makes any legislative changes.
In the United Kingdom, a similar approach has been taken. Following a number of consultations, the Cabinet Office commissioned the Regulatory Horizons Council (RHC) to examine neurotechnology and make recommendations. The RHC is an independent and expert body that identifies the implications of technological innovation and provides government with impartial specialist advice on regulatory reform. In November 2022, the Council released its 82-page report, making 14 recommendations. Two key overarching suggestions underpinned its findings. These were:
- The establishment of a proportionate regulatory framework that encourages the safe commercialisation of medical neurotechnologies and addresses under-regulation concerns about non-medical neurotechnologies.
- A governance framework to address the forward-looking ethical challenges neurotechnologies may pose in the future.
International Standards and Guidance
As it currently stands, most countries do not have a specific regulatory framework to deal with neurotechnology. Even where it may be said that legislation and guidance exist indirectly to deal with questions of Neurorights, such as in the US, the effectiveness of this patchwork approach is questionable. Many countries are in an uncertain position, looking to other countries or international standards for guidance in what is an incredibly difficult field to regulate.
As a result, instead of acting prematurely and enforcing hard and fast rules, deferring to international guidance, standards and recommendations has generally been the preferred approach in most jurisdictions. This provides the opportunity to observe the development of neurotechnology until there is a better understanding of how to regulate it properly.
The OECD is one example of this. In 2019, it released the first international standard in this field, its ‘Recommendation on Responsible Innovation in Neurotechnology’. Instead of taking a prescriptive approach, the OECD highlighted a number of key principles that it hopes will guide governments and innovators, whilst also providing direction at each step of the innovation process. At the heart of the recommendation is the need for an international standard for responsible innovation in neurotechnology. While this is helpful and provides certain parameters on safety and development, many argue that the document fails to offer any practical suggestions.
The United Nations has also dealt with neurotechnology in a preliminary manner. In the Secretary-General’s report, it was stated that it is time to ‘update our thinking of human rights’, and neurotechnology was mentioned as a ‘frontier’ human rights issue. The UN identified that a new set of rights should be recognised with a view to protecting a ‘person’s cerebral and mental domain’, including individual mental integrity and identity. The UN has therefore signalled that Neurorights are a central issue for states to address, without providing any guidance as to how this should practically be done.
Effectively, these approaches are attempting to help neurotechnology companies self-regulate for the time-being. The question becomes – can we trust these companies to monitor themselves?
Australia’s history of Neurotechnology
Australia is no stranger to neurotechnology, with a long history of involvement in this space. One of the very first examples of using technology to medically optimise human wellbeing was the development of the cochlear implant. By surgically inserting a small electronic device into the cochlea, a part of the inner ear, damaged portions of the ear can be bypassed, which can restore hearing. Since then, over 750,000 people worldwide have had their hearing improved. Neurotechnology companies hope to provide the same benefits by influencing and controlling neural activity in the brain.
One of the leading Australian companies in the neurotechnology field is Synchron, which was founded in 2016 by Dr Tom Oxley and Professor Nicholas Opie, two researchers at the University of Melbourne. Synchron is on a mission to create an endovascular implant that can transfer information from every corner of the brain at scale, using it to treat paralysis.
The technology has shown incredible promise, evidenced by over $212 million in funding from a raft of investors, including Bill Gates and Jeff Bezos. Unlike Neuralink, Synchron’s device has already been tested in human trials, with five Australians having received the device and the first US patient receiving an implant in July 2022.
Australia’s approach and looking forward
Despite our involvement in this field, Australia has not been particularly active in the regulatory space. The Privacy Act, as it stands, was not drafted with neurotechnology in mind and does not adequately guide against or protect from it. Not having a specific regulatory framework may be a viable option for now, but as neurotechnology develops it is essential that Australia is not left behind.
The Australian Neuroethics Network has called for a nationally coordinated approach to the ethics of neurotechnology and has proposed a neuroethics framework. Despite this, there has been no substantive response or change from the government.
Neurotechnology has the potential to offer profound advancements, but it also poses important and complex questions about how we regulate the domain of the brain and the consequential legal and ethical issues concerning data, privacy, surveillance and safety. The question we ask in the future is unlikely to be the binary one of whether there should be a regulatory system in place; rather, it is the form and extent of that regulation that will be important to determine. Having the debate now is essential to ensure a robust, safe and harmonious integration of neurotechnology in the future.
Authors: Lesley Sutton and Moe Ayman