The development of chemical and biological weapons seems a somewhat distant idea in 2022; far removed from the development of mustard gas as a result of an experiment gone awry in Berlin in 1913, and its later extensive use in the First World War, or the weaponisation of Brucella bacteria during the Cold War as part of the United States’ biological weapons program. Both examples rest on two common assumptions: the agents were developed or weaponised by humans following resource-intensive research, and they were developed before the creation of international treaties prohibiting the development and use of chemical, biological and toxin weapons.
A team at Collaborations Pharmaceuticals, a small pharmaceutical company based in Raleigh, North Carolina, proved that both of these assumptions are now untrue by demonstrating the capacity of AI to create a veritable ‘library of death’ in a very short space of time, using publicly available information. The findings raise important questions for the use of AI-assisted drug discovery in the future, including whether there is a need for additional regulation to keep up with the pace of technological advancement.
AI as a tool in drug research and development
AI has an increasingly important role in the creation of chemical compounds and new drugs. By automating the trial-and-error scientific method, and allowing models to learn from their mistakes much as humans do, AI-assisted drug discovery and development has the potential to make the process of creating new drugs significantly more time- and cost-efficient. For instance, a drug designed by UK firm Exscientia using AI and intended to treat obsessive-compulsive disorder was the first AI-designed drug to enter phase 1 clinical trials, reaching that phase in 12 months, compared with around five years using conventional techniques that do not use AI.
While the capacity for AI-assisted drug discovery to have a positive impact is evident, Collaborations Pharmaceuticals’ experiment proved that the opposite is also true.
Library of death
Collaborations Pharmaceuticals is a drug research company whose work involves building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. The lab’s previous research included designing a commercial de novo molecule generator named MegaSyn. MegaSyn was developed for the purpose of finding new therapeutic inhibitors of targets for human diseases, guided by AI-assisted predictions of bioactivity (in other words, predictions of whether a substance is likely to be beneficial or toxic to living tissues or cells).
In anticipation of a conference on the implications of technological and scientific developments for the Chemical and Biological Weapons Conventions, organised by the Swiss government, a team at Collaborations Pharmaceuticals sought to ‘switch’ the M.O. of their AI from ‘good’, i.e. targeting a beneficial purpose by rewarding predicted activity and penalising predicted toxicity, to ‘bad’, i.e. rewarding toxicity. The team trained the AI on molecules from a public database ordinarily used to help discover compounds to treat neurological diseases. It assessed toxicity using a model trained to predict a compound’s LD50 – the dose lethal to half of a test population – based on commonly known molecules such as pesticides, environmental toxins and drugs.
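Conceptually, the ‘switch’ the team describes amounts to flipping the sign of one term in the model’s scoring function. The toy sketch below illustrates the idea only: the function names, the numeric ‘molecules’ and the scoring are all invented stand-ins, with no real chemistry involved, and this is not Collaborations Pharmaceuticals’ actual code.

```python
import random

def predicted_toxicity(molecule):
    """Hypothetical stand-in for a learned LD50-style predictor (higher = more toxic)."""
    return sum(x * x for x in molecule)

def predicted_activity(molecule):
    """Hypothetical stand-in for a model predicting therapeutic activity."""
    return -abs(sum(molecule) - 1.0)

def score(molecule, reward_toxicity=False):
    # 'Good' mode: reward predicted activity, penalise predicted toxicity.
    # 'Bad' mode: the same function with the sign flipped on the toxicity term.
    sign = 1.0 if reward_toxicity else -1.0
    return predicted_activity(molecule) + sign * predicted_toxicity(molecule)

def generate(reward_toxicity, n_candidates=1000, seed=0):
    """Crude generator: propose random candidates, keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [[rng.uniform(-2, 2) for _ in range(3)]
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda m: score(m, reward_toxicity))

safe = generate(reward_toxicity=False)
harmful = generate(reward_toxicity=True)
print(predicted_toxicity(safe), predicted_toxicity(harmful))
```

Because both runs search the same candidate pool, the only difference is the sign of the toxicity term – which is what makes the dual-use problem so stark: no new capability is needed to turn a safety filter into a search objective.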
The AI was directed to generate compounds similar to VX – a particularly lethal nerve agent, of which only a few grains are enough to kill a human – and was then allowed to run. Importantly, the model was only provided with data and parameters: it had no access to ‘finished products’ (actual toxic compounds) and had to ‘create’ these compounds from scratch.
In just six hours, the results were shocking: the model had generated 40,000 molecules, including VX and many other known chemical warfare agents. In addition to reproducing existing toxins, the AI also, alarmingly, designed new molecules that the team indicated looked ‘plausible’ and actually appeared to be ‘more toxic…than publicly known chemical warfare agents’.
Of particular concern to the team was that the AI generated molecules beyond the toxicity range covered by the LD50 model – in other words, the AI thought its way outside the toxicity parameters the team had provided. The AI had created its own cauldron of poisons.
The Collaborations Pharmaceuticals team noted that their toxicity research created a conundrum: the better their AI model becomes at predicting toxicity in order to develop safe, beneficial drugs, the more easily that learning can be reverse engineered to create, in effect, a library of death. The team noted that in many cases, and indeed in their own, the only factor preventing misuse of the technology was a human being in the loop “with a firm moral and ethical ‘don’t-go-there’ voice to intervene”.
Beyond an internal moral compass, the team noted that there was very little regulation, and few checks and balances, in the AI-assisted drug discovery space. This is in stark contrast to the strict controls imposed on other machine learning models, such as GPT-3, a language model that produces human-like text. Upon release, GPT-3 was subject to strict controls such as waitlisting to prevent abuse of the model, and it continues to be subject to safeguards including filtering, monitoring and review processes, and a prohibition on generating certain categories of content. Importantly, the use of GPT-3 is regulated by its creator, OpenAI, rather than being subject to governmental regulation of any kind.
While the Collaborations Pharmaceuticals team noted that the scientific community must prevent and avoid the misuse of AI, they acknowledged that in going as far as they did, they too crossed a grey moral boundary by demonstrating the ease with which potentially toxic molecules can be created. The team proposed additional safeguards, such as stricter control over AI programs and procedures to report or escalate to the appropriate authorities instances where a program is being used nefariously.
The capacity of AI to facilitate both good and ill in the research and development of new drugs is apparent, and the speed and resource efficiency with which Collaborations Pharmaceuticals’ team was able to generate 40,000 toxic molecules should give the industry pause to consider the checks and balances it can impose to ensure that this ground-breaking technology is not misused.