
Nightmare Fuel: Generative AI, Synthetic Biology and Implications for Biosecurity, by Brendan Walker-Munro

Generative artificial intelligence (genAI) is seemingly everywhere. Google’s Bard and Gemini, Microsoft’s Copilot and the infamous ChatGPT have joined a slew of smaller programs with applications in everything from healthcare to architecture. But the dark side of AI is not far behind: regulators and police have noticed a rise in scams fuelled by genAI, and universities are buckling under the pressure of distinguishing genuine assessment content from AI-generated fakes.

In synthetic biology – the science of designing and building artificial analogues of natural processes – the genAI threat is no less real. In 2022, MIT biosecurity researcher Professor Kevin Esvelt gave evidence to the US Senate Homeland Security and Governmental Affairs Committee, warning that if “numerous pandemic-capable viruses [could be] credibly identified and their genome sequences are shared with the world … individual terrorists will gain the ability to unleash more pandemics at once than would naturally occur in a century”.

Such concerns are far from academic. In 2019, Canada’s only high-security biological laboratory – the National Microbiology Laboratory in Winnipeg – suspended two of its researchers. Following a review of their conduct by the Canadian Security Intelligence Service (CSIS), it was alleged that one of the researchers had sent a shipment of deadly Ebola virus to the Wuhan Institute of Virology, the same Chinese laboratory at the centre of allegations about the origins of COVID-19. The CSIS report was only recently made public, detailing a litany of allegations of Chinese foreign interference in Canada’s biological research program.

Where does genAI fit into synthetic biology?

The risks of genAI in synthetic biology are threefold.

The first risk is that genAI could be used to give naturally occurring diseases – from the lethal Ebola virus right down to the common cold – new symptoms or capabilities, such as resistance to antibiotics or other medical treatments. Often called “gain of function” research, this form of study has drawn criticism and scrutiny in equal measure. In 2013, virologist Ron Fouchier – who worked on ferret models of influenza viruses – was told by a Dutch court to obtain an export control licence before publishing his studies in an open access journal.

The second risk is the creation or development of entirely new strains of disease. GenAI in this form of biology could be used to generate entire genomes or organisms, allowing “biology to be engineered as easily as software, electronics and cars”. The danger is that many jurisdictions – including Australia, the United States, the United Kingdom and the European Union – regulate diseases according to their biological taxonomy, so anything brand new and artificially created is not captured by those regulations. Given the speed of developments in the field, reliance on such legal constraints may also amount to a game of whack-a-mole regulation.
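To see how narrow a taxonomic schedule is, consider a minimal sketch in Python of list-based control. The agent names and the schedule itself are invented for illustration, not taken from any real regulation: the rule can only match what has already been named, so a newly designed organism passes untouched.

```python
# Minimal sketch of taxonomy-based regulation: a hypothetical schedule of
# controlled agents, checked by exact name. All names here are invented.
SCHEDULED_AGENTS = {
    "Ebolavirus",
    "Variola major",        # smallpox
    "Bacillus anthracis",   # anthrax
}

def is_controlled(agent_name: str) -> bool:
    """Return True only if the agent already appears on the schedule."""
    return agent_name in SCHEDULED_AGENTS

print(is_controlled("Ebolavirus"))      # True: listed, so regulated
print(is_controlled("Synthevirus-X1"))  # False: novel design slips through
```

The gap is structural: the schedule can only be amended after a new agent has been identified and named, which is exactly the whack-a-mole dynamic described above.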

The third risk is that genAI could put the technology for biological warfare in the hands of individuals rather than States. Such genAI synthetic biology programs could allow anyone to build a disease from synthetic or naturally occurring DNA. In the near future, genAI could even allow the creation of “novel toxins; protein domains, to target specific tissues within the body with toxic elements; or other harmful proteins such as prions”.

But what kind of regulation should be used? Synthetic biology is crucial to other forms of health research, and an outright ban would stifle the innovation and invention needed in hundreds of labs around the world. GenAI is already being used to build “virtual labs”, taking up no space and generating none of the hazardous waste of the real thing. Possession of a dangerous virus that could be used for biological warfare is illegal under international law – but possessing a virtual copy of the same virus is arguably not the same thing.

How to regulate the use of genAI in synthetic biology

The challenges inherent in regulating genAI use in this new science can be seen by comparison with the emergence of synthetic drugs in the 2000s: the law simply couldn’t keep up with a taxonomic list of banned substances that could be chemically tweaked to side-step the regulations. It wasn’t until lawmakers followed the US example of banning “anything structurally and pharmacologically substantially similar to a controlled substance” that synthetic drugs could finally be tackled.

Nor do export control laws generally deal with the issue of possessing certain types of biological technology, given the ubiquity of such technology in bona fide medical research. Although Ron Fouchier was eventually able to publish his results, had he run his experiments in a virtual lab, infecting simulated ferrets with artificial viruses, he might never have fallen foul of the Dutch government.

Thus, simply listing viruses and bacteria which are subject to controls will never succeed in preventing genAI use of the genomes of these organisms.
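One alternative, borrowing from the “substantially similar” drug rule above, would be to screen genetic sequences for similarity to controlled genomes rather than matching names. The following is a toy sketch only: it uses Python’s standard difflib for readability, whereas a real screening system would align queries against curated genome databases with tools such as BLAST, and the reference fragment and threshold here are invented.

```python
import difflib

# Toy stand-in for a fragment of a controlled genome (invented sequence).
CONTROLLED_FRAGMENT = "ATGGCGAGTCTTACGAACCTGGAAGACCCGTTCTTT"

SIMILARITY_THRESHOLD = 0.80  # invented cut-off for "substantially similar"

def is_substantially_similar(query: str) -> bool:
    """Flag sequences whose similarity to the controlled fragment exceeds
    the threshold, mirroring the analogue-drug approach."""
    ratio = difflib.SequenceMatcher(None, query, CONTROLLED_FRAGMENT).ratio()
    return ratio >= SIMILARITY_THRESHOLD

# A lightly tweaked copy is still caught, unlike under an exact-name list.
tweaked = CONTROLLED_FRAGMENT.replace("CTT", "CTA")
print(is_substantially_similar(tweaked))     # True: near-copy flagged
print(is_substantially_similar("GGGG" * 9))  # False: unrelated sequence
```

The design choice mirrors the drug-analogue statute: the regulated class is defined by resemblance to known threats rather than by an enumerated list, so trivial edits no longer defeat the control.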

Civilian or popular surveillance is one possible mechanism. For example, Global Biolabs tracks the building and operation of various high-security disease labs all over the globe. But such efforts can be hampered by the secrecy attending such facilities. Even in liberal Western democracies, disclosing which labs work on such agents without authorisation could be a criminal offence.

Another option could be to outlaw high-risk research of any stripe, irrespective of its potential benefits. There are examples of this in practice – in Australia, a model law for facial recognition would ban the technology where it would “involve one or more extreme human rights vulnerabilities”. In the EU, the new AI Act prohibits “unacceptable use cases” of the technology, a category which could be expanded to cover its use in synthetic biology. While it would be impossible to screen every person using genAI tools to provide synthetic biology services, such a prohibition could at least create a global normative standard around those uses.

A middle ground would be to refocus existing biological control laws. One option is to proscribe agents (whether virtual or not) that pose a certain level of risk; another is to regulate agents that have been modified with human intent or intervention (again, whether virtual or not). Or regulation could target the “digital-physical interface”: the point at which synthetic biology designs are translated from AI outputs into physical organisms. That interface requires physical resources, knowledge and time, and so offers opportunities for State intervention.
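To make the “digital-physical interface” concrete, here is a hypothetical sketch of a screening gate that a DNA synthesis provider might run before manufacturing an order. Everything here – the order fields, the verification rule, the escalation policy – is invented for illustration, and it reuses the is_substantially_similar helper (and tweaked sequence) from the sketch above.

```python
from dataclasses import dataclass

@dataclass
class SynthesisOrder:
    customer_id: str
    customer_verified: bool  # e.g. institutional biosafety sign-off
    sequence: str            # DNA the customer wants synthesised

def screen_order(order: SynthesisOrder) -> str:
    """Gate an order at the digital-physical interface.

    Hypothetical policy: unverified customers are refused outright, and
    sequences substantially similar to controlled genomes are escalated
    for human biosecurity review before anything is synthesised.
    """
    if not order.customer_verified:
        return "REJECT: customer not verified"
    if is_substantially_similar(order.sequence):
        return "HOLD: refer to human biosecurity review"
    return "APPROVE: proceed to synthesis"

order = SynthesisOrder("lab-042", True, tweaked)
print(screen_order(order))  # HOLD: refer to human biosecurity review
```

The point of intervening here is that the interface is a genuine choke point: whatever a model outputs, a design only becomes an organism once a provider commits physical resources to it.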

Irrespective of the path they choose, States will quickly need to determine how – and how far – they pursue the genAI dragon. Applying the brakes too sharply could curb the innovation and adaptation of our critical life sciences. Not applying them hard enough could lead to the biological equivalent of Pearl Harbor. Somewhere along that spectrum, States will need to determine precisely where they will ban the use of genAI to create new forms of life.

Brendan Walker-Munro is a senior lecturer at Southern Cross University. The views expressed above are personal, and do not represent the University or any other organisation, agency or government.
