Bond and the Biorisk Bungle
No Time to Die hints at one of the many strains of artificially engineered pandemics which could plague us. But like Bond, we can save the world.
“People want oblivion, and a few of us are born to build it for them. So here I am, their invisible God, sneaking under their skin.” (Mild Spoilers Incoming)
Or so says Lyutsifer Safin, the anarchic villain in the latest James Bond blockbuster.
The 007 franchise has received plaudits (along with the inevitable ‘PC gone mad’ backlash) for updating recent films to reflect modern times. This has usually been viewed through a cultural lens, with writers turning down the misogyny dial and opening the door to 007 being played by people who don’t look just like me (aka not just ripped, straight white guys).
However, another tip of the cap has to be given to the writers’ room’s modern understanding of emerging technological risks.
Safin, played by Rami Malek, is a bioterrorist. He has developed nanobots that spread like a virus upon touch. Once they enter the bloodstream (via air or liquid), they target specific DNA sequences, so they are dangerous only to the individuals whose genetic code they have been programmed against.
Now, the technology for DNA-specific nanobots does not exist. But nanorobotics is indeed a developing area of nanotechnology! Whilst in early-stage development, potential uses for nanobots include the delivery of drugs for cancer treatment, repairing of white blood cells, and the monitoring of diabetes. With DARPA and the National Science Foundation beginning to work on projects that develop these technologies, expect the market for such tools to continue to grow swiftly.
Nevertheless, like so many technologies that possess the capacity to liberate, nanobots also hold the capacity to wreak havoc. A recent Future of Life Institute podcast with Dr. Filippa Lentzos detailed some of the catastrophic risks that could come from instruments such as nanobots.
“Things like drones or nanorobots, these incredibly tiny robots that can be inserted into our blood streams for instance, even insects, could be used as vehicles to disperse dangerous pathogens.
So, I guess to get to the bottom of your question, what I’m keen for people to understand, scientists, government officials, the general public, is that current developments in science and technology, or in the life sciences more specifically, are lowering barriers to inadvertent harms as well as to deliberate use, and development of biological weapons, and that there is this whole history to deliberate attempts to use the life sciences to cause harm.”
Dr. Lentzos is right - the history of bioterrorism goes back a long way. As early as 600 BC, armies crudely deployed filth, animal carcasses, and contagion against their opponents as a war strategy. During the French and Indian War (1754-1763), Sir Jeffery Amherst, who commanded British forces in North America, suggested the deliberate delivery of smallpox-laden blankets from a nearby hospital to Native American tribes in the Ohio River Valley.
Only now, the technology has become more targeted, more lethal, and more available. But this isn’t just about nanobots. What is part of the spectacle of Bond is also part of a wider theme of modern biosecurity risks that our public health infrastructure must pay greater attention to.
According to 80,000 Hours, global catastrophic biorisks (GCBRs) can include:
Antimicrobial/antibiotic resistance
Pandemics, which can be “natural” or “engineered”
According to a 2008 global catastrophic risks survey, experts believed there was a 60% chance that at least one million people would die in a single natural pandemic before 2100. At the time of writing this piece, 4.8 million people worldwide have died from COVID-19. I am very interested to see what the next edition of this survey says.
It isn’t immediately clear whether the likelihood of natural pandemics occurring is increasing significantly (we have a healthier population with an improved public health infrastructure, but also have a more integrated world with intense agricultural practices). However, ‘artificial’ GCBRs are certainly becoming increasingly likely.
Arguably the most feared strain of artificial GCBRs comes from ‘gain-of-function’ research: experiments that genetically alter an organism in ways that may enhance its biological function.
In virology, gain-of-function research is often deployed with the intention of better understanding current and future pandemics. For example, research has been carried out on the transmission of influenza between ferrets. The infamous Wuhan Institute of Virology is also actively surveying bat populations to identify which viruses they carry and whether those viruses can jump to humans.
This is a terrifying prospect to many people working in biorisk. If you are playing around with the levels of lethality and mutative potential of a pathogen, and something goes wrong, it isn’t just the lab that feels the burn. Humanity as a whole could face calamitous consequences.
Regardless of your views on the origins of COVID-19, lab leaks are a real thing! Philosopher Toby Ord, on the 80,000 Hours podcast, had this to say about containment problems in high-biosafety-level (BSL) labs:
“In 2007, foot-and-mouth disease, a high-risk pathogen that can only be studied in labs following the top level of biosecurity, escaped from a research facility leading to an outbreak in the UK (note, this is not to be confused with the larger 2001 outbreak, which saw over 6 million cows and sheep killed). An investigation found that the virus had escaped from a badly-maintained pipe. After repairs, the lab’s licence was renewed — only for another leak to occur two weeks later.”
This all makes for worrying reading. Whether natural, accidental, or an act of biowarfare, GCBRs are receiving insufficient attention.
However, we should be reassured that whilst these are big problems, they are also tractable.
The Biological Weapons Convention, which is the principal defence against the proliferation of biological weapons, is cash-strapped. It has around 3 full-time staff, and an annual budget smaller than that of a typical McDonald’s restaurant. Even a modest funding increase would go a long way in reducing the likelihood of tens of millions dying in a GCBR event.
Gain-of-function research, which is a kind of dual-use research of concern (that is, research whose results have potential for misuse), can also fairly easily receive much better oversight.
Increasing efforts are being channeled into generating more coherent risk assessments of these dual-use research methods. Little institutional guidance currently exists to foster best practices, and as a result even the most rudimentary steps could make a significant marginal impact.
A lot of people rightly mention managing AI as being one of the great challenges of humanity. But how AI, or more precisely, machine learning (ML), interacts with biology may be, for now, an even greater challenge.
Combining biological data with various ML methods will generate incredibly precise information about individuals. Even if biorisk doesn’t unfold in the manner that got the sadistic Safin frisky, feasible, incremental changes must be made now to ensure that the future of humanity looks a little bit less like No Time to Die.
Of the Week - My Favourites
Podcast: The Ezra Klein Show - How to Do the Most Good (guest Holden Karnofsky)
YouTube Video: Netflix - Squid Game | Behind the Scenes
Song: Going Home: Theme of the Local Hero - Mark Knopfler
Banger that plays at St James’ Park to commemorate the takeover
Article: Duncan Weldon - How serious is Boris Johnson about wanting higher wages?