COVID, whatever its origins, presents an opportunity to talk about risks and benefits in scientific experiments.
In theory, gain-of-function research prevents pandemics. Scientists subject pathogens to adaptive pressures, novel environments, animal experiments, protein modifications, and the like to see in action how they go about becoming dangerous to human beings. It’s supposed to keep us one step ahead of the enemy, knowing their tricks before they can use them, seeing mutations before they’ve happened, so that we prevent those conditions from occurring in the wild and can rapidly develop vaccines and treatments for viruses we’ve never seen before. It’s heroic “blue sky” research, the kind you do just because of what it might produce.
In practice, gain-of-function experiments are an established, sexy-sounding way to use a lab. You’re dealing with live pathogens, so it’s obviously dangerous, which means you need the best equipment and the best practices that make up the highest biosafety levels. You need funding. You’re a research scientist and want to get published, and producing and observing a potential pandemic virus is a sure way to have something to write about. The more it could kill, the cooler, right? Bet the money and recognition will come pouring in. Scientists are only human.
In theoretical practice, gain-of-function research is something you do as a matter of course, wherever they’ll let you do it, as long as there’s funding. Maintaining the highest biosafety standards is hard, and since you would never be so stupid as to make a mistake, you can cut some corners. Besides, suiting up is not only annoying and uncomfortable; it actually makes it more difficult to conduct the experiments. That’s where the error will come from, right? You’re doing better, even safer, science by skipping some steps. Yes, the Wuhan Institute of Virology, a global leader in research on coronaviruses, has a biosafety level four lab, but its labs mostly operate at a comfortable and efficient biosafety level two.
I said theoretical practice because we don’t know. But leading scientists, and science journalists, and your smart friends all think it’s worth investigating whether the long-named virus behind COVID-19 leaked from a Wuhan Institute of Virology laboratory. “The science” on this is decidedly not settled. That was the point 18 virologists, epidemiologists, and the like made in a letter in the journal Science on Thursday, writing, “Theories of accidental release from a lab and zoonotic spillover both remain viable. Knowing how COVID-19 emerged is critical for informing global strategies to mitigate the risk of future outbreaks.” That was also the point former NYT science writer Nicholas Wade made in a must-read survey of what we do know about the origins of this pandemic for the Bulletin of the Atomic Scientists.
That’s an important point to make, but when it comes to gain-of-function research it’s not even the important question. The question is not whether we should continue to allow, and even fund with taxpayer money, this sort of gain-of-function research if it led to COVID-19. That was the question Sen. Rand Paul was asking Anthony Fauci in a recent hearing—both doctors sounding more like lawyers. No, the question is whether we should fund and allow gain-of-function research at all, when the whole point is to produce more deadly pathogens and we have always known a lab escape is possible. After all, scientists are only human.
In 2014 the Obama administration decided the answer was probably no, and banned such funding while a review was conducted. In the words of the NIH announcing the end of that moratorium in 2017, “On October 17, 2014, the U.S. Government announced that it would be instituting a funding pause on gain-of-function research projects that could be reasonably anticipated to confer attributes to influenza, MERS, or SARS viruses such that the resulting virus has enhanced pathogenicity and/or transmissibility (via the respiratory route) in mammals.” Since that’s the sort of thing “gain-of-function” research usually refers to, the ban was a big deal and plenty of people (Fauci included, reportedly) worked hard for its eventual removal.
Marc Lipsitch, on the other hand, has been sounding the alarm about the dangers of gain-of-function research for years. The professor of epidemiology at the Harvard T.H. Chan School of Public Health is currently working on modeling COVID transmission, but in 2014 he was Nature’s go-to for support of the moratorium on gain-of-function research, and in 2017 Nature quoted him again cautioning against the experiments, arguing they do not yield enough useful information to warrant the risk of a pandemic. Lipsitch is also a signatory of Thursday’s Science letter requesting further investigation of the lab-leak hypothesis. He was kind enough to give me an interview.
Lipsitch’s primary point is one of protest, and he hopes everyone will join him: gain-of-function research of the kind temporarily defunded from 2014 to 2017 is just not worth it. “The risks are substantial and the benefits to public health are small to nonexistent,” he said. His concern is that public health authorities and scientists are simply not asking the very basic risk-benefit question when considering this sort of research: Will the knowledge derived from such experiments be “worth the risk of releasing a pathogen that’s more dangerous than what we already have”? It doesn’t really matter how improbable that release is; it’s too catastrophic to allow.
Lipsitch described a simple grid we can use in conducting what should indeed be a basic bit of analysis. One axis runs from “safe” to “not safe,” the other from “worthwhile” to “not worthwhile.” In considering funding or advisability, you can place a given research project on that grid. Safe and worthwhile? Of course! Unsafe and worthwhile? Now that is a conversation to be had. What does worthwhile mean? A major part of this discussion should be about comparing benefits: deciding whether a safer project can provide enough of the expected results of a more dangerous one. “Compared to doing another scientific experiment that’s safe, is the risk of creating a pandemic worth the benefit to preventing or dealing with a pandemic?”
Lipsitch said there are cases where experiments and techniques that resemble the gain-of-function research he objects to can pass that simple test. Work with “humanized” mice or “mousified” human pathogens (I recommend reading the Wade piece for answers to any technical questions you might have) can create the conditions for a more contagious strain to emerge. But when there is no intent or effort to produce such a strain, this sort of research can be a safer way to develop and test vaccines than working with an existing pathogen, and thus save lives. Part of the key here is keeping the work tied to specifics: specific real-world circumstances and specific real-world needs, so real risk analysis can be done. “I believe in blue sky science,” Lipsitch said. “But I don’t believe that every type of blue sky science experiment has a direct path to saving lives.”
According to Lipsitch, governments and scientific institutions need to have “an open and transparent process” in which they ask and answer these sorts of risk-benefit questions in an accountable manner when they consider funding research. Right now, he said, funding processes are not transparent, with claims about researchers’ rights and ability to speak freely acting as a cover for concerns about intellectual property and publishing. But, “IP is not a value; it’s a means to an end,” Lipsitch said. Being first to publish or patent may be valuable for particular scientists, but it’s not valuable for everyone else. When it produces the risk of a pandemic, it’s not about public health; you can’t consent to a pandemic. Lipsitch is hopeful that no matter what is concluded about COVID-19’s origins, the public-health rationale for scientific research has become more apparent to everyone—the public, lawmakers, and scientists.
In practical theory, then, it’s time for us, the public, to remember that questions such as “What are scientific experiments for?” and “Are they worth the risk?” are not questions science can answer. Those are, in the broad and classical sense, political questions, and, especially when dealing with government funding, we all deserve answers and a chance to answer. After all, scientists are only human.