Tag Archives: influenza

Lipsitch and Galvani Push Back

COMMENTARY: The case against ‘gain-of-function’ experiments: A reply to Fouchier & Kawaoka

Over at CIDRAP, Marc Lipsitch and Alison P. Galvani have responded to critics—specifically, Ron Fouchier and Yoshihiro Kawaoka—of their recent study in PLoS Medicine. It is a thorough rebuttal of the offhand dismissal that Lipsitch and Galvani have met from Fouchier, Kawaoka, and the virology community more generally.

This is a fantastic addition to the dual-use debate. Too often, stock answers given for the benefits of dual-use research are put forward without sustained analysis: things like “will help us make new vaccines,” “will help us with disease surveillance,” or “will raise awareness.” Lipsitch and Galvani have drawn up a roadmap of challenges that advocates of gain-of-function studies—specifically those that deal with influenza—must confront in order to justify the public health benefit of their work. We should hold researchers and funding agencies accountable to this kind of burden of proof when it comes to dual-use research.

Dual-use flow chart. Logical structure of the potential lifesaving benefits of PPP experiments, required intermediate steps to achieve those benefits (blue boxes), and key obstacles to achieving those steps highlighted in our original paper (red text). Courtesy Marc Lipsitch, 2014.


Lipsitch and Galvani’s response is also important because it critically addresses the narrative that Fouchier and Kawaoka have woven around their research. This narrative has been bolstered by the researchers’ expertise in virology, but doesn’t meet the standards of biosecurity, science policy, public health, or bioethics analysis. It’s good to see Lipsitch and Galvani push back, and point to inconsistencies in the type of authority that Fouchier and Kawaoka wield.

UPDATE 06/19/14, 16:32: As I posted this, it occurred to me that the diagram Lipsitch and Galvani provide, while useful, is incomplete. That is, Lipsitch and Galvani have—correctly, I believe—illustrated the problems dual-use advocates must respond to in the domains the authors occupy. These are challenges in fields like virology, biology, and epidemiology.

There are other challenges, however, that we could add to this diagram—public health and bioethical, for a start. It’d be a great, interdisciplinary activity to visualize a more complete ecosystem of challenges that face dual-use research, with an eye to presenting avenues forward that address multiple and conflicting perspectives.

How not to critique: a case study

The original title for this piece was “How not to critique in bioethics,” but Kelly pointed out that this episode of TWiV is a case study in how not to go about critiquing anything.

Last Monday I was drawn into a conversation/angry rant about an article by Lynn C. Klotz and Edward J. Sylvester that appeared in the Bulletin of the Atomic Scientists…in 2012. After briefly forgetting one of the cardinal rules of the internet—check the date stamp—I realized the error of my ways, and started to inquire with my fellow ranters, in particular Matt Freiman, about why a 2012 article suddenly had virologists up in arms.

Turns out that the Bulletin article was cited by a study on dual-use authored by Marc Lipsitch and Alison P. Galvani; a study that was the subject of a recent post of mine. The Bulletin article draws from a working paper in which the authors provide an estimate of the number of laboratory accidents involving dangerous pathogens we should expect as a function of hours spent in the laboratory. Lipsitch and Galvani use this figure in their analysis of potential pandemic pathogens (PPPs).

Freiman joined Vincent Racaniello, Dickson Despommier, Alan Dove, and Kathy Spindler on This Week in Virology (TWiV) on June 1 to talk about (among other things) Lipsitch and Galvani’s research. What followed is a case study in how not to critique a paper; the hosts served up a platter of incorrect statements, bad reasoning, and some all-out personal attacks.

I’d started writing a blow-by-blow account of the entire segment, but that quickly mushroomed into 5,000-odd words. There is simply too much to talk about—all of it bad. So there’s a draft of a paper on the importance of good science communication on my desk now, that I’ll submit to a journal in the near future. Instead, I’m going to pick up just one particular aspect of the segment that I feel demonstrates the character of TWiV’s critique.

“It’s a bad opinion; that’s my view.”

Despommier, at 58:30 of the podcast, takes issue with this sentence in the PLoS Medicine paper:

The H1N1 influenza strain responsible for significant morbidity and mortality around the world from 1977 to 2009 is thought to have originated from a laboratory accident.

The problem, according to Despommier, is that “thought to have originated” apparently sounds so vague as to be meaningless. This leads to a rousing pile-on conversation in which Despommier claims that he could have just as easily claimed that the 1977 flu came from Middle East Respiratory Syndrome because “he thought it;” he also claims that on the basis of this sentence alone he’d have rejected the article from publication. Finally, he dismisses the citation given in the article as unreliable because it is a review article,[1] and “you can say anything in a review article.”

At the same time, Dove notes that “when you’re on the editorial board of the journal you can avoid [having your paper rejected].” The implication here is that Lipsitch, as a member of the editorial board of PLoS Medicine, must have used that position to get his article to print despite the alleged inaccuracy that has Despommier so riled up. Racaniello notes that “[statements like this are] often done in this opinion–” his full meaning is interrupted by Despommier. It’s a common theme throughout the podcast, though, that Lipsitch and Galvani’s article is mere “opinion,” and thus invalid.

Facts first

If he’d done his homework, Despommier would have noted that the review article cited by Lipsitch and Galvani doesn’t mention a lab. What it does say is:

There is no evidence for integration of influenza genetic material into the host genome, leaving the most likely explanation that in 1977 the H1N1 virus was reintroduced to humans from a frozen source.[2]

So Lipsitch and Galvani do make an apparent leap from “frozen source” to “lab freezer.” Despommier doesn’t pick that up. If he had, however, it would have given us pause about whether or not it is a valid move to jump from “frozen source” to “laboratory freezer.”

Not a long pause, however; there are other sources that argue that the 1977 strain is likely to have come from a laboratory.[3] The other alternative—that the virus survived in Siberian lake ice—was put forward in a 2006 paper (note: after the publication of the review article used by Lipsitch and Galvani), but that paper was found to be methodologically flawed.[4] Laboratory release remains the most plausible answer to date.

The belief that the 1977 flu originated from frozen laboratory sources is widely held. Even Racaniello—at least, in 2009—held this view. Racaniello argued that of multiple theories about the origin of the 1977 virus, “only one was compelling”:

…it is possible that the 1950 H1N1 influenza virus was truly frozen in nature or elsewhere and that such a strain was only recently introduced into man.

The suggestion is clear: the virus was frozen in a laboratory freezer since 1950, and was released, either by intent or accident, in 1977. This possibility has been denied by Chinese and Russian scientists, but remains to this day the only scientifically plausible explanation.

So no, there is no smoking gun that confirms, with absolutely unwavering certainty, that the 1977 flu emerged from a lab. But there is evidence: this is far from an “opinion,” and is far from simply making up a story for the sake of an argument. Lipsitch and Galvani were right to write “…it is thought,” because a plausible answer doesn’t make for unshakeable proof—but their claim stands on the existing literature.

Science and policy

The idea that Lipsitch and Galvani’s piece is somehow merely “opinion” is a hallmark of the discussion in TWiV. Never mind that the piece was an externally peer-reviewed, noncommissioned piece of work.[5] As far as TWiV is concerned, it seems that if it isn’t Science, it doesn’t count. Everything else is mere opinion.

But that isn’t how ethics, or policy, works. In ethics we construct arguments, argue about the interpretation of facts and values, and use that to guide action. With rare exception, few believe that we can draw conclusions about what we ought to do straight from an experiment.

In policy, we have to set regulations and guidelines with the information at hand—a policy that waits for unshakeable proof is a policy that never makes it to committee. Is there some question about the true nature of the 1977 flu, or the risks of outbreaks resulting from lapses in BSL–3 laboratory safety? You bet there is. We should continue to do research on these issues. We also have to make a decision, and the level of certainty the TWiV hosts seem to desire isn’t plausible.

Authority and Responsibility

This podcast was irresponsible. The hosts, in their haste to pan Lipsitch and Galvani’s work, overstated their case and then some. Dove also accused Lipsitch of research misconduct. I’m not sure what the rest of the editors at PLoS Medicine think of the claim—passive aggressive as it was—that one of their colleagues may have corrupted the review process, but I’d love to find out.

The podcast is also deeply unethical, because of the power in the platform. Racaniello, in 2010, wrote:

Who listens to TWiV? Five to ten thousand people download each episode, including high school, college, and graduate students, medical students, post-docs, professors in many fields, information technology professionals, health care physicians, nurses, emergency medicine technicians, and nonprofessionals: sanitation workers, painters, and laborers from all over the world.[6]

What that number looks like in 2014, I have no idea. I do know, however, that a 5,000–10,000 person listenership, for a decorated virologist and his equally prestigious colleagues, is a pretty decent haul. That doesn’t include, mind you, the people who read Racaniello’s blog, articles, or textbook; who listen to the other podcasts in the TWiV family; or who follow the other hosts in other fora.

These people have authority, by virtue of their positions, affiliations, exposure, and followings. The hosts of TWiV have failed to discharge their authority with any kind of responsibility.[7] I know the TWiV format is designed to be “informal,” but there’s a marked difference between being informal, and being unprofessional.

Scientists should—must—be part of conversation about dual-use, as with other important ethical and scientific issues. Nothing here is intended to suggest otherwise. Scientists do, however, have to exercise their speech and conduct responsibly. This should be an example of what not to do.

Final Notes

I want to finish with a comment on two acts that don’t feature in Despommier’s comments and what followed, but are absolutely vital to note. The first is that during the podcast, the paper by Lipsitch and Galvani is frequently referred to as “his” paper. Not “their” paper. Apparently recognizing the second—female—author isn’t a priority for the hosts or guests.

Also, Dove and others have used Do Not Link (“link without improving ‘their’ search engine position”) on the TWiV website for both the paper by Lipsitch and Galvani and its supporting materials. So not only do the hosts and guests of the show feel that the paper is without merit; they believe that to the point that they’d deny the authors—and the journal—traffic. Personally, I think that’s obscenely petty, but I’ll leave that for a later post.

Science needs critique to function. Critique can be heated—justifiably so. But it also needs to be accurate. This podcast is a textbook example of how not to mount a critique.


  1. Webster, Robert G, William J Bean, Owen T Gorman, Thomas M Chambers, and Yoshihiro Kawaoka. 1992. “Evolution and Ecology of Influenza A Viruses.” Microbiological Reviews 56 (1): 152–79.  ↩
  2. ibid., p. 171.  ↩
  3. Ennis, Francis A. 1978. “Influenza A Viruses: Shaking Out Our Shibboleths.” Nature 274 (5669): 309–10. doi:10.1038/274309b0; Nakajima, Katsuhisa, Ulrich Desselberger, and Peter Palese. 1978. “Recent Human Influenza A (H1N1) Viruses Are Closely Related Genetically to Strains Isolated in 1950.” Nature 274 (5669): 334–39. doi:10.1038/274334a0; Wertheim, Joel O. 2010. “The Re-Emergence of H1N1 Influenza Virus in 1977: a Cautionary Tale for Estimating Divergence Times Using Biologically Unrealistic Sampling Dates.” PLoS One 5 (6): e11184. doi:10.1371/journal.pone.0011184; Zimmer, Shanta M, and Donald S Burke. 2009. “Historical Perspective — Emergence of Influenza A (H1N1) Viruses.” New England Journal of Medicine 361 (3): 279–85. doi:10.1056/NEJMra0904322.  ↩
  4. Worobey, M. 2008. “Phylogenetic Evidence Against Evolutionary Stasis and Natural Abiotic Reservoirs of Influenza A Virus.” Journal of Virology 82 (7): 3769–74. doi:10.1128/JVI.02207-07; Zhang, G, D Shoham, D Gilichinsky, and S Davydov. 2007. “Erratum: Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 81 (5): 2538; Zhang, G, D Shoham, D Gilichinsky, S Davydov, J D Castello, and S O Rogers. 2006. “Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 80 (24): 12229–35. doi:10.1128/JVI.00986-06.  ↩
  5. I’m aware that peer review is not sufficient to make a work reliable, but absent evidence that the review process was somehow corrupt or deficient, it’s a far cry from mere opinion.  ↩
  6. Racaniello, Vincent R. 2010. “Social Media and Microbiology Education.” PLoS Pathogens 6 (10): e1001095.  ↩
  7. Evans, Nicholas G. 2010. “Speak No Evil: Scientists, Responsibility, and the Public Understanding of Science.” NanoEthics 4 (3): 215–20. doi:10.1007/s11569-010-0101-z.  ↩

What am I reading? 15 June 2014

This week involved some heavy reading. I’ve got a series of writing tasks ahead of me, and the last week has involved a lot of citation collection. I find that unless I’ve got most—if not all—my citations at hand, my writing is really inefficient. Lots of scratching my head going “I know that’s a Thing… where did I read it?!” and so on.

Bioethics/STS

Evans, Sam Weiss. 2014. “Synthetic Biology: Missing the Point.” Nature 510: 218.

Sam Evans—no relation—continues to fight the (one of the) good fight(s). Corresponding on behalf of 21 other correspondents, Evans reminds the readers of Nature that:

the point of supporting synthetic biology is not about making sure that science can go wherever it wants: it is about making the type of society people want to live in.

This, I think, nails down the objection I have to a lot of public debates about science and ethics. In a staggering number of contexts—everything from synthetic biology to sexual harassment—there is a tendency for some groups to wring their hands about how a particular movement, regulation, or concern will “stifle” innovation or creativity. I’m happy to see Evans calling bullshit on this particular rhetorical sleight of hand. Serial killers and terrorists can be innovative and creative; an appeal to innovation isn’t valid unless it points to more substantive values.

Glerup, Cecilie, and Maja Horst. 2014. “Mapping ‘Social Responsibility’ in Science.” Journal of Responsible Innovation 1(1): 31–50.

An analysis of different conceptions of responsibility in science, as it relates to the social impact of scientific research.

The article has an important point to make: that there are a number of different ways we understand the relationship between science and society, and that all of these conceptions are active and engaged in contemporary discourse. Unfortunately, for all the time the authors spend on unpacking the governance of science in its varied forms, they don’t unpack the concept of responsibility. Which—considering the article’s title—might be important. The problems intensify in that the review is based around a set of distinctions that aren’t hard-and-fast rules. This is acknowledged by the authors towards the end of the paper, but it might have been better to proceed with that as a part of the review, rather than as an afterthought.

Also, if you’re going to do a review? Be a bit more transparent about your methodology. Looking through the references, I identified dozens of papers that probably could have been included, but without better knowledge of how the authors structured their search criteria, I don’t know whether those papers were found and rejected, or just never found.

National Research Council. 2014. Emerging and Readily Available Technologies and National Security. Washington, DC: National Academy Press.

Another Big Government Report on science policy and ethics. I’m about 30 pages in, so don’t spoil the ending for me.

Murphy, Brad, and Jennifer S Reath. 2014. “The Imperative for Investment in Aboriginal and Torres Strait Islander Health.” Medical Journal of Australia 200 (11): 615–16. doi:10.5694/mja14.00632.

An important article about the investment priorities for Indigenous health in Australia. This is an issue that is really close to my heart (my grandfather was a GP, and spent half a century working in rural South Australia), and one that the current Australian Government has compromised by defunding primary care and Indigenous health.

Part of an entire issue of the MJA devoted to Indigenous health.

Infectious Diseases

Almazán, Fernando, Marta L DeDiego, Isabel Sola, Sonia Zuñiga, Jose L Nieto-Torres, Silvia Marquez-Jurado, German Andrés, and Luis Enjuanes. 2013. “Engineering a Replication-Competent, Propagation-Defective Middle East Respiratory Syndrome Coronavirus as a Vaccine Candidate.” mBio 4 (5): e00650-13. doi:10.1128/mBio.00650-13.

A “loss-of-function” study, in which the researchers engineered the Middle East Respiratory Syndrome coronavirus (MERS-CoV) to lose its ability to propagate. The studies I’ve been arguing against, typically, are “gain-of-function,” and so a loss-of-function study is very interesting. Near as I can tell, the mutations the study makes use of are common to coronaviruses, and don’t correspond to extra properties—so this isn’t a gain-of-function study masquerading as loss-of-function. By interrupting how the virus transcribes its own genetic material, the researchers were able to create a variant of the virus which can replicate, but can’t propagate. Unlike an attenuated virus—which runs the risk of reverting to virulence—this virus appears unable to do so. The authors argue, from this, that their virus presents a better option for study and vaccine development.

Webster, Robert G, William J Bean, Owen T Gorman, Thomas M Chambers, and Yoshihiro Kawaoka. 1992. “Evolution and Ecology of Influenza A Viruses.” Microbiological Reviews 56 (1): 152–79.

Wertheim, Joel O. 2010. “The Re-Emergence of H1N1 Influenza Virus in 1977: a Cautionary Tale for Estimating Divergence Times Using Biologically Unrealistic Sampling Dates.” PLoS One 5 (6): e11184. doi:10.1371/journal.pone.0011184.

Nakajima, Katsuhisa, Ulrich Desselberger, and Peter Palese. 1978. “Recent Human Influenza A (H1N1) Viruses Are Closely Related Genetically to Strains Isolated in 1950.” Nature 274 (5669): 334–39. doi:10.1038/274334a0.

Interesting papers on the evolution of the influenza viruses. My particular interest was the evolution of the 1977 influenza virus, which—according to the above papers—matches a 1950 strain so closely that researchers concluded it was likely that the 1977 strain escaped from a laboratory sample.

Racaniello, Vincent R. 2010. “Social Media and Microbiology Education.” PLoS Pathogens 6 (10): e1001095.

I’ve a series of bones to pick with Vincent, and this was part of my research. More on that next week.

Watanabe, Tokiko, Gongxun Zhong, Colin A Russell, Noriko Nakajima, Masato Hatta, Anthony Hanson, Ryan McBride, et al. 2014. “Circulating Avian Influenza Viruses Closely Related to the 1918 Virus Have Pandemic Potential.” Cell Host & Microbe 15: 692–705. doi:10.1016/j.chom.2014.05.006.

No surprise—I’ve blogged about this paper twice this week (here and here).

History of Science

Foerstel, Herbert N. 1993. Secret Science. Praeger Publishers.

A book I was unable to get my hands on during my PhD, but always wished I could. Foerstel gives some incredible history about censorship and secrecy in science. The chapter I was interested in was the one on the nuclear sciences, as befitting my background. The highlight of the chapter was Foerstel’s retelling of the Office of Censorship requesting, in 1942, that the writers of Superman cease and desist with a storyline that involved an “atom smasher,” for fear that enemies of the state would infer from the story that something was up (i.e. the race for the bomb). This, mind you, while TIME was reporting that there were zero physics or chemistry papers at the annual meeting of the American Philosophical Society, implying that something must be up behind the veil of military secrecy.

Philosophy

Kvanvig, Jonathan L. 2003. The Value of Knowledge and the Pursuit of Understanding. Cambridge University Press.

For an article I’m writing on the “ethics of knowledge.” Kvanvig investigates the idea that knowledge has some kind of normative value—in simplest terms, utility—that sets it apart from, and makes it more important than, other types of beliefs. The value of knowledge has received some attention regarding the internal features that make it valuable over, say, a mere true belief, but work in philosophy on the value of knowledge through external appeal, and as a holistic concept, is sparse in the Western analytic tradition. Kvanvig is after that. I read through the introduction and first chapter, as I don’t yet have borrowing privileges at the University of Pennsylvania Library.

Kagan, S. 1992. “The Limits of Well-Being.” Social Philosophy and Policy 9 (2): 169–89.

Kagan, S. 1994. “Me and My Life.” Proceedings of the Aristotelian Society 94: 309–24.

Two articles by one of my favourite philosophers. Kagan addresses the same problem in both articles: to what extent is well-being—not the amount of it one has, but how one conceives of it—something that relies entirely on one’s internal state, and to what extent is it something that relates to external properties of the world? Kagan’s writing is nice and conversational, and (unlike a lot of analytic philosophers) he’s less worried about grinding his particular conceptual axe than he is about exploring a series of concepts.

Put another way, there are no answers in these papers. There are, however, a lot of questions.

(Kagan also has an awesome set of lectures on death on YouTube)

Economics/Law

Cheng, Cheng, and Mark Hoekstra. 2013. “Does Strengthening Self-Defense Law Deter Crime or Escalate Violence? Evidence From Expansions to Castle Doctrine.” Journal of Human Resources 48 (3). University of Wisconsin Press: 821–54.

McClellan, Chandler B, and Erdal Tekin. 2012. Stand Your Ground Laws and Homicides. IZA Discussion Paper 6705.

Cook, Philip J. 2013. “Why Stand Your Ground Laws Are Dangerous.” Scholars Strategy Network. Scholars Strategy Network.

A series of articles provided to me by Philip Cook (author of the third article) on the “stand your ground” gun laws that have emerged since Florida introduced theirs in 2005. I’m starting some work on gun control and regulation in the United States, and Philip was kind enough to correspond with me and provide some starting points.

Coase, Ronald Harry. 1974. “The Market for Goods and the Market for Ideas.” The American Economic Review. JSTOR: 384–91.

One of the classics of the vast literature on the right to freedom of speech. Central to Coase’s argument is that the market for goods and the “marketplace of ideas” (the quotes acknowledging that, as Sparrow and Goodin argue persuasively, the market metaphor doesn’t apply cleanly to ideas) are treated in divergent ways. Coase points out that this means something is wrong with either our laws or our philosophy, and argues that it is likely we’ve got both types of markets wrong (but in different ways). This is a reread; it’s been about five years since I last read this article.

Fiction

Preston, Richard. 1998. The Cobra Event. Ballantine Books

Excellent novel about a bioterror attack. Yes, I study bioterrorism for a living, and when I finish my work for the day I like to relax with a little light reading about fictional bioterrorism. It hits some of the most important aspects—as far as I’m concerned—of bioterrorism, and the incredible difficulty of policing and tracking such an attack. Preston’s occasional interludes about the politics and science behind bioweapons (at least as understood in the 1990s) give serious plausibility to the novel. The science is a little dated—biology has come a long way in 16 years—but I don’t think that detracts from the novel at all.

Circulating Avian Influenza Viruses Closely Related to the 1918 Virus Have Pandemic Potential


The latest in dual-use gain-of-function research: Yoshihiro Kawaoka and his team seem intent on one-upping Ron Fouchier when it comes to spurious research. This time, the group used reverse genetics to cobble together a “1918-like virus, composed of avian influenza virus segments.” The new virus is more pathogenic in ferrets than an authentic avian influenza virus. For reference, the 1918 pandemic killed some 50 million people.

The summary of the article:

  • Current circulating avian flu viruses encode proteins similar to the 1918 virus
  • A 1918-like virus composed of avian influenza virus segments was generated
  • The 1918-like virus is more pathogenic in mammals than an authentic avian flu virus
  • Seven amino acid substitutions were sufficient to confer transmission in ferrets.

In a commentary in the Guardian, Kawaoka rolled out the same types of justifications: awareness, medical countermeasures, and surveillance. We still lack an argument as to why gain-of-function research really promotes these ends, if at all, over other (less dangerous) research.

How safe is a safe lab? Fouchier and H5N1, again

A quick update on gain-of-function; I’m between papers (one submitted, one back from a coauthor with edits), and gain-of-function is back in the news.

Ron Fouchier and Yoshihiro Kawaoka have responded to a new paper on gain-of-function, which appeared in PLoS Medicine last week. The paper, by Marc Lipsitch and Alison P. Galvani, takes three popular conceptions of GOF research to task: 1) that it is safe; 2) that it is helpful in the development of vaccines; and 3) that it is the best option we have for bettering our understanding of flu viruses. I’m going to concentrate, here, on Fouchier’s reply to concerns about safety when performing gain-of-function research.

For context, Lipsitch and Galvani argue that

a moderate research program of ten laboratories at US BSL3 standards for a decade would run a nearly 20% risk of resulting in at least one laboratory-acquired infection, which, in turn, may initiate a chain of transmission. The probability that a laboratory-acquired influenza infection would lead to extensive spread has been estimated to be at least 10%.
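
Lipsitch and Galvani’s headline figure is straightforward to reproduce by treating each lab-year as an independent trial. A minimal sketch (the ~0.2% per-lab-year infection risk and the variable names below are my own illustrative assumptions, not taken from their paper):

```python
# "Nearly 20%" from independent lab-years:
# P(at least one infection) = 1 - (1 - p)^n
p_per_lab_year = 0.002          # assumed ~0.2% infection risk per BSL3 lab-year
labs, years = 10, 10            # "ten laboratories ... for a decade"
n = labs * years                # 100 lab-years in total
p_at_least_one = 1 - (1 - p_per_lab_year) ** n
print(f"{p_at_least_one:.1%}")  # prints "18.1%" -- i.e. "nearly 20%"
```

The point of the sketch is that even a small per-lab-year probability compounds quickly over many labs and many years.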

A result that Fouchier calls “far-fetched.” His reason?

The data cited by Lipsitch and Galvani include only 11 laboratory-acquired infections in the entire USA in 7 years of research, none of which involved viruses, none of which resulted in fatalities, and—most importantly—none of which resulted in secondary cases.

But Fouchier’s objection misses a couple of key points. The data Lipsitch and Galvani use come from a 2012 paper in Applied Biosafety, which looked at the reported number of thefts, losses, or releases (TLRs) of pathogens in American labs from 2004–2010. What the authors found was that TLRs have been increasing steadily, from 57 in 2007 to 269 in 2010. One reason given by Lipsitch and Galvani is that in 2008, significant investment was made in outreach and education about the need for accurate reporting of TLRs.

The other likely culprit? The proliferation of BSL–3 and BSL–4 laboratories.

Between 2004 and 2010, the number of BSL–3 laboratories in the USA—that we know of—more than tripled, from 415 to 1,495. This is likely an underestimate, however, because there is no requirement to track the number of BSL–3 labs that exist. The number of BSL–4 labs has also increased to 24 worldwide; 6 of these are in the USA. When you get an explosion of labs, you are likely to get a corresponding increase in laboratory accidents. That there have been only eleven laboratory-acquired infections reported from hundreds of releases should give us some relief; but it doesn’t show that gain-of-function research is safe.

The problem, unacknowledged by Fouchier, is one of scale—as the number of labs increases, the number of experiments with dangerous pathogens (including H5N1) is likely to increase. Laboratory accidents are not a matter of if, but a matter of when. Significantly increasing the number of labs brings that “when” significantly closer. And, in the words of Lipsitch and Galvani,

Such probabilities cannot be ignored when multiplied by the potential devastation of an influenza pandemic, even if the resulting strain were substantially attenuated from the observed virulence of highly pathogenic influenza A/H5N1.
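
The “not if, but when” point can be made concrete with the same kind of back-of-envelope model: if each lab-year is an independent trial with some small infection probability (I again assume an illustrative ~0.2% per BSL3 lab-year, which is not a figure from the paper), the expected wait until the first laboratory-acquired infection shrinks in inverse proportion to the number of labs:

```python
# Expected wait until the first laboratory-acquired infection (LAI):
# with n labs each running an independent annual "trial", the wait is
# geometrically distributed with mean 1 / (n * p) years.
p = 0.002  # assumed LAI risk per BSL3 lab-year (illustrative)

for labs in (10, 100, 1000):
    expected_wait = 1 / (labs * p)
    print(f"{labs:>4} labs -> ~{expected_wait:.1f} years to the first LAI")
```

Ten labs give an expected wait of ~50 years; a thousand labs bring it down to months.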

More problematic is that Fouchier fails to note that laboratory containment failures have killed in the last decade—just not in the USA. In 2004, a researcher from the Chinese National Institute of Virology fell ill with SARS. While ill, she infected both a nurse, who went on to infect five others, and her mother, who died.

This represents, contrary to Fouchier’s claim, a laboratory containment failure that did involve a virus, did cause secondary (and tertiary) infections, and did kill. The study on which Lipsitch and Galvani base their estimate only covered the USA. A worldwide view is likely to be more concerning, not less. That’s why Lipsitch and Galvani refer to their estimate as “conservative,” and why for years there have been calls for heightened security when it comes to pandemic influenza studies.

Fouchier is right that scientific research has never caused a virus pandemic. But it’s a glaring logical error, to say nothing of hubris, to use that claim to dismiss the concern that it could happen.

There are other issues to be had with the responses of Fouchier and Kawaoka, but the overall message is clear: they continue to dismiss concerns that gain-of-function studies are risky, and provide little in the way of tangible benefits in the form of public health outcomes. I’m hoping for a more thoroughgoing reply from Lipsitch, who earlier this morning expressed some interest in doing so. This is an important debate, and needs to be kept open to a suitable range of voices.

Superflu: reply to Vox

Vox recently ran a piece by Susannah Locke on a new study in Cell, in which scientists showed only five mutations are needed to make H5N1 avian influenza transmissible in ferrets. (I’ve talked a bit about this here.) The research is controversial because it is dual-use: it could be used to advance our understanding of viruses, but it could also be used to create a deadly pandemic. You might remember a similar controversy about H5N1 back in 2011; some of the same players in that saga are involved here. In particular, the group is headed up by Ron Fouchier of Erasmus University.

Dual-use doesn’t get a lot of play in the news, so it is always nice to see some coverage. The article in Vox, however, doesn’t give the full story. In particular, the article doesn’t pay attention to some of the nuances of the 2011-2012 debate that inform—or should inform—thinking on this new research. Vox isn’t taking responses or op-eds from outsiders right now, so instead I’ve made a few notes below.

Stupid, or Simple?

A lot of the trouble back in 2011 was unavoidable: the National Science Advisory Board for Biosecurity (NSABB), for the first time in its history, recommended the partial censorship of two scientific papers based on their potential to enable acts of bioterrorism. This move was always going to sit poorly with the life sciences community, where openness is the norm. And, though the NSABB later reversed their recommendation in response to revised copies of the papers, the episode prompted a year-long moratorium on H5N1 research that would lead to similar types of results, as well as new regulations at the NIH over the funding and pursuit of this so-called “gain-of-function” research.

Part of that furor, however, was caused by Ron Fouchier. Fouchier, who claimed at the 2011 European Scientific Working Group on Influenza meeting in Malta that his group had created “probably one of the most dangerous viruses you can make,” also referred to his work as a “really stupid experiment.” These and other bold claims circulated widely during the debate, and were assuredly a source of stress as researchers, government officials, and journalists sought to work through the issues.

Fouchier’s claims were hardly what caused the NSABB’s recommendation. But they certainly stirred up a storm, and brought about a lot more scrutiny of dual-use research than normally occurs. Unsurprisingly, Fouchier stopped using such bold rhetoric. But in doing so he further muddied the waters of the public debate by excessively playing down the risks of his research.

A great example can be found in Fouchier seeking to qualify his “really stupid” statement:

In his Malta talk, Fouchier called this a “really stupid” approach, a phrase widely interpreted to mean he regretted it. In fact, he says, he just meant that the technique, called passaging, is a simple one compared to the sophistication of creating targeted mutations. The confusion may have stemmed in part from the fact that the Dutch word for “stupid” can also mean “simple.”

So something was possibly lost in translation. Yet Fouchier’s about-face doesn’t mesh well with other claims he’s made about the risks of his research: for example, that “bioterrorists can’t make this virus [that my team created], it’s too complex, you need a lot of expertise.”

He wants to say that the experiment is simple, but also that it’s too complex for bioterrorists. Yet we already know that bioterror, whether it be committed in Oregon or Japan, is not necessarily the purview of the academy. We also know, from history, that high technical competence is far from a guarantee of moral probity.

Fouchier also doesn’t address the risks that come from widespread attempts to reproduce these results. A recent article in Slate illustrated how SARS, foot-and-mouth disease, and H1N1 flu have all escaped from their labs. We shouldn’t just be concerned about a bioterrorist brewing a batch of superflu, but also about run-of-the-mill laboratory accidents.

How dangerous?

In trying to understand the risks of the H5N1 studies, few things are more contested than the case-fatality rate (CFR) of H5N1: the number of people who die of the disease, divided by the number of people who get the disease. The WHO lists 650 reported cases of H5N1, with 386 deaths, giving rise to a CFR of roughly 60%. That’s a really big number—most influenza pandemics have a CFR of less than 0.1%.

Supporters of the research have insisted that 60% can’t be the true CFR. The reason they give is that there must be more cases we don’t know about; cases that either aren’t reported, or cases where infected individuals don’t display symptoms severe enough to trigger detection. For each case we don’t know about that isn’t fatal, the CFR drops. And if the CFR drops, then gain-of-function research is less of a problem.

Appealing to the CFR, however, runs into two problems. For a start, there simply aren’t—at least as far as we can tell—that many people walking around with H5N1 who aren’t being picked up. A study in 2008 suggested that there may be subclinical infections in Thailand; research in China, however, found that there was little evidence for subclinical infections. An investigation into two villages in Cambodia found that only a small number of individuals tested positive for H5N1: 1% of the sample.

But a bigger problem is that 60% is a huge CFR. “Spanish Flu,” which killed 50–100 million people in 1918, only had a CFR of around 2.5%. Standing a potential human-transmissible H5N1 against Spanish Flu should give people nightmares. And what it means for Fouchier and company is that even if it turned out that for every 1 case of H5N1 we found, we missed 24 more, we’d still have cause to be worried.
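The arithmetic behind that comparison is simple enough to work through. This is a back-of-the-envelope sketch using the WHO figures cited above (650 reported cases, 386 deaths); the 25× multiplier is the hypothetical “miss 24 cases for every 1 found” scenario, not a measured undercount:

```python
# Reported WHO figures for H5N1 (as cited in the text above).
reported_cases = 650
deaths = 386

# Naive CFR from reported figures: deaths / cases.
reported_cfr = deaths / reported_cases
print(f"Reported CFR: {reported_cfr:.1%}")  # roughly 60%

# Hypothetical scenario: for every reported case there are 24
# unreported, non-fatal infections, so true cases = reported x 25.
undercount_factor = 25
diluted_cfr = deaths / (reported_cases * undercount_factor)
print(f"CFR with 25x undercount: {diluted_cfr:.2%}")  # still ~2.4%
```

Even under that generous undercount assumption, the diluted CFR lands right around Spanish Flu’s ~2.5%—which is the point of the comparison.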

Power in the platform

This all matters, quite simply, because there is power in the platform. Public debates rise and fall on the data supplied, and who is given the authority to depict a particular debate. Fouchier is an expert; of that there is no doubt. But he is an expert in influenza. Not bioterrorism, not public health, and not dual-use. He’s also been demonstrably confusing when it comes to the ways he presents his research.

It’s troubling, then, that the Vox piece is almost entirely based on Fouchier’s account of what happened—an account which is, as above, sketchy. It’s really good to see dual-use in the news, but on this issue more than one perspective is necessary. The history of dual-use should be written by more than one group.

The Center for Biological Futures over at the Fred Hutchinson Cancer Research Center maintains a good repository of documents relating to the H5N1 dual-use case in 2011-2012.

Gain-of-function: redux

The gain-of-function crowd are at it again. A new study, released in Cell, has described how H5N1 highly pathogenic avian influenza can be engineered to become transmissible between mammals, using five mutations. The new research builds off the 2011-2012 work of Ron Fouchier and Yoshihiro Kawaoka; work that caused a ruckus for its potential for misuse, and led to the National Science Advisory Board for Biosecurity (NSABB) initially recommending the censorship of the papers.

The NSABB hasn’t commented on the Cell study; according to Michael Osterholm, the group hasn’t met in 18 months. Near as I can tell, they neither met nor were consulted about a similar study in H7N1. In both cases, the informal risk assessments pointed to the benefits of the studies outweighing the risks. And that, apparently, was that.

The lack of consultation with the NSABB is concerning. It raises questions about what exactly that board is doing about the concerns that define its existence. It also stomps all over a staple of good ethics and regulation: treat like cases alike. Rather than assuming that the similarity of these publications to the earlier ones warrants immediate publication, however, we should remember the history. The NSABB’s decision was controversial, and followed serious deliberation. This should have, at the least, led to these new studies being flagged for review by the NSABB. That the NSABB wasn’t involved is, to me, more problematic than the new publications.

Make no mistake, there will be more research like this: the flu lobby has informed us that this is the case. In the wake of the Tamiflu revelations—that the antiviral stockpiled by governments the world over doesn’t work, and actually causes serious adverse events in children—pressure for gain-of-function research is also likely to increase. As governments scramble to appear to be doing something, I suspect more of these publications will be funded.

But to what end? The arguments haven’t changed since 2011. Predicting viral mutation with an eye to vaccines or surveillance isn’t a good strategy, because what we can create in the lab isn’t necessarily what nature will brew up. It is like playing blind darts: you aren’t likely to succeed, and could put someone’s eye out in the process. The prediction argument can’t be sustained, and it requires a misinterpretation of how viruses work to seem convincing.

Moreover, knowing the specific evolutionary pathway of a virus is hardly the most burdensome part of vaccine production. Vaccines need to be created, tested, stored, distributed, and used. These latter stages are incredibly time-consuming and costly. Gain of function research doesn’t add to these steps in a meaningful way.

Put another way, if the flu lobby were some of my former undergraduate chemistry students, I’d be failing them for not being able to identify the limiting reagent in this reaction, much less give me a valid rate equation.

It also distracts from the fact that the politics and logistics of vaccines don’t make them great tools in the event of a pandemic. And without Tamiflu, our best options are the social and political—and ethically fraught—solutions that rely on people and public health. Isolation; quarantine; surveillance. These raise important rights and privacy issues that haven’t been adequately discussed. I don’t think many people are ready for what quarantine means in the event of a serious outbreak.

The malevolent uses of dual-use research of concern, however, become more acute the less prepared we are. Gain-of-function research needs functioning institutions to benefit us; it only needs us to be unprepared or complacent to harm us. We’re definitely unprepared right now.

I’m not saying we should censor the new H5N1 or H7N1 research. I’ve made it clear elsewhere that we would not be wrong to do so, but in this case that ship has long since sailed. What we need to do now is have a bigger, better conversation about pandemic preparedness, public health, and dual-use.

NB: Written in transit. I’ll endeavour to have more comprehensive links up when I’m back at my computer.