Tag Archives: Fouchier

Ahistorical narratives in a time of science.

[Update: someone at The Atlantic confirmed for me that this was not so much their article as it was run “as part of our partnership with the site Defense One.” Defense One is part of the Atlantic Media group, which owns both publications. Given that Tucker is the science editor for Defense One—where the piece was first published—it isn’t totally clear to me who edited his work for content, other than… himself? Transparency and accountability, anyone?]

Patrick Tucker has a piece in The Atlantic titled “The Next Manhattan Project.” It concerns the current dual-use gain-of-function saga—now the so-called deliberative process about biosafety. It is, in short, a piece of ahistorical fiction. Here’s why—or, here is one list of reasons why.

1) “In January 2012, a team of researchers from the Netherlands and the University of Wisconsin published a paper in the journal Science about airborne transmission of H5N1 influenza, or bird flu, in ferrets.”

False. It was two papers: one in Nature by University of Wisconsin–Madison researchers; one in Science by Dutch researchers. When a writer for The Atlantic can’t Google something that happened three years ago, you can bet the previous century is going to be a challenge.

2) Eschewing the history behind current events: “[the 2012 paper (should be papers)] changed the way the United States and nations around the world approached manmade biological threats.”

False. The controversy—which started in 2011, not 2012—was a continuation of a debate, by then a decade old, about what is now called dual-use research of concern. That debate started in 2001, when a team of Australian researchers published work describing the creation of (in VERY simplistic terms) a super-poxvirus. There was a CIA report, and a NAS committee. Oh, and does anyone remember Amerithrax?

3) “it solved the riddle of how H5N1 became airborne in humans.”

False. Hilariously, the standard defense of the 2012 studies (remember, The Atlantic, plural) is that they don’t show how H5N1 can transmit via aerosolized respiratory droplets in humans. Vincent Racaniello commonly sums this up as “ferrets are not people.” There’s a complexity to animal models that doesn’t lend itself to those kinds of easy conclusions. Solving the riddle wasn’t the end result of these papers (or the papers that followed), and it certainly wasn’t the intent of the researchers.

4) Eschewing the reasons behind the Manhattan Project.

The Manhattan Project has a complex history. A group of independent, politically minded—largely émigré—scientists; a world on the edge of war; a novel and particular scientific discovery with a potentially catastrophic outcome; and a belligerent power (well, powers—the Japanese and Russians had programs, in addition to the Nazis) the scientists had good reason to suspect was pursuing said technology.

The 2012 story has almost no parallel with these contexts—much less an organization with a clearly defined set of ends, or a unilateral mandate with which to achieve those ends. The existential threat in the background of the Manhattan Project is absent here—there is no Nazi power. If we truly considered H5N1 highly pathogenic avian influenza to be an existential threat, our public health systems and scientific endeavors would look totally different.

5) Misrepresenting the classified complex.

Despite it being the single comparison Tucker draws between the 2012 studies (plural) and the Manhattan Project, Tucker treats the classified complex as nothing more than a passing comment. He boils the entire conversation down to “but now the Internet makes classifying things hard.”

Never mind that the classified community was remarkably successful at its job, to the point where it invented ways to create information sharing within an environment of total secrecy. The classified community continues to do its work today—just because we don’t pay much attention to Los Alamos, Oak Ridge, or Lawrence Livermore doesn’t mean they don’t exist.

Tucker also misses some of the human factors that would actually make his claims interesting. Between Fuchs and the Rosenbergs, ye olde security could be compromised in much the same way as it is today: too much trust of the wrong people, and a bit of carelessness inside the confines of a community that thinks itself insulated. If anything, the current debate about dual-use is more about misplaced trust and overconfidence than it is about nukes.

***

These are only five of a variety of problems with Tucker’s article. What bothers me most is that the headline grants a legitimacy to one perspective on the current debate that simply isn’t warranted. These scientists aren’t racing against the clock to avert a catastrophe—and if they are, their methods are questionable at best. The current debate is far more nuanced, and far less certain, than the conversation that went down on Long Island in 1939. And that’s saying something, because the debate then was pretty damned nuanced.

What would the Next Manhattan Project really look like? Lock the best minds in biology in a series of laboratories across the country—or world, that’s cool too. Give them at least $26 billion. And give them charge of creating a cheap, easily deployable, universal flu vaccine.

That’d be great. Or, at least, it’d be much better than The Atlantic’s piece from yesterday.

Lipsitch and Galvani Push Back

COMMENTARY: The case against ‘gain-of-function’ experiments: A reply to Fouchier & Kawaoka

Over at CIDRAP, Marc Lipsitch and Alison P. Galvani have responded to critics—specifically, Ron Fouchier and Yoshihiro Kawaoka—of their recent study in PLoS Medicine. It is a thorough rebuttal of the offhand dismissal that Lipsitch and Galvani have met from Fouchier and Kawaoka and the virology community more generally.

This is a fantastic addition to the dual-use debate. Too often, stock answers given for the benefits of dual-use are put forward without sustained analysis: things like “will help us make new vaccines,” “will help us with disease surveillance,” or “will raise awareness.” Lipsitch and Galvani have drawn up a roadmap of challenges that advocates of gain-of-function studies—specifically those that deal with influenza—must confront in order to justify the public health benefit of their work. We should hold researchers and funding agencies accountable to this kind of burden of proof when it comes to dual-use research.

Dual-use flow chart. Logical structure of the potential lifesaving benefits of PPP experiments, required intermediate steps to achieve those benefits (blue boxes), and key obstacles to achieving those steps highlighted in our original paper (red text). Courtesy Marc Lipsitch, 2014.

Lipsitch and Galvani’s response is also important because it critically addresses the narrative that Fouchier and Kawaoka have woven around their research. This narrative has been bolstered by the researchers’ expertise in virology, but doesn’t meet the standards of biosecurity, science policy, public health, or bioethics analysis. It’s good to see Lipsitch and Galvani push back, and point to inconsistencies in the type of authority that Fouchier and Kawaoka wield.

UPDATE 06/19/14, 16:32: as I posted this, it occurred to me that the diagram Lipsitch and Galvani provide, while useful, is incomplete. That is, Lipsitch and Galvani have—correctly, I believe—illustrated the problems dual-use advocates must respond to within the domain the authors occupy. These are challenges in fields like virology, biology, and epidemiology.

There are other challenges, however, that we could add to this diagram—public health and bioethical, for a start. It’d be a great, interdisciplinary activity to visualize a more complete ecosystem of challenges that face dual-use research, with an eye to presenting avenues forward that address multiple and conflicting perspectives.

What Am I Reading? 8 June 2014

One of the things I’m asked most often by non-philosopher, non-bioethics types is “what exactly is it that you do during the day?” The answer, by and large, is that I read and write. My reading can be pretty diverse and—at times—obscure. Below are a handful of the things I read this week.

Dewey, John. 1929. Experience and Nature. London: George Allen & Unwin, pp. 1–100

My supervisor-to-be and I were talking about American philosophy late last week, when I revealed to him that I’d not read any of John Dewey’s work. Jonathan, a gung-ho pragmatist, recommended I get stuck in to Experience and Nature. So I did. It is—in Jonathan’s own words—turgid, but there are gems.[1]

Grande, David, Sarah E Gollust, Maximilian Pany, Jane Seymour, Adeline Goss, Austin Kilaru, and Zachary Meisel. 2014. “Translating Research for Health Policy: Researchers’ Perceptions and Use of Social Media.” Health Affairs. doi:10.1377/hlthaff.2014.0300.

A paper about the trends in social media use among health and healthcare researchers. Nothing particularly stunning in the conclusions: apparently, older academics don’t like social media much, and in general the healthcare sector is a bit apprehensive about communicating via Twitter.

There’s an interesting tidbit tucked in the bottom of Table 1, however. There was a drop-off in female use of social media in this sector that appears divergent from male use. There isn’t any mention of gender differences in the study, but it certainly seems to stand out.

Moya-Anegón, Félix, and Víctor Herrero-Solana. 2013. “Worldwide Topology of the Scientific Subject Profile: a Macro Approach in the Country Level.” PLoS One 8 (12). Public Library of Science: e83222. doi:10.1371/journal.pone.0083222.

Moya-Anegón and Herrero-Solana present a cluster analysis of publications across different countries. They found three clusters of research and, matching them against country outputs, posited a geographic distribution of research interests.

I thought their discussion could be a bit more robust, however, and would like to see some more work done on the why of particular research outputs in countries. The Eastern Bloc, for example, lost a lot of its talent in the life sciences during the Lysenko Affair; a murderous head of Russian science killing off advocates of gene theory is going to cramp a country’s style in genetics. Building these complex stories into robust and current data would be an excellent addition to the field.

Glass, Jonathan D, Nicholas M Boulis, Karl Johe, Seward B Rutkove, Thais Federici, Meraida Polak, Crystal Kelly, and Eva L Feldman. 2012. “Lumbar Intraspinal Injection of Neural Stem Cells in Patients with Amyotrophic Lateral Sclerosis: Results of a Phase I Trial in 12 Patients.” Stem Cells 30 (6). Wiley Online Library: 1144–51.

Read as part of a post Kelly and I are putting together. More on that soon.

Resnik, David B. 2013. “H5N1 Avian Flu Research and the Ethics of Knowledge.” Hastings Center Report 43 (2). Wiley Online Library: 22–33. doi:10.1002/hast.143.

I’m writing a paper about—surprise, surprise—dual-use, and in no small part am responding to David’s treatment of the so-called “ethics of knowledge.” This, and a range of other papers that I haven’t listed here, were background to that piece.

Sobel, D. 1994. “Full Information Accounts of Well-Being.” Ethics 104 (4): 784–810.

There’s a lot of philosophy out there, and I’m doing my best to make sure that my own accounts of ethics aren’t just reinventing other people’s concerns. If they are, I’d rather just cite them and save the scholarly space for something that’s more contribution-y. Sobel has some interesting stuff on how we account for our own and others’ wellbeing; it’s particularly pertinent for anyone working in economics or social policy (me).

Mableson, Hayley E, Anna Okello, Kim Picozzi, and Susan Christina Welburn. 2014. “Neglected Zoonotic Diseases—the Long and Winding Road to Advocacy.” PLoS Neglected Tropical Diseases 8 (6): e2800. doi:10.1371/journal.pntd.0002800.

Great little article on the action—or lack thereof—of international agencies and governments to recognize or address issues surrounding neglected zoonotic diseases. Discussion centers around the need for advocacy, and what that entails at a high level.

Fouchier, Ron A M, Vincent Munster, Anders Wallensten, Theo M Bestebroer, Sander Herfst, Derek Smith, Guus F Rimmelzwaan, Björn Olsen, and Albert D M E Osterhaus. 2005. “Characterization of a Novel Influenza a Virus Hemagglutinin Subtype (H16) Obtained From Black-Headed Gulls.” Journal of Virology 79: 2814–22.

Reading a paper by Ron Fouchier? Not surprising, considering he’s a frequent subject of my writing (here, here, here).

Many of these titles are available online, for free. For those that aren’t, I’m happy to provide #canhazpdf assistance.


  1. Dewey really had an optimistic belief about the way science worked. This isn’t surprising for the early 20th century. Still, Dewey writes “physicists did not think for a moment of denying the validity of what was found in that experience [provided by the results of the Michelson–Morley experiments], even though it rendered questionable an elaborate intellectual apparatus and system.” That? Isn’t what happened. In fact, there was a litany of experiments that followed, trying to measure the aether flux due to the earth’s motion.  ↩

How safe is a safe lab? Fouchier and H5N1, again

A quick update on gain-of-function; I’m between papers (one submitted, one back from a coauthor with edits), and gain-of-function is back in the news.

Ron Fouchier and Yoshihiro Kawaoka have responded to a new paper on gain-of-function, which appeared in PLoS Medicine last week. The paper, by Marc Lipsitch and Alison P. Galvani, takes three popular conceptions of GOF research to task: 1) that it is safe; 2) that it is helpful in the development of vaccines; and 3) that it is the best option we have for bettering our understanding of flu viruses. I’m going to concentrate, here, on Fouchier’s reply to concerns about safety when performing gain-of-function research.

For context, Lipsitch and Galvani argue that

a moderate research program of ten laboratories at US BSL3 standards for a decade would run a nearly 20% risk of resulting in at least one laboratory-acquired infection, which, in turn, may initiate a chain of transmission. The probability that a laboratory-acquired influenza infection would lead to extensive spread has been estimated to be at least 10%.

A result that Fouchier calls “far fetched.” His reason?

The data cited by Lipsitch and Galvani include only 11 laboratory-acquired infections in the entire USA in 7 years of research, none of which involved viruses, none of which resulted in fatalities, and—most importantly—none of which resulted in secondary cases.

But Fouchier’s objection misses a couple of key points. The data Lipsitch and Galvani use come from a 2012 paper in Applied Biosafety, which looked at the reported number of thefts, losses, or releases (TLRs) of pathogens in American labs from 2004–2010. What they found was that TLRs have been increasing steadily, from 57 in 2007 to 269 in 2010. One reason given by Lipsitch and Galvani is that in 2008, significant investment was made in outreach and education about the need for accurate reporting of TLRs.

The other likely culprit? The proliferation of BSL–3 and BSL–4 laboratories.

Between 2004 and 2010, the number of BSL–3 laboratories in the USA—that we know of—more than tripled, from 415 to 1,495. This is likely an underestimate, however, because there is no requirement to track the number of BSL–3 labs that exist. The number of BSL–4 labs has also increased to 24 worldwide; 6 of these are in the USA. When you get an explosion of labs, you are likely to get a corresponding increase in laboratory accidents. That there have only been eleven laboratory-acquired infections reported from hundreds of releases should bring us some relief; but it doesn’t show that gain-of-function research is safe.

The problem, unacknowledged by Fouchier, is one of scale—as the number of labs increases, the number of experiments with dangerous pathogens (including H5N1) is likely to increase. Laboratory accidents are not a matter of if, but a matter of when. Significantly increasing the number of labs brings that “when” significantly closer. And, in the words of Lipsitch and Galvani,

Such probabilities cannot be ignored when multiplied by the potential devastation of an influenza pandemic, even if the resulting strain were substantially attenuated from the observed virulence of highly pathogenic influenza A/H5N1.
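The “nearly 20%” figure is straightforward compounding of small probabilities. Here is a minimal sketch in Python; the per-lab, per-year infection probability (~0.2%) is my own assumption, back-solved to match Lipsitch and Galvani’s published estimate, not a number taken from the quote itself:

```python
# Sketch of the compounding risk behind the "nearly 20%" figure.
# ASSUMPTION: ~0.2% chance of a laboratory-acquired infection (LAI)
# per lab, per year, chosen to reproduce the published estimate.
p_per_lab_year = 0.002
labs, years = 10, 10   # the "moderate research program" in the quote

# P(at least one LAI) = 1 - P(no LAI in any of the 100 lab-years)
p_at_least_one = 1 - (1 - p_per_lab_year) ** (labs * years)
print(f"P(at least one lab infection) ≈ {p_at_least_one:.1%}")

# Lipsitch and Galvani further estimate a >=10% chance that a
# laboratory-acquired influenza infection spreads extensively.
p_extensive_spread = p_at_least_one * 0.10
print(f"P(extensive spread) ≈ {p_extensive_spread:.1%}")
```

Small per-lab risks compound quickly across many labs and many years, which is exactly the scale problem Fouchier’s reply skips over.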

More problematic is that Fouchier fails to note that laboratory containment failures have killed in the last decade—just not in the USA. In 2004, a researcher from the Chinese National Institute of Virology fell ill with SARS. While ill, she infected a nurse, who infected five others, and her mother, who died.

This represents, contrary to Fouchier’s claim, a laboratory containment failure that involved a virus, did cause secondary (and tertiary) infections, and did kill. The study on which Lipsitch and Galvani base their estimate only covered the USA. A worldwide view is likely to be more concerning, not less. That’s why Lipsitch and Galvani refer to their estimate as “conservative,” and why for years there have been calls for heightened security when it comes to pandemic influenza studies.

Fouchier is right that scientific research has never caused a virus pandemic. But it’s a glaring logical error, to say nothing of hubris, to use that claim to dismiss the concern that it could happen.

There are other issues to be had with the responses of Fouchier and Kawaoka, but the overall message is clear: they continue to dismiss concerns that gain-of-function studies are risky, and provide little in the way of tangible benefits in the form of public health outcomes. I’m hoping for a more thoroughgoing reply from Lipsitch, who earlier this morning expressed some interest in doing so. This is an important debate, and needs to be kept open to a suitable range of voices.

Superflu: reply to Vox

Vox recently ran a piece by Susannah Locke on a new study in Cell, in which scientists showed only five mutations are needed to make H5N1 avian influenza transmissible in ferrets. (I’ve talked a bit about this here.) The research is controversial because it is dual-use: it could be used to advance our understanding of viruses, but it could also be used to create a deadly pandemic. You might remember a similar controversy about H5N1 back in 2011; some of the same players in that saga are involved here. In particular, the group is headed up by Ron Fouchier of Erasmus University.

Dual-use doesn’t get a lot of play in the news, so it is always nice to see some coverage. The article in Vox, however, doesn’t give the full story. In particular, the article doesn’t pay attention to some of the nuances of the 2011-2012 debate that inform—or should inform—thinking on this new research. Vox isn’t taking responses or op-eds from outsiders right now, so instead I’ve made a few notes below.

Stupid, or Simple?

A lot of the trouble back in 2011 was unavoidable: the National Science Advisory Board for Biosecurity (NSABB), for the first time in its history, recommended the partial censorship of two scientific papers based on their potential to enable acts of bioterrorism. This move was always going to sit poorly with the life sciences community, where openness is the norm. And, though the NSABB later reversed their recommendation in response to revised copies of the papers, the episode prompted a year-long moratorium on H5N1 research that would lead to similar types of results, as well as new regulations at the NIH over the funding and pursuit of this so-called “gain-of-function” research.

Part of that furor, however, was caused by Ron Fouchier. Fouchier, who claimed at the 2011 European Scientific Working group on Influenza meeting in Malta that his group had created “probably one of the most dangerous viruses you can make,” also referred to his work as a “really stupid experiment.” These and other bold claims circulated widely during the debate, and were assuredly a source of stress as researchers, government officials, and journalists sought to work through the issues.

Fouchier’s claims were hardly what caused the NSABB’s recommendation. But they certainly stirred up a storm, and brought about a lot more scrutiny of dual-use than normally occurs. Unsurprisingly, Fouchier stopped using such bold rhetoric. But in doing so he further muddied the waters of the public debate, by excessively playing down the risks of his research.

A great example can be found in Fouchier’s attempt to qualify his “really stupid” statement:

In his Malta talk, Fouchier called this a “really stupid” approach, a phrase widely interpreted to mean he regretted it. In fact, he says, he just meant that the technique, called passaging, is a simple one compared to the sophistication of creating targeted mutations. The confusion may have stemmed in part from the fact that the Dutch word for “stupid” can also mean “simple.”

So something was possibly lost in translation. Yet Fouchier’s about-face doesn’t mesh well with other claims he’s made about the risks of his research: for example, that “bioterrorists can’t make this virus [that my team created], it’s too complex, you need a lot of expertise.”

He wants to say that the experiment is simple, but also that it’s too complex for bioterrorists. Yet we already know that bioterror, whether it be committed in Oregon or Japan, is not necessarily the purview of the academy. We also know, from history, that having high technical competence is far from a guarantee of moral probity.

Fouchier also doesn’t address the risks that come from widespread attempts to reproduce these results. A recent article in Slate illustrated how SARS, foot-and-mouth, and H1N1 flu have all escaped their labs. We shouldn’t just be concerned about a bioterrorist brewing a batch of superflu, but also about the significance of run-of-the-mill laboratory accidents.

How dangerous?

In trying to understand the risks of the H5N1 studies, few things are more contested than the case-fatality rate (CFR) of H5N1: the number of people with the disease who die, divided by the number of people who get the disease. The WHO lists 650 reported cases of H5N1, with 386 deaths, giving rise to a CFR of about 60%. That’s a really big number—most influenza pandemics have a CFR of less than 0.1%.

Supporters of the research have insisted that 60% can’t be the true CFR. The reason they give is that there must be more cases we don’t know about; cases that either aren’t reported, or cases where infected individuals don’t display symptoms severe enough to trigger detection. For each case we don’t know about that isn’t fatal, the CFR drops. And if the CFR drops, then gain-of-function research is less of a problem.

Appealing to the CFR, however, runs into two problems. For a start, there simply aren’t—at least as far as we can tell—that many people walking around with H5N1 who aren’t being picked up. A study in 2008 suggested that there may be subclinical infections in Thailand; research in China, however, found that there was little evidence for subclinical infections. An investigation into two villages in Cambodia found that only a small number of individuals tested positive for H5N1: 1% of the sample.

But a bigger problem is that 60% is a huge CFR. “Spanish Flu,” which killed 50–100 million people in 1918, only had a CFR of around 2.5%. Standing a potential human-transmissible H5N1 against Spanish Flu should give people nightmares. But what it means for Fouchier and company is that even if it turned out that for every one case of H5N1 we found, we missed 24 more, we’d still have cause to be worried.
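The arithmetic behind that last claim is worth making explicit: every undetected, non-fatal case dilutes the denominator. A minimal sketch, using the WHO figures above (the undercount ratios in the loop are hypothetical, not estimates from any study):

```python
# How undetected cases would dilute the H5N1 case-fatality rate (CFR).
# WHO figures quoted above: 386 deaths out of 650 reported cases (~60%).
deaths, reported = 386, 650

for missed_per_case in (0, 9, 24):   # hypothetical undercount ratios
    total_cases = reported * (1 + missed_per_case)
    cfr = deaths / total_cases
    print(f"{missed_per_case:>2} missed per reported case: CFR ≈ {cfr:.1%}")
```

Even at 24 missed cases for every reported one, the CFR only falls to about 2.4%—right in the neighborhood of the 1918 pandemic’s 2.5%.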

Power in the platform

This all matters, quite simply, because there is power in the platform. Public debates rise and fall on the data supplied, and who is given the authority to depict a particular debate. Fouchier is an expert; of that there is no doubt. But he is an expert in influenza. Not bioterrorism, not public health, and not dual-use. He’s also been demonstrably confusing when it comes to the ways he presents his research.

It’s troubling, then, that the Vox piece is almost entirely based on Fouchier’s account of what happened—an account which is, as above, sketchy. It’s really good to see dual-use in the news, but on this issue more than one perspective is necessary. The history of dual-use should be written by more than one group.

The Center for Biological Futures over at the Fred Hutchinson Cancer Research Center maintains a good repository of documents relating to the H5N1 dual-use case in 2011-2012.

Gain-of-function: redux

The gain-of-function crowd are at it again. A new study, released in Cell, has described how H5N1 highly pathogenic avian influenza can be engineered to become transmissible between mammals, using five mutations. The new research builds off the 2011-2012 work of Ron Fouchier and Yoshihiro Kawaoka; work that caused a ruckus for its potential for misuse, and led to the National Science Advisory Board for Biosecurity (NSABB) initially recommending the censorship of the papers.

The NSABB hasn’t commented on the Cell study; according to Michael Osterholm, the group hasn’t met in 18 months. Near as I can tell, they didn’t meet or weren’t consulted about a similar study in H7N1. In both cases, the informal risk assessments pointed to the benefits of the studies outweighing the risks. And that, apparently, was that.

The lack of consultation with the NSABB is concerning. It raises questions about what exactly that board is doing about the concerns that define its existence. It also stomps all over a staple of good ethics and regulation: treat like cases alike. Rather than assuming that the likeness of these publications warrants immediate publication, however, we should remember the history. The NSABB’s decision was controversial, and followed serious deliberation. This should have, at the least, led to these new studies being flagged for review by the NSABB. That the NSABB wasn’t involved is, to me, more problematic than the new publications.

Make no mistake, there will be more research like this: the flu lobby has informed us that this is the case. In the wake of the Tamiflu revelations—that the antiviral stockpiled by governments the world over doesn’t work, and actually causes serious adverse events in children—pressure for gain-of-function research is also likely to increase. As governments scramble to appear to be doing something, I suspect more of these publications will be funded.

But to what end? The arguments haven’t changed since 2011. Predicting viral mutation with an eye to vaccines or surveillance isn’t a good strategy, because what we can create in the lab isn’t necessarily what nature will brew up. It is like playing blind darts: you aren’t likely to succeed, and you could put someone’s eye out in the process. This isn’t an argument that can be sustained, and it requires a misinterpretation of how viruses work to seem convincing.

Moreover, knowing the specific evolutionary pathway of a virus is hardly the most burdensome part of vaccine production. Vaccines need to be created, tested, stored, distributed, and used. These latter stages are incredibly time-consuming and costly. Gain of function research doesn’t add to these steps in a meaningful way.

Put another way, if the flu lobby were some of my former undergraduate chemistry students, I’d be failing them for not being able to identify the limiting reagent in this reaction, much less give me a valid rate equation.

It also distracts from the fact that the politics and logistics of vaccines don’t make them great tools in the event of a pandemic. And without Tamiflu, our best options are the social and political—and ethically fraught—solutions that rely on people and public health. Isolation; quarantine; surveillance. These raise important rights and privacy issues that haven’t been adequately discussed. I don’t think many people are ready for what quarantine means in the event of a serious outbreak.

The malevolent uses of dual-use research of concern, however, become more acute the less prepared we are. Gain-of-function research needs an institution to benefit us; it needs us to be unprepared or complacent to harm us. We’re definitely unprepared right now.

I’m not saying we should censor the new H5N1 or H7N1 research. I’ve made it clear elsewhere that we would not be wrong to do so, but in this case that ship has long since sailed. What we need to do now is have a bigger, better conversation about pandemic preparedness, public health, and dual-use.

NB: Written in transit. I’ll endeavour to have more comprehensive links up when I’m back at my computer.

 

Dual-use and the fatality rate of H5N1

The long-windedly titled “Seroprevalence of Antibodies to Highly Pathogenic Avian Influenza A (H5N1) Virus among Close Contacts Exposed to H5N1 Cases, China, 2005–2008,” came out in PLOS ONE this week. It is a good day for people like myself who have concerns about gain-of-function research that seeks to modify—or results in the modification of—highly pathogenic H5N1 avian influenza.

The study’s importance goes back to the controversy in 2011 and 2012 surrounding papers submitted to Science and Nature respectively by Ron Fouchier and Yoshihiro Kawaoka, in which they showed how H5N1 could be modified to transmit between mammals (in this case, ferrets). The papers were identified as cases of dual-use research of concern (DURC):

research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment or materiel (source)

The editors of Science and Nature agreed, initially, to censor the papers at the request of the National Science Advisory Board for Biosecurity. The continuing debate—following the release of modified versions of the papers—turns on a lot of things. Of note, however, is the insistence of virologists such as Morens, Subbarao, and Taubenberger, among others, that:

whatever the case, unless healthy seropositive people detected in seroprevalence studies temporally and geographically associated with H5N1 cases are all falsely seropositive, their addition to exposure denominators greatly decreases case-fatality determinations.

That is, the potential for asymptomatic and undetected H5N1 infections would lead to a far lower case-fatality rate than the current figure, which sits at a staggeringly large 60%. (For context, the 1918 “Spanish” flu that killed 50–100 million people had a case-fatality rate of about 2.5%.)

Convincing the NIH, the NSABB, and the public that the H5N1 studies are safe and laudable exercises relies in part on the claim that the 60% figure isn’t all it is cracked up to be.[1] This new study throws weight behind the concern that H5N1 is really as lethal as it seems, and that it is manifestly dangerous to do things like alter its method of transmission, host range, drug resistance and so on (experiments Fouchier now wants to do on H7N9—see here and here).

Downplaying the risks of H5N1 would be just as irresponsible as it would be to claim, for example, that Fouchier’s lab engineered a supervirus;[2] we need to be mindful of the potential for good and bad uses of this research, and acknowledge the contingencies and assumptions upon which our predictions rely. The “DURC-is-safe” group, as Garrett called them today, have relied on problematising the case fatality rate of H5N1. Support for that type of claim is rapidly shrinking.


  1. In point of fact, in the last article I released on this topic, a reviewer attempted to undermine my argument using exactly such a claim. It is a really common point of contention in the literature.  ↩
  2. Which, incidentally, is what Fouchier was getting at when he said it was “probably one of the most dangerous viruses you can make” and that it was a “stupid” experiment. Words he very quickly went back on once he realised, in the words of Gob Bluth, that “he’d made a huge mistake” by fear-mongering.  ↩