
Dual-use and the fatality rate of H5N1

The long-winded “Seroprevalence of Antibodies to Highly Pathogenic Avian Influenza A (H5N1) Virus among Close Contacts Exposed to H5N1 Cases, China, 2005–2008” came out in PLOS ONE this week. It is a good day for people like me who have concerns about gain-of-function research that seeks to modify, or results in the modification of, highly pathogenic H5N1 avian influenza.

The study’s importance goes back to the controversy in 2011 and 2012 surrounding papers submitted to Science and Nature, respectively, by Ron Fouchier and Yoshihiro Kawaoka, in which they showed how H5N1 could be modified to transmit between mammals (in this case, ferrets). The papers were identified as cases of dual-use research of concern (DURC):

research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment or materiel (source)

The editors of Science and Nature agreed, initially, to censor the papers at the request of the National Science Advisory Board for Biosecurity. The continuing debate—following the release of modified versions of the papers—turns on a lot of things. Of note, however, is the insistence of virologists such as Morens, Subbarao, and Taubenberger, among others, that:

whatever the case, unless healthy seropositive people detected in seroprevalence studies temporally and geographically associated with H5N1 cases are all falsely seropositive, their addition to exposure denominators greatly decreases case-fatality determinations.

That is, the potential for asymptomatic and undetected H5N1 infections would imply a far lower case fatality rate than the current figure, which sits at a staggering 60%. (For context, the 1918 “Spanish” flu that killed 50–100 million people had a case fatality rate of about 2.5%.)
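To see why the denominator matters so much, here is a minimal sketch of the arithmetic in Python. Every number in it is hypothetical, chosen only so that the confirmed-case figures reproduce the roughly 60% rate discussed above; the counts and the function name are mine, not the study’s.

```python
# Illustrative sketch only: all counts are hypothetical, chosen so the
# confirmed-case figures reproduce the ~60% rate discussed in the post.

def case_fatality_rate(deaths: int, infections: int) -> float:
    """Deaths divided by the number of infections counted in the denominator."""
    return deaths / infections

confirmed_cases = 600   # hypothetical lab-confirmed (mostly severe) cases
deaths = 360            # hypothetical deaths, giving the widely cited ~60% CFR

print(f"Confirmed cases only: {case_fatality_rate(deaths, confirmed_cases):.1%}")

# The "DURC-is-safe" argument: seroprevalence studies supposedly reveal many
# mild or asymptomatic infections, which would inflate the denominator.
for undetected in (500, 2_000, 10_000):
    cfr = case_fatality_rate(deaths, confirmed_cases + undetected)
    print(f"+{undetected:>6} undetected infections: CFR = {cfr:.1%}")
```

With 2,000 undetected infections added to the denominator the hypothetical rate falls to roughly 14%, and with 10,000 it drops to about 3%. The significance of the new seroprevalence study is that close contacts of confirmed cases show little evidence of such silent infection, so there is little warrant for inflating the denominator, and the 60% figure stands.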

Convincing the NIH, the NSABB, and the public that the H5N1 studies are safe and laudable exercises relies in part on the claim that the 60% figure isn’t all it’s cracked up to be.[1] This new study lends weight to the concern that H5N1 really is as lethal as it seems, and that it is manifestly dangerous to do things like alter its mode of transmission, host range, or drug resistance (experiments Fouchier now wants to perform on H7N9; see here and here).

Downplaying the risks of H5N1 would be just as irresponsible as claiming, for example, that Fouchier’s lab engineered a supervirus;[2] we need to be mindful of the potential for both good and bad uses of this research, and acknowledge the contingencies and assumptions on which our predictions rely. The “DURC-is-safe” group, as Garrett called them today, have relied on problematising the case fatality rate of H5N1. Support for that kind of claim is rapidly shrinking.


  1. In point of fact, a reviewer of the last article I published on this topic attempted to undermine my argument using exactly such a claim. It is a common point of contention in the literature.  ↩
  2. Which, incidentally, is what Fouchier was getting at when he said it was “probably one of the most dangerous viruses you can make” and that it was a “stupid” experiment. Words he went back on very quickly once he realised, in the words of Gob Bluth, that “he’d made a huge mistake” by fear-mongering.  ↩

Revisiting Dual-Use and Corruption

The US has been busy publishing policies on the regulation of research that has both benevolent and harmful applications, also known as “dual-use” research. The policies focus on dual-use research of concern (or DURC), a fashionable way of identifying the subset of dual-use research that:

based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security. [USG, p. 2]

Two policies on DURC showed up at the end of February, the first being the long-winded “Framework for Guiding U.S. Department of Health and Human Services Funding Decisions about Research Proposals with the Potential for Generating Highly Pathogenic Avian Influenza H5N1 Viruses that are Transmissible among Mammals by Respiratory Droplets.” This framework has been floating around in draft form since December 2012, and I argued in January that we should be concerned about, among other things, corruption. If research that poses a significant risk to human health is diverted to agencies that pursue classified work, we risk a scenario in which classified agencies take up DURC that would be better refused funding, or redesigned to produce more benefit with less risk of misuse. Classified dual-use research in the DoE and CIA has a patchy and controversial history, and the aim of policies that purport to regulate DURC should, if anything, be to reduce the amount of potentially harmful research happening, not to increase it or move it away from public accountability.

Happily, the gain-of-function framework was amended along lines that mitigate the potential for corruption.  But a related problem has appeared with the release for consultation of the “United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern,” a broader policy aimed at assisting federal agencies in making decisions about what dual-use research to fund.

In addressing the review process for dual-use research, the policy takes up the question of what types of groups are suitable as review bodies. This is where the policy should give cause for concern. As it currently stands, review bodies may include:

(1) a committee established for dual use review; (2) an extant committee (such as an Institutional Biosafety Committee [IBC]) … or (3) an externally administered committee (e.g., an IBC or review entity at a neighboring or regional institution or a commercial entity). [USG, p. 11, emphasis mine]

On the face of things, there isn’t anything wrong with extant or external committees involving themselves in the review of dual-use research. But there are certain types of involvement that tend to lead to bad outcomes. The outcomes I’m most concerned with here are those where the stated aims of the policy are corrupted: where the policy ends up serving interests other than the ones it was designed to protect.

When it comes to dual-use, we should be careful about the types of entity we allow to function as review committees. Commercial entities that participate in research in areas typically associated with dual-use life sciences (e.g., virology, synthetic biology) might possess the relevant expert knowledge to make calls on the funding of DURC, but they also have self-interested reasons to ensure DURC is pursued for publication, profit, or patent.

The policy makes an attempt to address this, requiring that any member of the review entity with a direct financial interest be recused, except “to provide specific information requested by the review entity.” But this is a limited safeguard: it still allows considerable involvement by those with direct conflicts of interest, so long as the committee is the one doing the asking. Nor does it cover less direct involvement; one doesn’t need a direct financial stake in a piece of research to have a conflict of interest. Finally, the policy recuses individuals, not groups. So while some members of a commercial entity may have direct financial interests and be recused, there is no provision for excluding a commercial entity, qua group, from functioning as a review committee.

Why is this important? Because the history of the oversight of research is also a history of conflicts of interest and corruption. The last decade has seen a cavalcade of horrors when it comes to the ways that institutions nominally dedicated to the public good are compromised by vested interests. When pharmaceutical companies can employ researchers both as consultants and as investigators on their drugs, conflicts of interest exist. When the Department of Defense can covertly mill weapons-grade anthrax in the name of a biodefence program without public oversight, conflicts exist. And should companies engaged in or benefiting from DURC be allowed to decide for themselves what types of research should or shouldn’t be funded, conflicts and corruption will occur.

All in all, researchers who study DURC should be pleased to see policy emerging that goes beyond the merely advisory capacity of earlier modes of review. But the policy as it stands is worrying: it leaves room for stacking the review process in (unjustified) favour of either government interests or private ones. The debate so far has focussed on the former, but there is also good reason to be suspicious of the latter.