Fetishizing science in a time of Ebola

It didn’t matter to me – I was in it for the science. –GLaDOS.

Science provides us knowledge. But—for most—science and the knowledge it brings isn’t the only or most important thing out there. Modern biomedical ethics is built upon, among other seminal statements, the Declaration of Helsinki, which states that:

While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.

That’s one of the reasons it was so concerning to stumble across a new article on Scientia Salon, brainchild of CUNY Professor of Philosophy Massimo Pigliucci, in which it was argued that anyone who wants to receive an experimental treatment for the Ebola virus must enroll in a placebo-controlled, randomized clinical trial. The author, evolutionary biologist Joanna Monti-Masel, claims that

The unethical behavior here…is not doing an experiment, but doing an experiment without using a control group. There should be no compassionate use exceptions. Everybody who wants these treatments should have to enter a randomized trial to have a chance of getting them.

…At least five patients have received a potentially effective treatment, but nobody has yet been assigned to a control group. This is the ethical travesty, and it needs to stop.

For those catching up, the Ebola outbreak that is believed to have started in December of 2013 has infected 1,975 people and killed 1,069. Last week, the WHO declared that the outbreak constituted a Public Health Emergency of International Concern; this week, a WHO ethics advisory committee released a statement approving the use of unregistered interventions for Ebola. Kelly Hills and I have written on the ethics of the latter here.
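As a rough sanity check, the crude case fatality rate implied by those figures can be computed directly; a minimal sketch, using only the outbreak totals cited above:

```python
# Crude case fatality rate (CFR) implied by the outbreak figures above.
# Note: during an ongoing outbreak, the crude CFR understates the final
# rate, since some of the currently infected will still die.
cases = 1975
deaths = 1069

cfr = deaths / cases
print(f"crude CFR: {cfr:.1%}")  # prints "crude CFR: 54.1%"
```

The 56% figure cited later in this collection comes from a different snapshot of the same outbreak; crude ratios like this one shift as new cases and deaths are reported.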

Monti-Masel, however, thinks that the WHO is acting unethically by allowing for “compassionate use,” which the WHO is defining here as access to an unapproved drug outside of a clinical trial. Rather, she argues, anyone who wants access to these new drugs must be part of a clinical trial.

And not just any trial. Monti-Masel believes that the only way we ought to collect data about these experimental interventions is by setting up a double-blind, randomized clinical trial in the middle of an epidemic in West Africa. That is, we offer those suffering from Ebola—which is primarily killing vulnerable members of the nations of Sierra Leone, Guinea, Liberia, and Nigeria—something, which may or may not be a drug that may or may not work, or might be a sugar pill. We don’t tell them which one it is, because we won’t know which it was until after the fact. We then determine who gets better, and which ones get worse. And if that isn’t what the patient wants, then they get nothing.

Science for the sake of science

It is worth noting, first up, that Monti-Masel is reacting to a hypothetical storm in a nonexistent teacup. Aside from the WHO ethics committee’s general recommendation that data collection is necessary, but must be conducted and shared ethically and equitably, there’s nothing to suggest what types of data collection, and study design, might be permissible. Moving immediately to the need for control groups and randomized trials presumes a lot about the WHO’s mild statement, and jumps the gun on a lot of serious ethical work that has to happen first—work that has to involve the countries that are actually in the middle of this outbreak.

But more importantly, Monti-Masel fetishizes data collection above and beyond any other consideration. The main benefit she cites is the statistical power that could be achieved through a randomized trial. All other concerns are secondary, it seems, to the possibility of doing accurate—not good—science.


But that’s wrongheaded; more, it is paternalistic and racist—to say nothing of deeply myopic—to presume that the central purpose of these interventions ought to be to collect data. Collecting data is necessary; collecting data is good. But the ways in which we collect that data, and the quality of data we should aim for, must be subject to the central aim of these interventions: “to try to save the lives of patients and to curb the epidemic.”

Requiring participation in a randomized, placebo-controlled clinical trial in order to access these drugs is deeply paternalistic. It presumes that the priorities defined by Monti-Masel’s hypothetical study are not only the best for the scientist, but the best for everyone. It doesn’t even consider that the people of a country embroiled in an Ebola epidemic might not want to participate in a placebo-controlled trial. Indeed, Monti-Masel claims that compassionate use outside of a clinical trial would be “an ethical travesty” because it would give us lower quality data for lack of a suitable (placebo) control group.

That reduces the patient to a mere data point. Considering the lengthy history of drug testing in African countries that leaves the communities in which drugs are tested worse off, it is not hard to detect the racism implicit in such an extreme study design. History is rife with contributions “to the legacy of black bodies being manipulated and violated for ‘the benefit of all.’” To be ethical, research—even with samples as small as the limited experimental agents available for this outbreak of Ebola allow—must take into account practices that respect the agency and priorities of the participant. People are not simply data points; vulnerable populations are not simply a convenient supply of test subjects.

Finally, advocating placebo controls simply misses the ethical forest for the trees. Monti-Masel claims:

One final concern is that many Africans are suspicious of Western doctors and experiments, and that their fears will keep them away. That’s fine, at least for now. That’s what the ethical principle of autonomy is about, crystallized in the notion of informed consent.

On her view, we should simply let those prospective patients know that if they want a chance at accessing an intervention, they have to enroll. If not, they are just out of luck—a move that is, as already discussed, paternalistic and racist. This doesn’t concern Monti-Masel, however:

Plenty of infected Africans will probably refuse to take part in the trial. That’s okay, because we don’t have enough treatments to go around anyway.

This completely misses the point of the “suspicion of Western doctors” about which people are concerned. To know why, ask this question instead: what happens when it becomes clear that Western doctors are only offering the hope of drugs if you’ll sign up to their trials? In an outbreak in which people are already avoiding reporting cases, or hiding loved ones who are infected with the virus; for a disease that, historically, has generated mistrust in Western interventions; what does Monti-Masel think the impact of a placebo-controlled trial will be for the outbreak as a whole? I know there are no RCTs on that topic, but any student of history would be able to tell you that things will go downhill fast. Trust is a vital element in a public health intervention. Jeopardizing that trust in the name of a small trial values the data above the lives of everyone in the outbreak.

This is a disappointing article in Scientia Salon. The submission itself is problematic; the comments are horrific. One hopes that the site will, in the future, look more closely at submissions, particularly in light of the enthusiastic endorsement of enforced autopsies that this post has generated. There is power in the platform, and in the middle of an outbreak this is not a productive contribution.

Doing better means asking, not telling

Hills and I have already argued, at length, about the ethics of distributing these drugs. We’ve advocated that those “who have traditionally and unjustly held power over African nations, step back and accept our role as people to provide assistance, rather than determine it.” It isn’t surprising that Monti-Masel’s reply has struck the worst possible chord.

Moreover, there is still good science that we should be doing, in partnership with the countries that are the site of this outbreak. Giving up the fetish of the placebo control group doesn’t mean giving up on trials, or good research. It just recognizes that certain kinds of study are unethical to pursue in the context of a dangerous disease outbreak, among a vulnerable population that is prone to exploitation by Western powers.

At the end of the day, we want science to be commensurate with other important values. Making science—one type of science—the be-all and end-all of our ethical considerations takes the very important role of knowledge in promoting human health, wealth, and security, and perverts it. We do not want to be like GLaDOS, the super-intelligent, passive aggressive antagonist of the Portal franchise.

Our motto should never be:

I’ve experiments to run,

There is research to be done;

On the people who are still alive.

Paternalism, Procedure, Precedent: The Ethics of Using Unproven Therapies in an Ebola Outbreak

The World Health Organization convened an ethics panel on Monday to discuss the ethics of using experimental treatments in the current Ebola outbreak in—so far—Guinea, Liberia, Sierra Leone, and Nigeria. The panel has, thankfully, concluded that “it is ethical to offer unproven interventions with as yet unknown efficacy and adverse effects, as potential treatment or prevention.”

Nonetheless, the reasons for this decision aren’t necessarily clear; statements like the one issued today are typically light on the details. My own experience on social media is that there isn’t a lot of clarity on the range of decisions we might take into account in coming to such a decision, and how we decide what’s important. And—believing, as I do, that ethics is as much a conversation as it is a set of decisions—that leaves open significant potential for misunderstanding and misrepresentation.

Kelly and I wrote the following over the weekend, but were sidetracked in posting it by pesky things like scientific accuracy, and the controversy that followed the announcement that a Spanish priest had also received the ZMapp treatment. We post it, as Kelly so beautifully put it, in the hope that it “is useful for answering the questions people who don’t have much background in ethics may have, as well as getting into the cultural zeitgeist for discussions not only about future pandemic situations but also discussions about disparate treatment of people from the Developed vs Developing World.”

[Cross-posted at Life as an Extreme Sport]

A “secret serum.” A vaccine. A cure. A miracle. With the announcement of the use of ZMapp to treat two Americans sick with the Ebola virus with apparently no ill effect, the hum and buzz on social media, commentary websites, and even the 24/7 news cycle, has become one of “should the serum be given to Africa? Will it?” The question has dominated for more than a week, and become something that the World Health Organization feels it needs to address by convening a panel of medical ethics experts to offer an analysis of what should be done.

And the general question about untested cures/vaccines in the event of a disease pandemic is an important one; there are already guidelines for what kind of treatments can and will be made available during a flu pandemic, and it seems quite sensible that a guideline be developed for all potential pandemic pathogens. However, it isn’t a question that is relevant in the current context, because we are already past that.

While people may phrase it as “should the serum be made available?”, that’s not the question actually being asked.

It isn’t the question being asked, because we already know the answer: yes. In this last week, the serum has been made available—to Kent Brantly and Nancy Writebol. The pair of American health care workers have received the ZMapp serum, which, until this past week, had not been tested on any human subjects. We already know that the answer to whether or not the serum should be made available is “yes”–or at least, “yes, to people like us.” [1]

Instead, more specifically, the implicit (and at times very explicit) question being asked is: “should the serum be made available to the West African countries suffering from the current Ebola outbreak?”

Your instinctive response to this might be “yes! This is clearly a matter of equality and justice; the lives of those people suffering from Ebola in Sierra Leone, Guinea, Liberia, and Nigeria[2] are just as important as the Americans who were given the serum.” If so, good reader, then this essay is not addressed to you. After all, you have clearly reached the same conclusion we have: Why shouldn’t—assuming that Mapp can make good on its claims that it can manufacture sufficient quantities of its serum—those who are suffering have care made available? There are risks, to be sure. But those risks were clearly outweighed by the urgency of the situation in which Brantly and Writebol found themselves, and in which hundreds (if not thousands) of people across the Atlantic find themselves now.

And why not? If it is in our power to help those suffering, we ought to do so. Justice and equality are traditions upon which civilizations are based. Adam Smith, better known for his “invisible hand” of the market, believed that societies are only worthy when they are just. Peter Singer has argued that if we can help those in need at no cost to ourselves, we simply must do so. And Thomas Pogge has, for more than a decade now, argued that the historical injustices faced by countries who were preyed on by the West obligate those living today to assist in supporting the health systems of those countries, and bettering the lives of their citizens.

Indeed, the responses to “why not,” which attempt to justify not sharing a treatment already given to two people, have an uncomfortable relationship to the historical injustices that Pogge references. These objections, often made with good intentions, can be broadly broken down into three categories: paternalism, procedure, and precedent.

Paternalism

With paternalistic objections to providing experimental Ebola treatments to the West African nations in need, it’s common to hear concern about corruption, ability to consent, or even a call-back to previous, unethical testing on vulnerable human subjects. Perhaps the most common of these is the belief that all governments in Africa are corrupt maws that inhale non-governmental organization money and aid to prop up an exclusive elite at the expense of the rest of the country. The history of corruption, from the power voids left by departing colonial governments, to the actual problems created by colonialism (complete with unnatural country delineations), is broadly recognized and, importantly, well beyond the scope of a post arguing that it is ethical to allow these countries self-determination.

A more specific ethical concern is that people who are gravely ill are often viewed as being unable to give consent, let alone informed consent. Much of the current outbreak has occurred outside of cities, in rural and remote areas of the affected countries. There are some valid questions about whether or not people can genuinely understand the risks and benefits of a completely experimental treatment if they don’t have anything that easily maps onto the “basic education” system common in the developed world. There is also the fact that when people are gravely ill, they may not be cognizant and aware enough to grant consent, or their families may be so desperate for a cure that they will agree to anything, regardless of risk.

Neither of these issues, though, means that it is impossible for infected patients or their families to make determinations about what kind of care they would like to receive. Part of obtaining informed consent is making sure that risks and benefits are described in a way that makes sense and is understandable to the people making the choices; in this, the autonomy of the individual should be respected, rather than the paternalistic instincts of those in positions of power.

Informed consent also ties into specific and ugly histories involving pharmaceutical testing and African countries. Many countries in Africa, like Zimbabwe and South Africa, have been exploited for AZT trials, experimental hormonal contraceptives, and other drugs and devices. These trials were frequently coercive, and have contributed to the legacy of black bodies being manipulated and violated for “the benefit of all.”

Fears about consent and testing on vulnerable populations are valid; no one is saying to parachute the ZMapp serum down in little chilled coolers for willy-nilly application. We are merely saying that, given the history of testing on vulnerable people and exploiting the populations of African countries, rather than standing in a position of authority saying “let us decide whether or not we will let you make your own decision,” the West should step aside and say “this is your choice.”

The difference here is stark: it’s a matter of forcing an option vs. accepting that the choice is to be made by someone not you (where “you” is “the West”), and that these are full-fledged nations with their own systems and their own choices.

Procedure

Others believe that there is an ethical issue in distributing the serum on a broad scale. Some people claim that the outbreak provides the perfect opportunity to conduct a large, randomized controlled trial (RCT)—the “gold standard” of medical research—of ZMapp. Others believe that such a trial is impossible in the context of a pandemic, and that we should thus hold off from making the drug available for lack of ability to monitor the drug’s efficacy.

We believe both positions are false. By fetishizing the gold standard, we—the USA and the developed world—miss the point of our role as participants in solving this disease outbreak. Standards of evidence required for drug use differ from country to country, even in the developed world. They also differ across contexts and levels of urgency. An RCT would introduce an element of testing to treatment that, given what we know about the justified mistrust that people in developing nations have of the developed world and its experiments, would create more tension in countries wary of Western medical interventions. The potential of an RCT to backfire and jeopardize the provision of care makes it unwise—without changing our conclusion that assistance must be rendered.

But the lack of wisdom in conducting an RCT doesn’t mean we should just throw up our hands and not assist. If the countries responsible for managing the outbreak thought it opportune, a large cohort study could be done where everyone who is able receives treatment, and the results are compared to the history of the virus over time. This might not allow researchers to control every factor, but it would go a long way to showing the efficacy of ZMapp the next time Ebola surfaces.
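To make that comparison concrete, here is a minimal sketch of how such a historical-control analysis might run. This is an illustration only, not a design from the post, and every number in it is hypothetical:

```python
# Hypothetical historical-control comparison: did a treated cohort fare
# better than the historical case fatality rate would predict?
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

historical_cfr = 0.56  # case fatality rate cited for this outbreak
treated = 40           # hypothetical number of treated patients
deaths = 14            # hypothetical deaths among the treated

# One-sided exact test: the probability of seeing this few deaths (or
# fewer) if the treatment changed nothing relative to the historical rate.
p_value = binom_cdf(deaths, treated, historical_cfr)
print(f"observed CFR: {deaths / treated:.0%}, one-sided p = {p_value:.4f}")
```

A design like this cannot control for confounders the way an RCT can—supportive care, outbreak phase, and reporting completeness all shift the baseline—which is exactly the trade-off accepted in exchange for treating everyone who is able to receive treatment.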

Talk of trials, however, misses the point of our role in participation in a more fundamental way. If you aid someone, you do so for their benefit on their terms. We don’t want to mandate data collection “for the good of West Africa.” We should ask the decision makers inside Guinea, Sierra Leone, Nigeria, and Liberia how we can best collect data with them, for them and their benefit as they see it.

The WHO has developed frameworks for this in other disease contexts, such as pandemic influenza.[3] While these need not define the scope of cooperation, the past can serve to guide deliberation on how best to assist. The WHO’s policies, by virtue of being reached through international consensus, give extra weight to the global nature of this cooperation.

Precedent

The WHO can play an extra role in countering the paternalism of intervention by developed nations, while beginning the process of setting a good precedent for future outbreaks. Jeremy Farrar, David Heymann, and Peter Piot have argued that:

“The [WHO] could assist African countries with developing rigorous protocols for the use and study of experimental approaches to treatment and prevention, while coordinating more traditional containment measures. As the only body with the necessary international authority, it must take on this greater leadership role.” [5; emphasis added]

The capacity of the WHO to provide oversight, while still enabling African countries to enact their own protocols and treatment plans, is our best chance to address concerns over oversight of the outbreak. There is no denying that this outbreak is international, and without proper management more countries could see cases of Ebola arise. An international body is better equipped to do this with legitimacy, and without paternalism, than any one country alone. The recent declaration by the WHO that the Ebola outbreak is a Public Health Emergency of International Concern, and its accompanying recommendations, are the perfect backdrop for this type of action.

Compassion, or Colonialism

We have to ask ourselves if we want to continue inflicting the wounds of colonialism on African nations. Insisting that Americans are a “special case” when it comes to a disease with a current CFR of 56% only underscores a persistent mistrust: that people in the West get better treatment and care, because those lives are valued more than lives in the developing parts of the world. This fear can be seen in previous Ebola outbreaks. During the 2000–2001 Ugandan outbreak of Sudan ebolavirus, an existing pervasive belief that Euro-Americans visit Central Africa to harvest body parts for profit was amplified by infectious disease control efforts, resulting in infected individuals and their families running and hiding from medical treatment, magnifying the extent of the outbreak. Many local people during the 2002 outbreak of Zaire ebolavirus in the border areas of Gabon and the Republic of Congo believed that Ebola was a disease invented by the French to eliminate African populations (allowing the French unfettered access to their lands and materials).[4] Currently, people in Liberia are already asking why, if there is no cure for Ebola (the standard response on the ground), the Americans are being cured.[5]

The choice to use the experimental treatment was already made; that genie is well and truly out of the bottle. The question you have to ask yourself is this: can you live with supporting the idea that the lives of Brantly and Writebol are more important than the life of Sheik Umar Khan, the only virologist in Sierra Leone? What about Dr. Samuel Brisbane, the chief medical officer of one of Liberia’s medical centers? How about the other approximately 800 people spread between Nigeria, Sierra Leone, Guinea, and Liberia, all of whom are certainly valued by the people in their lives?

Is this an appeal to emotion? Certainly. Sometimes decisions about what is moral, ethical, right, require seeing the people you’re talking about as people, rather than numbers and statistics on the other side of an ocean. But that emotion can and should build into our conception of ethics and justice. Returning to the legacy of Adam Smith, “the relief of misery for its own sake is an impulse whose justification is a core intuition…of any plausible theory of moral thought.”[6]

We know that Ebola is a disease of missing infrastructure, poverty, and minimal health care systems. No one is suggesting that these countries not be given the help that they are asking for, in containing the disease, in implementing public health strategies, or in having access to any experimental cures and vaccines available. No one denies that there are ethical issues at stake. Nor are we stating a belief that ZMapp–or Tekmira, or any other unproven intervention–will stop the current outbreak. What we are advocating is merely that we, who have traditionally and unjustly held power over African nations, step back and accept our role as people to provide assistance, rather than determine it.

Kelly Hills and Nicholas Evans

[1]: While some light debate may have questioned whether or not Brantly and Writebol had the ability to consent, there has never been a serious question here in the USA of not giving them the serum.

[2]: Not “Africa.”

[3]: World Health Organization. Pandemic influenza preparedness Framework for the sharing of influenza viruses and access to vaccines and other benefits. Geneva, 2011. http://www.who.int/influenza/resources/pip_framework/en/ Accessed 8 August 2014.

[4] Hewlett BS, Hewlett BL. “Ebola, Culture and Politics: The Anthropology of an Emerging Disease,” pp. 57, 77. Thomson Wadsworth; Belmont, California; 2008.

[5] Farrar J, Heymann D, Piot P. Experimental Medicine in a Time of Ebola. Published 6 August 2014: http://online.wsj.com/articles/experimental-medicine-and-african-ebola-1407258551 Accessed 7 August 2014.

[6] Campbell, T. “Poverty as a Violation of Human Rights: Inhumanity, or Injustice?” in Pogge, T., Freedom from Poverty as a Human Right: Who Owes What to the Very Poor? (Oxford, UK: Oxford University Press, 2007)

A Risk-Benefit Analysis is not a Death Sentence

As is stated by Marc Lipsitch on the Cambridge Working Group site, the CWG reflects a consensus; my personal views do not reflect the views of the group. When you build a consensus, you often don’t end up with everything you wanted. When a group of very different people forms around a common issue, the outcomes that get devised are heavily moderated by the competing priorities and backgrounds of the participants. Sometimes that leads to stagnation.[1] Other times, it leads to a more reasonable and practical set of priorities.

In the case of the Cambridge Working Group, in which I participated as a founding member last month, our Consensus Statement on the Creation of Potential Pandemic Pathogens (PPPs) was the product of deliberation on the types of steps the eighteen founding members could agree on. For those of you who are just arriving, PPP studies involve the creation of a novel pathogen that could, if released, cause a disease pandemic. In my line of work, PPP studies are a type of “gain of function” study, and are associated with dual-use research—scientific research that can be used to benefit or harm humanity. When it comes to PPP studies, the CWG stated one ultimate goal:

Experiments involving the creation of potential pandemic pathogens should be curtailed until there has been a quantitative, objective and credible assessment of the risks, potential benefits, and opportunities for risk mitigation, as well as comparison against safer experimental approaches.

And one proximate goal in the pursuit of that ultimate goal:

A modern version of the Asilomar process, which engaged scientists in proposing rules to manage research on recombinant DNA, could be a starting point to identify the best approaches to achieve the global public health goals of defeating pandemic disease and assuring the highest level of safety.

In short, we want to ask a question: what are the risks and benefits of PPP studies? To ask that question, we want to convene a meeting. And though we’ve no ability to stop them, we’d really like it if scientists could just, I don’t know, not make any new and improved strains of influenza before we have that meeting. Simple, right? Well, I thought so. Which is why I was surprised when a colleague said this:

Wait what?!

Hyperbole is Not Helping

NewProf is right: *if* we shut down (all) BSL-3/4 labs, there would be nowhere (safe) for people to work on dangerous pathogens like Ebola, or train new people to do the same. The only problem is that no one—that I know of—is saying that.

First: the CWG statement says nothing about shutting down laboratories. As a consensus statement, it is necessarily limited by pragmatic considerations. The CWG calls for a risk assessment. It calls for collecting data. That data collection is focused on PPP studies, and primarily in the context of influenza research. Even if the CWG were to be looking at Ebola, PPP studies would (I really, really hope) be a very small subset of Ebola research. Of course, NewProf is not concerned only about individual research, but whole labs:

That is, NewProf claims that a CWG-inspired risk assessment would lead to labs shutting down, which would lead to there being “no scientists trained to study/treat/find cures for Ebola.” But that’s equally ludicrous. A risk assessment of a small set of experiments would be unlikely to result in an entire field being unable to perform. In fact, that would be a really bad thing—and the risk of that bad thing is something that should inform the risk-benefit analysis of research in the life sciences. Regulation that unduly limits the progress of genuinely (or even plausibly) beneficial research, without providing any additional benefit, would be bad regulation.

Grind Your Axe on Your Own Time

What is most frustrating, however, is how mercenary the whole thing feels. If you are concerned about the Ebola virus, you should be concerned that the public health effort to stem the tide of the virus in West Africa is failing. That a combination of poverty, civil unrest, environmental degradation, failing healthcare, traditional practices, and a (historically justified) mistrust of Western healthcare workers is again the perfect breeding ground for the Ebola virus. You shouldn’t be concerned about a risk-benefit analysis that has been advocated for a particular subset of scientific experiments—with a focus on influenza—that may or may not lead to some outcome in the future. Dual-use research and the Ebola virus, right now, have very little to do with each other. If there comes a time when researchers decide they want to augment the already fearsome pathology caused by the virus with, say, a new and improved transmission mechanism, we should definitely have a discussion about that. That, I think it is uncontroversial to say, would probably be a very bad idea.

A Personal View of Moving Forward

I’ve been present the last few days talking about Ebola, primarily on Twitter (and on other platforms whenever someone asks). I’ve not had a lot of time to talk about the CWG’s statement, or my views on the types of questions we need to ask in putting together a comprehensive picture of the types of risks and benefits posed by PPP studies. So here are a few thoughts, because it is apparently weighing on people’s minds quite heavily.

I don’t know how many high-containment labs are needed to study the things we need to study in order to improve public health. I know Richard Ebright, in the recent Congressional Subcommittee Hearing on the CDC Anthrax Lab “incident,” mentioned a figure of 50, but I don’t know the basis on which he made that claim. As such, I, personally, wouldn’t back such a number without more information.

I do know that the question of the risks and benefits of PPP studies—and other dual-use research—has been a decade in the making. The purported benefits to health and welfare of gain-of-function research, time and again, fail to meet scrutiny. Something needs to happen.

The next step is an empirical, multi-disciplinary analysis of the benefits and risks of the research. It has to be empirical because we need to ground policy in rigorous evidence. It has to be multi-disciplinary because, first, the question itself can’t be answered by one group; second, the values into which we are inquiring cover more than one set of interests. That, as I understand it, is what the CWG is moving towards. That’s certainly why I put my name to the Consensus Statement.

I’m coming into that risk-assessment process looking for an answer, not presuming one. I’m not looking to undermine any single field of research wholesale. And frankly, I find the use of the current tragedy in West Africa as an argumentative tool pretty distasteful.


  1. The twists and turns of consensus-building are playing out on a grand scale at the current Meeting of Experts of the Biological and Toxin Weapons Convention in Geneva. My colleagues are participating as academics and members of NGOs at the meeting, and you can follow them at #BWCMX. And yes, I’m terribly sad to not be there. Next time, folks.  ↩

National Security and Bioethics: A Reading List

Forty-two years ago, in July 1972, the Tuskegee syphilis study was reported in the Washington Star and New York Times. Yesterday, a Twitter chat hosted by TheDarkSci/National Science and Technology News Service, featuring Professors Ruha Benjamin and Alfiee M. Breland-Noble, discussed bioethics and lingering concerns about medical mistrust in the African-American community. It was an excellent event, and you can read back through it here.[1]

Late in the chat, Marissa Evans expressed a desire to know some more about bioethics and bioterror, and I offered to post some links to engaging books on the topic.

The big problem is that there aren’t that many books that specifically deal with bioterrorism and bioethics. There are a lot of amazing books in peace studies, political science, international relations, history, and sociology on bioterrorism. Insofar as these fields intersect with—and do—bioethics, they are excellent things to read. But a bioethics-specific, bioterror-centric book is a lot rarer.

As such, the readings provided are those that ground the reader in issues that are important to understanding bioterrorism from a bioethical perspective. These include ethical issues involving national security and scientific research, dangerous experiments with vulnerable populations, and the ethics of defending against the threat of bioterror.

The Plutonium Files: America’s Secret Medical Experiments in the Cold War. If you read one book on the way that national security, science, and vulnerable people do not mix, read Eileen Welsome’s account of the so-called “Human Radiation Experiments.” Read about dozens of experiments pursued on African Americans, pregnant women, children with disabilities, and more, in the name of understanding the biological properties of plutonium—the fuel behind atomic bombs. All done behind the great screen of the Atomic Energy Act, because of plutonium’s status as the key to atomic weapons.

Undue Risk: Secret State Experiments on Humans. A book by my current boss, Jonathan D. Moreno, that covers some of the pivotal moments in state experimentation on human beings. The majority of the cases Moreno covers are those pursued in the interests of national security. Particularly in the context of the Cold War, there was a perceived urgent need to marshal basic science in aid of national security. What happened behind the curtain of classification in the name of that security, however, was grim.

Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World–Told from Inside by the Man Who Ran It. Ken Alibek is hardly what you’d call a reliable narrator; then again, I can’t imagine what being part of a crack Soviet bioweaponeer unit would do to a person.[2] Nonetheless, it is a foundational read on the immense bioweapons enterprise that was built from the 1970s until the end of the Cold War.

Innovation, Dual Use, and Security: Managing the Risks of Emerging Biological and Chemical Technologies. The late Jonathan B. Tucker released this edited volume in 2012; while the debate about dual-use in the life sciences has progressed since then, it is still one of the most thoughtful pieces on the topic of bioterrorism, biological warfare, and the governance of the life sciences out there. It is also accessible in a way that policy documents tend not to be. That’s significant, as the book is a policy document: it started out as a report for the Defense Threat Reduction Agency.

This list could be very long, but if I were to pick out a selection of books that I consider essential to my work, these would be among the top of the list.

As an addendum, an argument emerged on the back of the NSTNS chat about whether science is “good.” That’s a huge topic, but it is really important for anyone interested in Science, Technology, Engineering and Mathematics and their intersection with politics and power. As I stated yesterday on Twitter, however, understanding whether “science is good” requires understanding what the “science” bit means. That’s not altogether straightforward.

Giving a recommendation on that issue involves stepping into a large and relatively bitter professional battle. Nonetheless, my first recommendation is always Philip Kitcher’s Science, Truth, and Democracy. Kitcher carefully constructs a model of how agents interact with scientific methods and tools, and in doing so identifies how we should make ethical judgements about scientific research. I don’t think he gets everything right, but that’s kind of a given in philosophy.

So, thousands of pages of reading. You’re welcome, Internet. There will be a test on Monday.


  1. I’ll update later with a link to a Storify that I believe is currently being built around the event.  ↩
  2. Well, I can. It is called “Ken Alibek.”  ↩

That Facebook Study: Update

UPDATE 30 June 2014, 8:00pm ET: Since posting this, Cornell has updated their press release to state that the Army did not fund the Facebook study. Moreover, Cornell has released a statement clarifying that their IRB

concluded that [the authors from Cornell were] not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Where this leaves the study, I’m not sure. But clearly something is amiss: we’re still sans ethical oversight, but now with added misinformation.

 ***

So there’s a lot of news flying around at the moment about the study “Experimental evidence of massive-scale emotional contagion through social networks,” also known as That Facebook Study. Questions are being asked about the ethics of the study; while I want to post a bit more on that issue later, a couple of facts for those following along.

Chris Levesque pointed me to a Cornell University press release noting that the study in question received funding from the US Army Research Office. That means the study did receive federal funding; receipt of federal funding comes with a requirement of ethics oversight, and compliance with the Common Rule. It is also worth noting that the US Army Research Office has their own guidelines for research involving human subjects:

Research using human subjects may not begin until the U.S. Army Surgeon General’s Human Subjects Research Review Board (HSRRB) approves the protocol [Article 13, Agency Specific Requirements]

and

Unless otherwise provided for in this grant, the recipient is expressly forbidden to use or subcontract or subgrant for the use of human subjects in any manner whatsoever [Article 30, "General Terms and Conditions for Grant Awards to For-Profit Organizations"]

***

I’ve also been in touch with Susan Fiske, the editor of the study. Apparently, the Institutional Review Board (IRB) that approved the work is Cornell’s IRB. That IRB found the study to be ethical:

on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. The Cornell IRB considered it a pre-existing dataset because [Facebook] continually creates these interventions, as allowed by the user agreement (Personal Communication, Fiske, 2014).*

So, there’s some clarification.

Still, I can’t buy the Cornell IRB’s justification, at least on Fiske’s recounting. Manipulating a user’s timeline with the express purpose of changing the user’s mental state is, to me, a far cry from business as usual. Moreover, I’m really hesitant to call an updating Facebook feed a “pre-existing dataset.” Finally, better people than I have talked about the lack of justification the Facebook user agreement provides.

This information, I hope, clarifies a couple of outstanding issues in the debate so far. Personally, I’d still like to see a lot more information about the kind of oversight this study received, and more details on the Cornell IRB’s analysis.

* Professor Fiske gave her consent to be quoted in this post.

Lipsitch and Galvani Push Back

COMMENTARY: The case against ‘gain-of-function’ experiments: A reply to Fouchier & Kawaoka

Over at CIDRAP, Marc Lipsitch and Alison P. Galvani have responded to critics—specifically, Ron Fouchier and Yoshihiro Kawaoka—of their recent study in PLoS Medicine. It is a thorough rebuttal of the offhand dismissal that Lipsitch and Galvani have met from Fouchier and Kawaoka and the virology community more generally.

This is a fantastic addition to the dual-use debate. Too often, stock answers for the benefits of dual-use research are put forward without sustained analysis: things like “will help us make new vaccines,” “will help us with disease surveillance,” or “will raise awareness.” Lipsitch and Galvani have drawn up a roadmap of challenges that advocates of gain-of-function studies—specifically those that deal with influenza—must confront in order to justify the public health benefit of their work. We should hold researchers and funding agencies accountable to this kind of burden of proof when it comes to dual-use research.

Dual-use flow chart. Logical structure of the potential lifesaving benefits of PPP experiments, required intermediate steps to achieve those benefits (blue boxes), and key obstacles to achieving those steps highlighted in our original paper (red text). Courtesy Marc Lipsitch, 2014.


Lipsitch and Galvani’s response is also important because it critically addresses the narrative that Fouchier and Kawaoka have woven around their research. This narrative has been bolstered by the researchers’ expertise in virology, but doesn’t meet the standards of biosecurity, science policy, public health, or bioethics analysis. It’s good to see Lipsitch and Galvani push back, and point to inconsistencies in the type of authority that Fouchier and Kawaoka wield.

UPDATE 06/19/14, 16:32: As I posted this, it occurred to me that the diagram Lipsitch and Galvani provide, while useful, is incomplete. That is, Lipsitch and Galvani have—correctly, I believe—illustrated the problems dual-use advocates must respond to in the domains the authors occupy. These are challenges in fields like virology, biology, and epidemiology.

There are other challenges, however, that we could add to this diagram—public health and bioethical, for a start. It’d be a great, interdisciplinary activity to visualize a more complete ecosystem of challenges that face dual-use research, with an eye to presenting avenues forward that address multiple and conflicting perspectives.

How not to critique: a case study

The original title for this piece was “How not to critique in bioethics,” but Kelly pointed out that this episode of TWiV is a case study in how not to go about critiquing anything.

Last Monday I was drawn into a conversation/angry rant about an article by Lynn C. Klotz and Edward J. Sylvester, which appeared in the Bulletin of the Atomic Scientists…in 2012. After briefly forgetting one of the cardinal rules of the internet—check the date stamp—I realized the error of my ways, and started to inquire with my fellow ranters, in particular Matt Freiman, about why a 2012 article suddenly had virologists up in arms.

Turns out that the Bulletin article was cited by a study on dual-use authored by Marc Lipsitch and Alison P. Galvani; a study that was the subject of a recent post of mine. The Bulletin article draws from a working paper in which the authors provide an estimate of the number of laboratory accidents involving dangerous pathogens we should expect as a function of hours spent in the laboratory. Lipsitch and Galvani use this figure in their analysis of potential pandemic pathogens (PPPs).

Freiman joined Vincent Racaniello, Dickson Despommier, Alan Dove, and Kathy Spindler on This Week in Virology (TWiV) on June 1 to talk about (among other things) Lipsitch and Galvani’s research. What followed is a case study in how not to critique a paper; the hosts served up a platter of incorrect statements, bad reasoning, and some all-out personal attacks.

I’d started writing a blow-by-blow account of the entire segment, but that quickly mushroomed into 5,000-odd words. There is simply too much to talk about—all of it bad. So there’s a draft of a paper on the importance of good science communication on my desk now, that I’ll submit to a journal in the near future. Instead, I’m going to pick up just one particular aspect of the segment that I feel demonstrates the character of TWiV’s critique.

“It’s a bad opinion; that’s my view.”

Despommier, at 58:30 of the podcast, takes issue with this sentence in the PLoS Medicine paper:

The H1N1 influenza strain responsible for significant morbidity and mortality around the world from 1977 to 2009 is thought to have originated from a laboratory accident.

The problem, according to Despommier, is that “thought to have originated” apparently sounds so vague as to be meaningless. This leads to a rousing pile-on conversation in which Despommier claims that he could have just as easily claimed that the 1977 flu came from Middle East Respiratory Syndrome because “he thought it;” he also claims that on the basis of this sentence alone he’d have rejected the article from publication. Finally, he dismisses the citation given in the article as unreliable because it is a review article,[1] and “you can say anything in a review article.”

At the same time, Dove notes that “when you’re on the editorial board of the journal you can avoid [having your paper rejected].” The implication here is that Lipsitch, as a member of the editorial board of PLoS Medicine, must have used that position to get his article to print despite the alleged inaccuracy that has Despommier so riled up. Racaniello notes that “[statements like this are] often done in this opinion–” his full meaning is interrupted by Despommier. It’s a common theme throughout the podcast, though, that Lipsitch and Galvani’s article is mere “opinion,” and thus invalid.

Facts first

If he’d done his homework, Despommier would have noted that the review article cited by Lipsitch and Galvani doesn’t mention a lab. What it does say is:

There is no evidence for integration of influenza genetic material into the host genome, leaving the most likely explanation that in 1977 the H1N1 virus was reintroduced to humans from a frozen source.[2]

So Lipsitch and Galvani do make an apparent leap from “frozen source” to “lab freezer.” Despommier doesn’t pick that up. If he had, however, it would have given us pause about whether or not it is a valid move to jump from “frozen source” to “laboratory freezer.”

Not a long pause, however; there are other sources that argue that the 1977 strain is likely to have come from a laboratory.[3] The other alternative—that the virus survived in Siberian lake ice—was put forward in a 2006 paper (note, after the publication of the review article used by Lipsitch and Galvani), but that paper was found to be methodologically flawed.[4] Laboratory release remains the most plausible answer to date.

The belief that the 1977 flu originated from frozen laboratory sources is widely held. Even Racaniello—at least in 2009—held this view, arguing that of multiple theories about the origin of the 1977 virus, “only one was compelling”:

…it is possible that the 1950 H1N1 influenza virus was truly frozen in nature or elsewhere and that such a strain was only recently introduced into man.

The suggestion is clear: the virus was frozen in a laboratory freezer since 1950, and was released, either by intent or accident, in 1977. This possibility has been denied by Chinese and Russian scientists, but remains to this day the only scientifically plausible explanation.

So no, there is no smoking gun that confirms, with absolutely unwavering certainty, that the 1977 flu emerged from a lab. But there is evidence: this is far from an “opinion,” and is far from simply making up a story for the sake of an argument. Lipsitch and Galvani were right to write “…it is thought,” because a plausible answer doesn’t make for unshakeable proof—but their claim stands on the existing literature.

Science and policy

The idea that Lipsitch and Galvani’s piece is somehow merely “opinion” is a hallmark of the discussion in TWiV. Never mind that the piece was an externally peer-reviewed, noncommissioned piece of work.[5] As far as TWiV is concerned, it seems that if it isn’t Science, it doesn’t count. Everything else is mere opinion.

But that isn’t how ethics, or policy, works. In ethics we construct arguments, argue about the interpretation of facts and values, and use that to guide action. With rare exception, few believe that we can draw conclusions about what we ought to do straight from an experiment.

In policy, we have to set regulations and guidelines with the information at hand—a policy that waits for unshakeable proof is a policy that never makes it to committee. Is there some question about the true nature of the 1977 flu, or the risks of outbreaks resulting from lapses in BSL–3 laboratory safety? You bet there is. We should continue to do research on these issues. We also have to make a decision, and the level of certainty the TWiV hosts seem to desire isn’t plausible.

Authority and Responsibility

This podcast was irresponsible. The hosts, in their haste to pan Lipsitch and Galvani’s work, overstated their case and then some. Dove also accused Lipsitch of research misconduct. I’m not sure what the rest of the editors at PLoS Medicine think of the claim—passive aggressive as it was—that one of their colleagues may have corrupted the review process, but I’d love to find out.

The podcast is also deeply unethical, because of the power in the platform. Racaniello, in 2010, wrote:

Who listens to TWiV? Five to ten thousand people download each episode, including high school, college, and graduate students, medical students, post-docs, professors in many fields, information technology professionals, health care physicians, nurses, emergency medicine technicians, and nonprofessionals: sanitation workers, painters, and laborers from all over the world.[6]

What that number looks like in 2014, I have no idea. I do know, however, that a 5,000–10,000 person listenership, from a decorated virologist and his equally prestigious colleagues, is a pretty decent haul. That doesn’t include, mind you, the people who read Racaniello’s blog, articles, or textbook; who listen to the other podcasts in the TWiV family, or follow the other hosts in other fora.

These people have authority, by virtue of their positions, affiliations, exposure, and followings. The hosts of TWiV have failed to discharge their authority with any kind of responsibility.[7] I know the TWiV format is designed to be “informal,” but there’s a marked difference between being informal, and being unprofessional.

Scientists should—must—be part of the conversation about dual-use, as with other important ethical and scientific issues. Nothing here is intended to suggest otherwise. Scientists do, however, have to exercise their speech and conduct responsibly. This should be an example of what not to do.

Final Notes

I want to finish with a comment on two acts that don’t feature in Despommier’s comments and what followed, but are absolutely vital to note. The first is that during the podcast, the paper by Lipsitch and Galvani is frequently referred to as “his” paper. Not “their” paper. Apparently recognizing the second—female—author isn’t a priority for the hosts or guests.

Also, Dove and others have used Do Not Link (“link without improving ‘their’ search engine position”) on the TWiV website for both the paper by Lipsitch and Galvani, and supporting materials. So not only do the hosts and guests of the show feel that the paper is without merit; they believe that to the point that they’d deny the authors—and the journal—traffic. Personally, I think that’s obscenely petty, but I’ll leave that for a later post.

Science needs critique to function. Critique can be heated—justifiably so. But it also needs to be accurate. This podcast is a textbook example of how not to mount a critique.


  1. Webster, Robert G, William J Bean, Owen T Gorman, Thomas M Chambers, and Yoshihiro Kawaoka. 1992. “Evolution and Ecology of Influenza A Viruses.” Microbiological Reviews 56 (1): 152–79.  ↩
  2. ibid., p.171.  ↩
  3. Ennis, Francis A. 1978. “Influenza A Viruses: Shaking Out Our Shibboleths.” Nature 274 (5669): 309–10. doi:10.1038/274309b0; Nakajima, Katsuhisa, Ulrich Desselberger, and Peter Palese. 1978. “Recent Human Influenza A (H1N1) Viruses Are Closely Related Genetically to Strains Isolated in 1950.” Nature 274 (5669): 334–39. doi:10.1038/274334a0; Wertheim, Joel O. 2010. “The Re-Emergence of H1N1 Influenza Virus in 1977: a Cautionary Tale for Estimating Divergence Times Using Biologically Unrealistic Sampling Dates.” PLoS One 5 (6): e11184. doi:10.1371/journal.pone.0011184; Zimmer, Shanta M, and Donald S Burke. 2009. “Historical Perspective — Emergence of Influenza A (H1N1) Viruses.” New England Journal of Medicine 361 (3): 279–85. doi:10.1056/NEJMra0904322.  ↩
  4. Worobey, M. 2008. “Phylogenetic Evidence Against Evolutionary Stasis and Natural Abiotic Reservoirs of Influenza A Virus.” Journal of Virology 82 (7): 3769–74. doi:10.1128/JVI.02207-07; Zhang, G, D Shoham, D Gilichinsky, and S Davydov. 2007. “Erratum: Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 81 (5): 2538; Zhang, G, D Shoham, D Gilichinsky, S Davydov, J D Castello, and S O Rogers. 2006. “Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 80 (24): 12229–35. doi:10.1128/JVI.00986-06.  ↩
  5. I’m aware that peer review is not sufficient to make a work reliable, but absent evidence that the review process was somehow corrupt or deficient, it’s a far cry from mere opinion.  ↩
  6. Racaniello, Vincent R. 2010. “Social Media and Microbiology Education.” PLoS Pathogens 6 (10). Public Library of Science: e1001095.  ↩
  7. Evans, Nicholas G. 2010. “Speak No Evil: Scientists, Responsibility, and the Public Understanding of Science.” NanoEthics 4 (3): 215–20. doi:10.1007/s11569-010-0101-z.  ↩