Category Archives: Bioethics

Treating like cases alike: bioethics edition

Crash course in philosophy—treat like cases alike.[1]

That means that if you think a much-hyped article on the health benefits of chocolate is unethical, your reasons for that conclusion ought to apply consistently across relevantly similar cases. Put another way: if you think John Bohannon acted unethically in publishing a study that claimed to show a relationship between chocolate consumption and the success of weight-loss diets, fooling the science journalism world, and if your reasons for thinking this bear on relevantly similar cases, then you ought to arrive at the same conclusions about the ethics of those cases as you do about Bohannon's.

1) For example, you might think—as I do—that whether or not the study was intended as a hoax, human subjects protections ought to have applied. You might worry that there's no mention of research ethics review or informed consent in the journal article that caused the furor, or in any of Bohannon's writings since. Kelly asked Bohannon on io9 what kind of approval, if any, he got for the study. I'll update if anything comes of that.

But if you do have this concern, then such reasoning should lead you to treat this study conducted by Facebook with similar suspicion. We know that no IRB approved the protocol for that study. We know that people didn’t give meaningful consent.[2]

2) Say you are worried about the consequences of a false study being reported widely by the media. It is true that when studies are overhyped, or falsely reported, all kinds of harm can result.[3] One could easily imagine vulnerable people deceived by the hype.

But if you think that, consider that in 2013, 23andMe was marketing a set of diagnostics for the genetic markers of disease in the absence of any demonstrated analytic or clinical validity. They were selling test kits with no proof that the tests could be interpreted in a meaningful way.

I think that the impact of the latter is actually much greater than the former. One is a journalist with a penchant and reputation for playing tricks on scientific journals. The other is a billion-dollar, Google-backed ("we eat DARPA projects for breakfast") industry leader.

But if Bohannon’s actions do meet your threshold for what constitutes an unacceptable risk to others, you are committed to the same concern about any study that poses risks to distant others and reaches that threshold.


If you think Bohannon was out of line, that Facebook was unethical in avoiding oversight, and that it was inappropriate for 23andMe to flout FDA regulations and market their genetic test kits without demonstrating their validity, then you are consistent.

If you don’t, that’s not necessarily a problem. The burden of proof, though, is now on you to show either a) that you hold one of the above reasons, but the cases are relevantly different; or b) that your reasons for thinking Bohannon out of line are sufficiently unique that they don’t apply to the other cases.

Personally? I think the underlying motivation behind a lot of the anger I see online is that scientists, and science communicators, don’t like being made out as fools. That’s understandable. But if that’s the reason, then I don’t think Bohannon is at fault. Journals have an obligation to thoroughly review articles, and Bohannon is notorious for demonstrating just how many predatory journals there are in the scientific landscape today. Journalists have a responsibility to do their jobs and critically examine the work on which they report.

Treat like cases alike, and use this moment for what it should be: a chance to reflect on our reasons for making moral judgements, and on our commitment to promoting ethical science.


  1. Like anything in philosophy, there are subtleties to this. ↩
  2. Don’t you even try to tell me that a EULA is adequate consent for scientific research. Yes, you. You know who you are. I’m watching you.  ↩
  3. Fun fact: that was the subject of my first ever article!  ↩

Comments on the NSABB Meeting on Gain of Function

On May 5, 2015, the National Science Advisory Board for Biosecurity is holding a meeting to review the gain-of-function deliberative process, and to solicit feedback on its draft framework for that process (published April 6).

As part of that meeting, I am presenting public comment on the ethics of the deliberative process. A copy of the handout I provided to the members of the NSABB—updated to correct a couple of typographical errors—is available here.

You can also view my comments live. I am not sure when I’ll be speaking—the public comment sessions are planned for 2:00pm-2:30pm, and again at 3:30pm-3:50pm. If you want to watch me give comment (or the rest of the meeting), the webcast is available here.

Comments on That CRISPR Paper

If you’ve got a pulse and are interested in biology, you’ve probably heard that a team of scientists have reported successfully (well, kinda) conducting germ line editing on human embryos using the CRISPR-Cas9 technique. I started hearing rumors of this study around the time that a moratorium on germ line experiments in humans was being proposed by some Very Big Deals. With confirmation that the study is real, the bioethics and life sciences worlds are all a-twitter (somewhat literally).

There’s an ugly side to the current furor, and a lot of it has to do with the nationality of the research team. Apparently the fact that Chinese researchers conducted the study has given people cause for alarm. That there is straight-up racism, seasoned liberally with some vintage Cold War nonsense; Kelly has gone over this in a lot more detail. I won’t say any more on this, except to remind people that when I teach about unethical research, Nazi Germany and the United States of America account for the overwhelming majority of my examples. So let’s all keep a bit of perspective.

Risks, Benefits, and Arguments

Instead, I want to talk about this paper in the context of risks and benefits, and proposed regulatory action around CRISPR. Let me be clear: I think we need to proceed very carefully with CRISPR technologies, particularly as we approach clinical applications. There was a worry that a group had used CRISPR on human embryos. That worry was vindicated yesterday.

Well, sort of. Almost. Not really?

From where I sit, the central concern is best expressed in terms of the risks of using CRISPR techniques on potentially viable human embryos. Commentary in Nature News highlights this concern perfectly:

Others say that such work crosses an ethical line: researchers warned in Nature in March that because the genetic changes to embryos, known as germ line modification, are heritable, they could have an unpredictable effect on future generations. 

The central premises are that 1) CRISPR studies on viable human embryos could lead to significant genetic changes to the resulting live humans; 2) these genetic changes could have unpredictable effects on those humans; 3) the changes could be propagated through human reproduction; and 4) this propagation of changes could have an unpredictable effect on future generations. The conclusion is that we shouldn’t be conducting studies on viable human embryos until we’ve done a lot more research, and have a better mechanism for ethically conducting such research. I support this argument.

The conclusion doesn’t apply in this case, however, because these embryos weren’t viable. As in, they are never going to result in human beings, and never were. They are “potential human beings” to about the same degree that the Miller-Urey experiment is a potential human being.

[Image: the Miller–Urey experiment, a chemical experiment that simulated the conditions thought at the time to be present on the early Earth, and tested the chemical origin of life.]

Above: not a potential human being.

What the study does show—conclusively—is that the clinical applications of germ line editing require substantial research before they are safe and effective, and that this research should be approached with incredible care. The sequence the scientists attempted to introduce took hold in only a subset of the embryos tested. The embryos that did take the change also produced many off-target mutations (unwanted mutations in the wrong places on the genome). And even when the embryos did show the right mutation, it appeared in only some cells—the resulting embryos were chimeras, in which some cells possessed the mutation and others didn’t.

This experiment shows that you can use CRISPR-Cas9 on a human embryo. But that isn’t really a revolutionary result. What is important is just how marginal the success was in terms of a clinically relevant outcome. The conclusion we should draw is that even starting in vivo testing with viable embryos is not only hazardous (for reasons Very Big Deals have noted), but totally futile relative to less risky, more basic scientific inquiry.

Rather than an ethical firestorm, I view this research as an opportunity. This study is, more or less, proof that a robust, community-centered deliberative process is needed to determine what the goals of future CRISPR research are, and what science is needed, in what order, to get there safely. A moratorium on in vivo testing in viable embryos is a valuable part of this process.

Ahistorical narratives in a time of science.

[Update: someone at The Atlantic confirmed for me that this was not so much their article as it was run “as part of our partnership with the site Defense One.” Defense One is part of the Atlantic Media group, which owns both publications. Tucker is the science editor for Defense One—where the piece was first published—so it isn’t totally clear to me who edited his work for content, other than… himself? Transparency and accountability, anyone?]

Patrick Tucker has a piece in The Atlantic titled “The Next Manhattan Project.” It concerns the current dual-use gain-of-function saga—now the so-called deliberative process about biosafety. It is, in short, a piece of ahistorical fiction. Here’s why—or, here is one list of reasons why.

1) “In January 2012, a team of researchers from the Netherlands and the University of Wisconsin published a paper in the journal Science about airborne transmission of H5N1 influenza, or bird flu, in ferrets.”

False. It was two papers: one in Nature by University of Wisconsin-Madison researchers; one in Science by Dutch researchers. When a writer for The Atlantic can’t Google something that happened three years ago, you can bet the previous century is going to be a challenge.

2) Eschewing the history behind current events: “[the 2012 paper (should be papers)] changed the way the United States and nations around the world approached manmade biological threats.”

False. The controversy (which started in 2011, not 2012) was a continuation of a by-then decade-old debate about what is now called dual-use research of concern. That debate started in 2001, when a team of Australian researchers published work describing the creation of (in VERY simplistic terms) a super-poxvirus. There was a CIA report, and a NAS committee. Oh, and does anyone remember Amerithrax?

3) “it solved the riddle of how H5N1 became airborne in humans.”

False. Hilariously, the standard defense of the 2012 studies (remember, The Atlantic: plural) is that they don’t show how H5N1 can transmit via aerosolized respiratory droplets in humans. Vincent Racaniello commonly refers to this as “ferrets are not people.” There’s a complexity to animal models that doesn’t lend itself to those kinds of easy conclusions. Solving that riddle wasn’t the end result of these papers (or the papers that followed), and it certainly wasn’t the intent of the researchers.

4) Eschewing the reasons behind the Manhattan Project.

The Manhattan Project has a complex history. A group of independent, politically minded—largely emigre—scientists; a world on the edge of war; a novel and particular scientific discovery with a potentially catastrophic outcome; and a belligerent power (well, powers—the Japanese and Russians had programs, in addition to the Nazis) the scientists had good reason to suspect was pursuing said technology.

The 2012 story has almost no parallel with these contexts—much less an organized, clearly defined set of ends, or a unilateral mandate with which to achieve those ends. The existential threat in the background of the Manhattan Project is absent here—there is no Nazi power. If we truly considered H5N1 highly pathogenic avian influenza to be an existential threat, our public health systems and scientific endeavors would look totally different.

5) Misrepresenting the classified complex.

Despite it being the single comparison Tucker draws between the 2012 studies (plural) and the Manhattan Project, Tucker treats the classified complex as nothing more than a passing comment. He boils the entire conversation down to “but now the Internet makes classifying things hard.”

Never mind that the classified community was remarkably successful at its job, to the point where it invented ways to create information sharing within an environment of total secrecy. The classified community continues to do its work today—just because we don’t pay much attention to Los Alamos, Oak Ridge, or Lawrence Livermore doesn’t mean they don’t exist.

Tucker also misses some of the human factors that would actually make his claims interesting. Between Fuchs and the Rosenbergs, ye olde security could be compromised in much the same way as it is today: too much trust of the wrong people, and a bit of carelessness inside the confines of a community that thinks itself insulated. If anything, the current debate about dual-use is more about misplaced trust and overconfidence than it is about nukes.

***

These are only five of a variety of problems with Tucker’s article. What bothers me most is that the headline grants a legitimacy to one perspective on the current debate that simply isn’t warranted. These scientists aren’t racing against the clock to avert a catastrophe—and if they are, their methods are questionable at best. The current debate is far more nuanced, and far less certain, than the conversation that went down on Long Island in 1939. And that’s saying something, because the debate then was pretty damned nuanced.

What would the Next Manhattan Project really look like? Lock the best minds in biology in a series of laboratories across the country—or world, that’s cool too. Give them at least $26 billion. And give them charge of creating a cheap, easily deployable, universal flu vaccine.

That’d be great. Or, at least, it’d be much better than The Atlantic’s piece from yesterday.

Book Interest: A Straw Poll

A question for readers of this blog, and followers on Twitter. If I and two co-editors were to release an interdisciplinary edited collection on the 2013-2015 Ebola Virus Disease Outbreak, would you read it? This collection would cover topics including:

  • Virology;
  • Clinical Medicine;
  • Epidemiology;
  • Ecology;
  • Political Science;
  • Anthropology;
  • Journalism;
  • Health Law;
  • Bioethics.

If this is something that interests you, leave a comment, reply to me on Twitter, or drop me an email at neva9257 [at] Gmail dot com. If you can, please note your country of residence, the field you work in (research discipline, teaching/policy/research/public health, etc.), and what you’d use such a volume for (reference, scholarship, teaching, general interest, coffee table, doorstop, etc.).

This is a project I’ve had in the works for some time, and my colleagues and I are almost at a contract. Demonstrating some interest will get us over that line.

Fetishizing science in a time of Ebola

It didn’t matter to me – I was in it for the science. –GLaDOS.

Science provides us with knowledge. But—for most of us—science and the knowledge it brings isn’t the only or most important thing out there. Modern biomedical ethics is built upon, among other seminal statements, the Declaration of Helsinki, which states:

While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.

That’s one of the reasons it was so concerning to stumble across a new article on Scientia Salon, brainchild of CUNY Professor of Philosophy Massimo Pigliucci, in which it was argued that anyone who wants to receive an experimental treatment for the Ebola virus must enroll in a placebo-controlled, randomized clinical trial. The author, evolutionary biologist Joanna Monti-Masel, claims that

The unethical behavior here…is not doing an experiment, but doing an experiment without using a control group. There should be no compassionate use exceptions. Everybody who wants these treatments should have to enter a randomized trial to have a chance of getting them.

…At least five patients have received a potentially effective treatment, but nobody has yet been assigned to a control group. This is the ethical travesty, and it needs to stop.

For those catching up, the Ebola outbreak that is believed to have started in December of 2013 has infected 1,975 people, and killed 1,069. Last week, the WHO declared that the outbreak constituted a Public Health Emergency of International Concern; this week, a WHO ethics advisory committee released a statement approving the use of unregistered interventions for Ebola. Kelly Hills and I have written on the ethics of the latter here.

Monti-Masel, however, thinks that the WHO is acting unethically by allowing for “compassionate use,” which the WHO is defining here as access to an unapproved drug outside of a clinical trial. Rather, she argues, anyone who wants access to these new drugs must be part of a clinical trial.

And not just any trial. Monti-Masel believes that the only way we ought to collect data about these experimental interventions is by setting up a double-blind, randomized clinical trial in the middle of an epidemic in West Africa. That is, we offer those suffering from Ebola—which is primarily killing vulnerable members of the nations of Sierra Leone, Guinea, Liberia, and Nigeria—something that may or may not be a drug that may or may not work, or may be a sugar pill. We don’t tell them which it is, because we won’t know until after the fact. We then determine who gets better, and who gets worse. And if that isn’t what the patient wants, then they get nothing.

Science for the sake of science

It is worth noting, first up, that Monti-Masel is reacting to a hypothetical storm in a nonexistent teacup. Aside from the WHO ethics committee’s general recommendation that data collection is necessary, but that data must be collected and shared ethically and equitably, there’s nothing to suggest what types of data collection and study design might be permissible. Moving immediately to the need for control groups and randomized trials presumes a lot about the WHO’s mild statement, and jumps the gun on a lot of serious ethical work that has to happen first—work that has to involve the countries actually in the middle of this outbreak.

But more importantly, Monti-Masel fetishizes data collection above and beyond any other consideration. The main benefit she cites is the statistical power of the studies that could be achieved through a randomized trial. All other concerns are secondary, it seems, to the possibility of doing accurate—not good—science.

[Image: GLaDOS]

But that’s wrongheaded; more, it is paternalistic and racist—to say nothing of deeply myopic—to presume that the central purpose of these interventions ought to be to collect data. Collecting data is necessary; collecting data is good. But the ways in which we collect that data, and the quality of data we should aim for, must be subject to the central aim of these interventions: “to try to save the lives of patients and to curb the epidemic.”

Requiring participation in a randomized, placebo-controlled clinical trial in order to access these drugs is deeply paternalistic. It presumes that the priorities defined by Monti-Masel’s hypothetical study are not only the best for the scientist, but the best for everyone. It doesn’t even consider that the people of a country embroiled in an Ebola epidemic might not want to participate in a placebo-controlled trial. Indeed, Monti-Masel claims that compassionate use outside of a clinical trial would be “an ethical travesty” because it would give us lower quality data for lack of a suitable (placebo) control group.

That reduces the patient to a mere data point. Considering the lengthy history of drug trials in African countries that leave the communities in which they are conducted worse off, it is not hard to detect the racism implicit in such an extreme study design. History is rife with contributions “to the legacy of black bodies being manipulated and violated for ‘the benefit of all.’” To be ethical, research—even in the small samples that the limited supply of experimental agents for this outbreak of Ebola will force—must take into account practices that respect the agency and priorities of the participant. People are not simply data points; vulnerable populations are not simply a convenient supply of test subjects.
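To see just how little is on the table scientifically, consider a back-of-envelope power calculation. This is a minimal sketch in Python; the mortality rates and sample sizes are illustrative assumptions of mine, not figures from any actual protocol:

```python
# Back-of-envelope power calculation for a two-arm trial comparing
# mortality proportions (normal approximation, two-sided z-test).
# All figures below are illustrative assumptions, not real trial data.
from math import sqrt

from scipy.stats import norm

def two_proportion_power(p_treat, p_control, n_per_arm, alpha=0.05):
    """Approximate power to detect a difference between two proportions."""
    p_bar = (p_treat + p_control) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)           # SE under H0
    se_alt = sqrt(p_treat * (1 - p_treat) / n_per_arm
                  + p_control * (1 - p_control) / n_per_arm)      # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    effect = abs(p_treat - p_control)
    return norm.cdf((effect - z_crit * se_null) / se_alt)

# Hypothetical: the drug halves mortality from 60% to 30% -- a huge effect.
print(two_proportion_power(0.30, 0.60, n_per_arm=5))   # ~0.15
print(two_proportion_power(0.30, 0.60, n_per_arm=50))  # ~0.87
```

Even granting a drug that halves mortality (an enormous effect), a trial with five patients per arm would detect it only about 15% of the time. The high-quality data being invoked to justify withholding treatment barely exists at these sample sizes.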

Finally, advocating placebo controls simply misses the ethical forest for the trees. Monti-Masel claims:

One final concern is that many Africans are suspicious of Western doctors and experiments, and that their fears will keep them away. That’s fine, at least for now. That’s what the ethical principle of autonomy is about, crystallized in the notion of informed consent.

Rather, we should simply let those prospective patients know that if they want a chance at accessing an intervention, they have to enroll. If not, they are just out of luck—a move that is, as already discussed, paternalistic and racist. This doesn’t concern Monti-Masel, however:

Plenty of infected Africans will probably refuse to take part in the trial. That’s okay, because we don’t have enough treatments to go around anyway.

This completely misses the point of the “suspicion of Western doctors” about which people are concerned. To know why, ask this question instead: what happens when it becomes clear that Western doctors are only offering the hope of drugs if you’ll sign up to their trials? In an outbreak in which people are already avoiding reporting cases, or hiding loved ones who are infected with the virus; for a disease that, historically, has generated mistrust in Western interventions; what does Monti-Masel think the impact of a placebo-controlled trial will be for the outbreak as a whole? I know there are no RCTs on that topic, but any student of history would be able to tell you that things will go downhill fast. Trust is a vital element in a public health intervention. Jeopardizing that trust in the name of a small trial values the data above the lives of everyone in the outbreak.

This is a disappointing article in Scientia Salon. The submission itself is problematic; the comments are horrific. One hopes that in the future the site will look closely at submissions, in light of the enthusiastic endorsement of enforced autopsies that has resulted from this post. There is power in the platform, and in the middle of an outbreak this is not a productive contribution.

Doing better means asking, not telling

Hills and I have already argued, at length, about the ethics of distributing these drugs. We’ve advocated that those “who have traditionally and unjustly held power over African nations, step back and accept our role as people to provide assistance, rather than determine it.” It isn’t surprising that Monti-Masel’s reply has struck the worst possible chord.

Moreover, there is still good science that we should be doing, in partnership with the countries that are the site of this outbreak. Giving up the fetish of the placebo control group doesn’t mean giving up on trials, or good research. It just recognizes that certain kinds of study are unethical to pursue in the context of a dangerous disease outbreak, among a vulnerable population that is prone to exploitation by Western powers.

At the end of the day, we want science to be commensurate with other important values. Making science—one type of science—the be-all and end-all of our ethical considerations takes the very important role of knowledge in promoting human health, wealth, and security, and perverts it. We do not want to be like GLaDOS, the super-intelligent, passive aggressive antagonist of the Portal franchise.

Our motto should never be:

I’ve experiments to run,

There is research to be done;

On the people who are still alive.

A Risk-Benefit Analysis is not a Death Sentence

As Marc Lipsitch states on the Cambridge Working Group site, the CWG reflects a consensus; my personal views do not reflect the views of the group.

When you build a consensus, you often don’t end up with everything you wanted. When a group of very different people forms around a common issue, the outcomes that get devised are heavily moderated by the competing priorities and backgrounds of the participants. Sometimes that leads to stagnation.[1] Other times, it leads to a more reasonable and practical set of priorities. In the case of the Cambridge Working Group, in which I participated as a founding member last month, our Consensus Statement on the Creation of Potential Pandemic Pathogens (PPPs) was the product of deliberation on the types of steps the eighteen founding members could agree on.

For those of you who are just arriving, PPP studies involve the creation of a novel pathogen that could, if released, cause a disease pandemic. In my line of work, PPP studies are a type of “gain of function” study, and are associated with dual-use research—scientific research that can be used to benefit or harm humanity. When it comes to PPP studies, the CWG stated one ultimate goal:

Experiments involving the creation of potential pandemic pathogens should be curtailed until there has been a quantitative, objective and credible assessment of the risks, potential benefits, and opportunities for risk mitigation, as well as comparison against safer experimental approaches.

And one proximate goal in the pursuit of that ultimate goal:

A modern version of the Asilomar process, which engaged scientists in proposing rules to manage research on recombinant DNA, could be a starting point to identify the best approaches to achieve the global public health goals of defeating pandemic disease and assuring the highest level of safety.

In short, we want to ask a question: what are the risks and benefits of PPP studies? To ask that question, we want to convene a meeting. And though we’ve no ability to stop them, we’d really like it if scientists could just, I don’t know, not make any new and improved strains of influenza before we have that meeting. Simple, right? Well, I thought so. Which is why I was surprised when a colleague said this:

Wait what?!

Hyperbole is Not Helping

NewProf is right: *if* we shut down (all) BSL-3/4 labs, there would be nowhere (safe) for people to work on dangerous pathogens like Ebola, or train new people to do the same. The only problem is that no-one—that I know of—is saying that.

First: the CWG statement says nothing about shutting down laboratories. As a consensus statement, it is necessarily limited by pragmatic considerations. The CWG calls for a risk assessment. It calls for collecting data. That data collection is focused on PPP studies, and primarily in the context of influenza research. Even if the CWG were to be looking at Ebola, PPP studies would (I really, really hope) be a very small subset of Ebola research. Of course, NewProf is not concerned only about individual research, but whole labs:

That is, NewProf claims that a CWG-inspired risk assessment would lead to labs shutting down, which would lead to there being “no scientists trained to study/treat/find cures for Ebola.” But that’s equally ludicrous. A risk assessment of a small set of experiments would be unlikely to result in an entire field being unable to perform. In fact, that would be a really bad thing. The risk of that bad thing would be—ought to be—something that informs the risk-benefit analysis of research in the life sciences. Regulation that unduly limits the progress of genuinely (or even plausibly) beneficial research, without providing any additional benefit, would be bad regulation.
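To make “quantitative risk assessment” concrete, here is a minimal sketch of the kind of expected-value arithmetic such an assessment might start from. The model and every number in it are illustrative assumptions of mine, not estimates the CWG (or anyone else) has endorsed:

```python
# Toy expected-value model of the chain: lab escape -> onward
# transmission -> pandemic. Every input is an illustrative placeholder.

def expected_fatalities(p_escape_per_lab_year, lab_years,
                        p_pandemic_given_escape, fatalities_if_pandemic):
    """Expected deaths from a research programme under a simple chain model."""
    # Probability of at least one escape over the whole programme.
    p_any_escape = 1 - (1 - p_escape_per_lab_year) ** lab_years
    return p_any_escape * p_pandemic_given_escape * fatalities_if_pandemic

# Hypothetical inputs: 10 labs for 10 years (100 lab-years), a 0.2% annual
# escape probability per lab, a 10% chance an escape seeds a pandemic, and
# a pandemic on the order of 100 million deaths.
print(expected_fatalities(0.002, 100, 0.1, 1e8))  # ~1.8 million expected deaths
```

The point of a model like this is not the output (garbage in, garbage out), but that it forces every contested quantity (escape rates, transmission probabilities, pandemic severity) into the open, where it can be estimated, disputed, and weighed against the purported benefits of the research.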

Grind Your Axe on Your Own Time

What is most frustrating, however, is how mercenary the whole thing feels. If you are concerned about the Ebola virus, you should be concerned that the public health effort to stem the tide of the virus in West Africa is failing. That a combination of poverty, civil unrest, environmental degradation, failing healthcare, traditional practices, and a (historically justified) mistrust of Western healthcare workers is again the perfect breeding ground for the Ebola virus. You shouldn’t be concerned about a risk-benefit analysis that has been advocated for a particular subset of scientific experiments—with a focus on influenza—that may or may not lead to some outcome in the future. Dual-use research and the Ebola virus, right now, have very little to do with each other. If there comes a time when researchers decide they want to augment the already fearsome pathology caused by the virus with, say, a new and improved transmission mechanism, we should definitely have a discussion about that. That, I think it is uncontroversial to say, would probably be a very bad idea.

A Personal View of Moving Forward

I’ve spent the last few days talking about Ebola, primarily on Twitter (and on other platforms whenever someone asks). I’ve not had a lot of time to talk about the CWG’s statement, or my views on the types of questions we need to ask in putting together a comprehensive picture of the risks and benefits posed by PPP studies. So here are a few thoughts, because it is apparently weighing on people’s minds quite heavily.

I don’t know how many high-containment labs are needed to study the things we need to study in order to improve public health. I know Richard Ebright, in the recent Congressional Subcommittee Hearing on the CDC Anthrax Lab “incident,” mentioned a figure of 50, but I don’t know the basis on which he made that claim. As such, I, personally, wouldn’t back such a number without more information.

I do know that the question of the risks and benefits of PPP studies—and other dual-use research—has been a decade in the making. The purported benefits to health and welfare of gain-of-function research, time and again, fail to meet scrutiny. Something needs to happen. The next step is an empirical, multi-disciplinary analysis of the benefits and risks of the research. It has to be empirical because we need to ground policy in rigorous evidence. It has to be multi-disciplinary because, first, the question itself can’t be answered by one group; and second, the values into which we are inquiring cover more than one set of interests.

That, as I understand it, is what the CWG is moving towards. That’s certainly why I put my name to the Consensus Statement. I’m coming into that risk-assessment process looking for an answer, not presuming one. I’m not looking to undermine any single field of research wholesale. And frankly, I find the use of the current tragedy in West Africa as an argumentative tool pretty distasteful.


  1. The twists and turns of consensus-building are playing out on a grand scale at the current experts meeting of the Biological and Toxins Weapons Convention in Geneva. My colleagues are participating as academics and members of NGOs at the meeting, and you can follow them at #BWCMX. And yes, I’m terribly sad to not be there. Next time, folks.  ↩