Trigger Warnings

The semester is just about to start at the University of Massachusetts Lowell (UML), where I’ll be teaching Engineering Ethics.* I won’t be teaching military ethics this semester. If I were, there would be trigger warnings all over that class.**


Because, among other reasons to add trigger warnings to a class, Lowell is home to the second-largest Cambodian diaspora in the United States, and one of the largest in the world. And I know that UML enrolls a lot of local students.

So when I talk about genocide, about war crimes, about intrastate violence, you can bet I need to be aware that there are most likely students in my class who fled, or whose parents fled, the Khmer Rouge.

That won’t stop me from talking about those issues. Not at all. But people need to be prepared for some things; hell, people’s families might need preparation. That doesn’t infringe on my academic freedom one iota.

I’m not a clinical psychologist, so it’s not my job to judge just how much exposure people can or should receive around their trauma. Moreover, not one person in my class consented to treatment—education isn’t therapy.

So if you can’t wrap your imagination around why trigger warnings might be necessary, why don’t you start by thinking about people who survived genocide.

*I’ll be including trigger warnings in Engineering Ethics as well, because I’ll be talking about rape and sexual assault in the profession of engineering.

**And you should get trigger warnings anyway in military ethics, because just about everything we discuss in that class concerns the worst things you can do to people, individually or in groups.

Treating like cases alike: bioethics edition

Crash course in philosophy—treat like cases alike.[1]

That means that if you think a much-hyped article on the health benefits of chocolate is unethical, your reasons for that conclusion ought to apply consistently across other similar cases. Put another way: if you think John Bohannon acted unethically in publishing a study that claimed to show a relationship between the success of weight-loss diets and chocolate consumption, fooling the science journalism world, and if your reasons bear on relevantly similar cases, then you ought to arrive at similar conclusions about the ethics of those cases as you do about Bohannon.
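
To make the principle concrete, here is a toy sketch in Python. This is purely my illustration, and the feature set and verdicts are placeholders rather than a serious moral theory: a verdict function that sees only the morally relevant features of a case must return the same verdict for any two cases that share those features, whatever names we attach to them.

```python
# Toy illustration of "treat like cases alike" (placeholder features,
# not a serious moral theory): the verdict depends only on the morally
# relevant features of a case, never on whose case it is.

from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    name: str          # who did it; deliberately ignored by verdict()
    oversight: bool    # was there research ethics review / IRB approval?
    consent: bool      # did subjects give meaningful informed consent?
    harm_risk: str     # "low", "moderate", or "high"

def verdict(case: Case) -> str:
    """Judge on the relevant features only; case.name never enters."""
    if not case.oversight or not case.consent:
        return "unethical"
    return "acceptable" if case.harm_risk == "low" else "needs scrutiny"

# Two cases with the same relevant features must get the same verdict.
bohannon = Case("chocolate hoax", oversight=False, consent=False, harm_risk="moderate")
facebook = Case("emotion study", oversight=False, consent=False, harm_risk="moderate")
assert verdict(bohannon) == verdict(facebook)
```

The numbered points below instantiate exactly this structure: pick the features you actually care about, and check whether your verdicts stay consistent across Bohannon, Facebook, and 23andMe.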

1) For example, you might think (as I do) that whether or not the study was intended as a hoax, human subjects protections ought to apply. You might worry that there’s no mention of research ethics review or informed consent in the journal article that caused the furor, or in any of Bohannon’s writings since. Kelly asked Bohannon on io9 what kind of approval he got for the study, if any. I’ll update if anything comes of that.

But if you do have this concern, then such reasoning should lead you to treat this study conducted by Facebook with similar suspicion. We know that no IRB approved the protocol for that study. We know that people didn’t give meaningful consent.[2]

2) Say you are worried about the consequences of a false study being reported widely by the media. It is true that when studies are overhyped or falsely reported, all kinds of harm can result.[3] One could easily imagine vulnerable people deceived by the hype.

But if you think that, consider that in 2013, 23andMe was marketing a set of diagnostics for the genetic markers of disease in the absence of any demonstrated analytic or clinical validity. They were selling test kits with no proof that the tests could be interpreted in a meaningful way.

I think that the impact of the latter is actually much greater than that of the former. One is a journalist with a penchant and reputation for playing tricks on scientific journals. The other is a billion-dollar, Google-backed (“we eat DARPA projects for breakfast”) industry leader.

But if Bohannon’s actions do meet your threshold for what constitutes an unacceptable risk to others, you are committed to the same concern about any study that poses risks to distant others above that threshold.

If you think Bohannon was out of line, that Facebook was unethical in avoiding oversight, and that it was inappropriate for 23andMe to flout FDA regulations and market its genetic test kits without demonstrating their validity, then you are consistent.

If you don’t, that’s not necessarily a problem. The burden of proof, though, is now on you to show either a) that you hold one of the above reasons, but the cases are relevantly different; or b) that your reasons for thinking Bohannon out of line are sufficiently unique that they don’t apply in the other cases.

Personally? I think the underlying motivation behind a lot of the anger I see online is that scientists, and science communicators, don’t like being made to look like fools. That’s understandable. But if that’s the reason, then I don’t think Bohannon is at fault. Journals have an obligation to thoroughly review the articles they publish, and Bohannon is notorious for demonstrating just how many predatory journals populate the scientific landscape today. Journalists have a responsibility to do their jobs and critically examine the work on which they report.

Treat like cases alike, and use this moment for what it should be: a chance to reflect on our reasons for making moral judgments, and our commitment to promoting ethical science.

  1. As with anything in philosophy, there are subtleties to this. ↩
  2. Don’t you even try to tell me that a EULA is adequate consent for scientific research. Yes, you. You know who you are. I’m watching you.  ↩
  3. Fun fact: that was the subject of my first ever article!  ↩

Comments on the NSABB Meeting on Gain of Function

On May 5, 2015, the National Science Advisory Board for Biosecurity held a meeting to review the gain-of-function deliberative process and to solicit feedback on its draft framework for that process (published April 6).

As part of that meeting, I am presenting public comment on the ethics of the deliberative process. A copy of the handout I provided to the members of the NSABB—updated to correct a couple of typographical errors—is available here.

You can also view the webcast of my comments live. I am not sure exactly when I’ll be speaking; the public comment sessions are planned for 2:00pm-2:30pm, and again at 3:30pm-3:50pm. If you want to watch me give comment (or watch the rest of the meeting), the webcast is available here.

PLoS: Revolution, or Mere Brand?

Late last week I spoke with Dr. Danielle N. Lee about having a paper rejected from a scientific journal—from the policy section of a scientific journal—on the grounds that it wasn’t considered sufficiently interesting. The topic was a US government process examining the risks and benefits of “gain-of-function research resulting in the creation of potential pandemic pathogens.” My paper was an examination of the ethical issues associated with the process, including the issue of representation in the governing bodies that framed, pursued, and answered the policy process.

The publication that rejected this paper was PLoS Medicine.

The editorial committee of that journal noted that my article:

provides useful insight into topics, such as competing interests and how they might be managed…Unfortunately, we do feel that your article will be of most interest to those currently involved in debates around [dual-use research of concern] and not the wider audience that reads PLOS Medicine.

The abstract of my article, which included mention of competing interests and explicitly noted the ethical focus of the paper, received a favorable presubmission inquiry. I was faithful to the limits the editor assigned to my paper, but at the end of the day the journal decided that the article simply wasn’t relevant to the PLoS audience.

I’m not terribly upset about the rejection: it is part of the business of writing for journals. And I have no problem with a journal’s lack of interest in my topic; that happens all the time. The issue, however, is that PLoS has published, and continues to publish, articles on this topic written by scientists (not bioethicists or policy analysts), including one just this month. So it isn’t the topic that is uninteresting. Rather, it is a close examination of the topic, from a perspective that isn’t already enmeshed in the dominant narrative of science, that isn’t interesting. Scientists, according to PLoS, aren’t interested in a critique of representation and ethics in science and science policy, especially, I gather, if that critique comes from outside the scientific establishment.

This presents a conflict of interest for scientific publications. Journals have very little incentive to challenge the governments and funding agencies that are responsible for the research that fills their pages. They have even less incentive to publish work that challenges the commitments of their readership. It is a more or less heroic act for a journal to publish an article that takes its readership to task; the first example that comes to mind is the New England Journal of Medicine article authored by Henry Beecher (paywall), calling out unethical medical experiments that occurred in the twenty years following World War Two.

Instead, the bioethics and health policy articles that reach the scientific community are largely situated as op-eds (reinforcing the opinion that bioethics and health policy are “mere opinion”), privilege scientists and physicians as authors, are limited in their scope, and are likely to be conservative relative to the values of the scientific community. Everything else is likely to be relegated to journals that the scientific community doesn’t read. That’s not exactly a recipe for progress.

PLoS has the capacity to change that, but I fear that it won’t. For all that it is marketed as a revolution, open access doesn’t change scientists, and thus is unlikely to change science. Removing an access barrier doesn’t mean that the culture embedded within the system must change with it.

I wasn’t planning on publishing this until I’d finished a couple of other projects, but today it was revealed that a peer reviewer (a single peer reviewer) caused a paper under review in the PLoS family to be rejected on sexist and inappropriate grounds. The article, authored by Fiona Ingleby (University of Sussex) and Megan Head (Australian National University), investigated gender differences in Ph.D.-to-postdoctoral transitions. The charming review suggested, among other things, that the paper needed male authors.

It seems that, beyond having a problem with ethics and ethicists, PLoS isn’t capable of holding a sustained conversation about the social, ethical, and political structure of science. Its editorial process allowed a paper to be rejected, under peer review, because there weren’t enough male authors. In many ways, PLoS’ open-access policy allows those historically excluded from science to see what is going on. Getting in and providing a substantive critique of that exclusion, however, faces significantly higher access barriers than a paywall.

That’s a blow to an organization whose authors (and, I presume, readership) describe its business model as a revolution. Other journals don’t promote the same image of their efforts changing the way science is done: if you want to call Nature an embodiment of science today, flaws and all, that’s more or less consistent with how Nature purports to operate. PLoS claims a different status, but I fear it is more a rhetorical device than a substantive change in the way business is really done in science.

One of my doctoral supervisors, Seumas Miller (who, I can almost guarantee, will never be published in PLoS), noted that for a system to possess integrity, it needs more than simply the right set of top-down regulations. It also needs a commitment to ethics, and it needs to put that commitment front and center. It has to structure its regulatory frameworks (and peer review and editorial processes are both regulatory frameworks) in a way that promotes and reflects the ethical commitments of the institution and its members.

I’m not sure that an editor-driven, non-anonymised review process can do that. A journal family that doesn’t establish a commitment to the social and ethical issues of its own field, and design its journals and review processes around that commitment, won’t succeed at generating substantive changes in the way science gets done. PLoS is a political project within science, but it needs to operate on a basis of reform that is more developed than a mere commitment to open access. The revolution won’t be complete until that happens.

I don’t expect a PLoS Ethics and Science Policy to happen any time soon. Researchers in that domain rarely have the $2900 on hand for publication in PLoS, and I don’t expect the journal family will fund such an endeavor at a loss. But if there is room for that kind of venture—and the Eisens still want to talk to me after this—I think it would be truly revolutionary.

Comments on That CRISPR Paper

If you’ve got a pulse and are interested in biology, you’ve probably heard that a team of scientists have reported successfully (well, kinda) conducting germ line editing on human embryos using the CRISPR-Cas9 technique. I started hearing rumors of this study around the time that a moratorium on germ line experiments in humans was being proposed by some Very Big Deals. With confirmation that the study is real, the bioethics and life sciences worlds are all a-twitter (somewhat literally).

There’s an ugly side to the current furor, and a lot of it has to do with the nationality of the research team. Apparently the fact that Chinese researchers conducted the study has given people cause for alarm. That there is straight-up racism, seasoned liberally with some vintage Cold War nonsense; Kelly has gone over this in a lot more detail. I won’t say any more on this, except to remind people that when I teach about unethical research, Nazi Germany and the United States of America account for the overwhelming majority of my examples. So let’s all keep a bit of perspective.

Risks, Benefits, and Arguments

Instead, I want to talk about this paper in the context of risks and benefits, and proposed regulatory action around CRISPR. Let me be clear: I think we need to proceed very carefully with CRISPR technologies, particularly as we approach clinical applications. There was a worry that a group had used CRISPR on human embryos. That worry was vindicated yesterday.

Well, sort of. Almost. Not really?

From where I sit, the central concern is best expressed in terms of the risks of using CRISPR techniques on potentially viable human embryos. Commentary in Nature News highlights this concern perfectly:

Others say that such work crosses an ethical line: researchers warned in Nature in March that because the genetic changes to embryos, known as germ line modification, are heritable, they could have an unpredictable effect on future generations. 

The central premises are that 1) CRISPR studies on viable human embryos could lead to significant genetic changes in the resulting live humans; 2) these genetic changes could have unpredictable effects on those humans; 3) the changes could be propagated through human reproduction; and 4) this propagation of changes could have an unpredictable effect on future generations. The conclusion is that we shouldn’t be conducting studies on viable human embryos until we’ve done a lot more research and have a better mechanism for ethically conducting such research. I support this argument.

The conclusion doesn’t follow in this case, however, because these embryos weren’t viable. As in, they are never going to result in human beings, and never were. They are “potential human beings” to about the same degree that the Miller–Urey experiment is a potential human being.
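
To see why, it helps to lay out the argument’s skeleton formally. What follows is my own sketch in Lean, not anything from the Nature commentary: the four premises are compressed into two conditionals over placeholder propositions (Viable, Risky, Moratorium).

```lean
-- A minimal propositional sketch (my formalization, not the author's or
-- Nature's). Premises 1–4 compress into: editing *viable* embryos carries
-- unpredictable heritable risk, and such risk warrants a moratorium.

variable (Viable Risky Moratorium : Prop)

theorem moratorium_for_viable
    (h1 : Viable → Risky) (h2 : Risky → Moratorium) :
    Viable → Moratorium :=
  fun hv => h2 (h1 hv)

-- For this study the antecedent fails: the embryos were not viable.
-- Inferring "no moratorium" from ¬Viable would deny the antecedent;
-- the honest reading is that the argument is simply silent here.
```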

[Image: the Miller–Urey experiment (or Miller experiment), a chemical experiment that simulated the conditions thought at the time to be present on the early Earth, and tested the chemical origin of life.]

Above: not a potential human being.

What the study does show, conclusively, is that the clinical applications of germ line editing require substantial research before they are safe and effective, and that this research should be approached with incredible care. The sequence the scientists attempted to introduce took hold in only a subset of the embryos tested. The embryos that did take the change also carried many off-target mutations (unwanted mutations in the wrong places on the genome). And even when an embryo did show the right mutation, it appeared only in some cells: the resulting embryos were chimeras, in which some cells possessed the mutation and others didn’t.

This experiment shows that you can use CRISPR-Cas9 on a human embryo. But that isn’t really a revolutionary result. What is important is just how marginal the success was in terms of a clinically relevant outcome. The conclusion we should draw is that even starting in vivo testing with viable embryos is not only hazardous (for reasons the Very Big Deals have noted), but futile relative to less risky, more basic scientific inquiry.

Rather than an ethical firestorm, I view this research as an opportunity. This study is, more or less, proof that a robust, community-centered deliberative process is needed to determine what the goals of future CRISPR research are, and what science is needed, in what order, to get there safely. A moratorium on in vivo testing in viable embryos is a valuable part of this process.

Ahistorical narratives in a time of science.

[Update: someone at The Atlantic confirmed for me that this was not so much their article as it was run “as part of our partnership with the site Defense One.” Defense One is part of the Atlantic Media group, which owns both publications. Since Tucker is the science editor for Defense One, where the piece was first published, it isn’t totally clear who edited his work for content, other than… himself? Transparency and accountability, anyone?]

Patrick Tucker has a piece in The Atlantic titled “The Next Manhattan Project.” It concerns the current dual-use gain-of-function saga—now the so-called deliberative process about biosafety. It is, in short, a piece of ahistorical fiction. Here’s why—or, here is one list of reasons why.

1) “In January 2012, a team of researchers from the Netherlands and the University of Wisconsin published a paper in the journal Science about airborne transmission of H5N1 influenza, or bird flu, in ferrets.”

False. It was two papers: one in Nature by University of Wisconsin-Madison researchers, and one in Science by Dutch researchers. When a writer for The Atlantic can’t Google something that happened three years ago, you can bet the previous century is going to be a challenge.

2) Eschewing the history behind current events: “[the 2012 paper (should be papers)] changed the way the United States and nations around the world approached manmade biological threats.”

False. The controversy began in 2011, not 2012, and it was a continuation of a then decade-old debate about what is now called dual-use research of concern. That debate started in 2001, when a team of Australian researchers published work describing the creation of (in VERY simplistic terms) a super-poxvirus. There was a CIA report, and an NAS committee. Oh, and does anyone remember Amerithrax?

3) “it solved the riddle of how H5N1 became airborne in humans.”

False. Hilariously, the standard defense of the 2012 studies (remember, The Atlantic: plural) is that they don’t show how H5N1 could become transmissible between humans via aerosolized respiratory droplets. Vincent Racaniello commonly sums this up as “ferrets are not people.” There’s a complexity to animal models that doesn’t lend itself to those kinds of easy conclusions. Solving that riddle wasn’t the end result of these papers (or the papers that followed), and it certainly wasn’t the intent of the researchers.

4) Eschewing the reasons behind the Manhattan Project.

The Manhattan Project has a complex history. A group of independent, politically minded, and largely émigré scientists; a world on the edge of war; a novel and particular scientific discovery with a potentially catastrophic outcome; and a belligerent power (well, powers: the Japanese and the Russians had programs, in addition to the Nazis) that the scientists had good reason to suspect was pursuing said technology.

The 2012 story has almost no parallel with this context, much less an organization with a clearly defined set of ends, or a unilateral mandate with which to achieve those ends. The existential threat in the background of the Manhattan Project is absent here: there is no Nazi power. If we truly considered H5N1 highly pathogenic avian influenza to be an existential threat, our public health systems and scientific endeavors would look totally different.

5) Misrepresenting the classified complex.

Despite its being the single comparison Tucker draws between the 2012 studies (plural) and the Manhattan Project, he treats the classified complex as little more than a passing comment. He boils the entire conversation down to “but now the Internet makes classifying things hard.”

Never mind that the classified community was remarkably successful at its job, to the point where it invented ways to share information within an environment of total secrecy. The classified community continues to do its work today; just because we don’t pay much attention to Los Alamos, Oak Ridge, or Lawrence Livermore doesn’t mean they don’t exist.

Tucker also misses some of the human factors that would actually make his claims interesting. Between Fuchs and the Rosenbergs, ye olde security could be compromised in much the same way as it is today: too much trust in the wrong people, and a bit of carelessness inside the confines of a community that thinks itself insulated. If anything, the current debate about dual-use research is more about misplaced trust and overconfidence than it is about nukes.


These are only five of a variety of problems with Tucker’s article. What bothers me most is that the headline grants a legitimacy to one perspective on the current debate that simply isn’t warranted. These scientists aren’t racing against the clock to avert a catastrophe (and if they are, their methods are questionable at best). The current debate is far more nuanced, and far less certain, than the conversation that went down on Long Island in 1939. And that’s saying something, because the debate then was pretty damned nuanced.

What would the Next Manhattan Project really look like? Lock the best minds in biology in a series of laboratories across the country (or the world, that’s cool too). Give them at least $26 billion. And charge them with creating a cheap, easily deployable, universal flu vaccine.

That’d be great. Or, at least, it’d be much better than The Atlantic’s piece from yesterday.

Book Interest: A Straw Poll

A question for readers of this blog, and followers on Twitter. If two co-editors and I were to release an interdisciplinary edited collection on the 2013-2015 Ebola Virus Disease outbreak, would you read it? This collection would cover topics including:

  • Virology;
  • Clinical Medicine;
  • Epidemiology;
  • Ecology;
  • Political Science;
  • Anthropology;
  • Journalism;
  • Health Law;
  • Bioethics.

If this is something that interests you, leave a comment, reply to me on Twitter, or drop me an email at neva9257 [at] Gmail dot com. If you can, please note your country of residence, the field you work in (research discipline, teaching/policy/research/public health, etc.), and what you’d use such a volume for (reference, scholarship, teaching, general interest, coffee table, doorstop, etc.).

This is a project I’ve had in the works for some time, and my colleagues and I are almost at a contract. Demonstrating some interest will help get us over that line.