
Treating like cases alike: bioethics edition

Crash course in philosophy—treat like cases alike.[1]

That means that if you think a much-hyped article on the health benefits of chocolate is unethical, your reasons for that conclusion ought to apply consistently across other, similar cases. Put another way: if you think John Bohannon acted unethically in publishing a study that claimed to show a relationship between chocolate consumption and the success of weight-loss diets, fooling the science journalism world in the process, and if your reasons for thinking this bear on relevantly similar cases, then you ought to reach the same conclusions about those cases as you do about Bohannon’s.

1) For example, you might think, as I do, that whether or not the study was intended as a hoax, some form of human subjects protection ought to have applied. You might worry that there’s no mention of research ethics review or informed consent in the journal article that caused the furor, or in any of Bohannon’s writings since. Kelly asked Bohannon on io9 what kind of approval, if any, he got for the study. I’ll update if anything comes of that.

But if you do have this concern, then such reasoning should lead you to treat this study conducted by Facebook with similar suspicion. We know that no IRB approved the protocol for that study. We know that people didn’t give meaningful consent.[2]

2) Say you are worried about the consequences of a false study being reported widely by the media. It is true that when studies are overhyped, or falsely reported, all kinds of harm can result.[3] One could easily imagine vulnerable people deceived by the hype.

But if you think that, consider that in 2013, 23andMe was marketing a set of diagnostics for the genetic markers of disease in the absence of any demonstrated analytic or clinical validity. They were selling test kits with no proof that the tests could be interpreted in a meaningful way.

I think that the impact of the latter is actually much greater than that of the former. One is a journalist with a penchant and reputation for playing tricks on scientific journals. The other is a billion-dollar, Google-backed (“we eat DARPA projects for breakfast”) industry leader.

But if Bohannon’s actions do meet your threshold for what constitutes an unacceptable risk to others, then you are committed to the same concern about any and all studies that pose that level of risk to distant others.


If you think Bohannon was out of line, that Facebook was unethical in avoiding oversight, and that it was inappropriate for 23andMe to flout FDA regulations and market its genetic test kits without demonstrating their validity, then you are consistent.

If you don’t, that’s not necessarily a problem. The burden of proof, though, is now on you to show either a) that you hold one of the above reasons, but the cases are relevantly different; or b) that the reasons you think Bohannon was out of line are sufficiently unique that they don’t apply in the other cases.

Personally? I think the underlying motivation behind a lot of the anger I see online is that scientists, and science communicators, don’t like being made out as fools. That’s understandable. But if that’s the reason, then I don’t think Bohannon is at fault. Journals have an obligation to thoroughly review articles, and Bohannon is notorious for demonstrating just how many predatory journals there are in the scientific landscape today. Journalists have a responsibility to do their jobs and critically examine the work on which they report.

Treat like cases alike, and use this moment for what it should be: a chance to reflect on our reasons for making moral judgements, and on our commitment to promoting ethical science.


  1. Like anything in philosophy, there are subtleties to this. ↩
  2. Don’t you even try to tell me that a EULA is adequate consent for scientific research. Yes, you. You know who you are. I’m watching you.  ↩
  3. Fun fact: that was the subject of my first ever article!  ↩

That Facebook Study: Update

UPDATE 30 June 2014, 8:00pm ET: Since posting this, Cornell has updated their press release to state that the Army did not fund the Facebook study. Moreover, Cornell has released a statement clarifying that their IRB

concluded that [the authors from Cornell were] not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Where this leaves the study, I’m not sure. But clearly something is amiss: we’re still sans ethical oversight, but now with added misinformation.

***

So there’s a lot of news flying around at the moment about the study “Experimental evidence of massive-scale emotional contagion through social networks,” also known as That Facebook Study. Questions are being asked about the ethics of the study; I want to post a bit more on that issue later, but for now, here are a couple of facts for those following along.

Chris Levesque pointed me to a Cornell University press release noting that the study in question received funding from the US Army Research Office. That means the study did receive federal funding, and receipt of federal funding comes with a requirement of ethics oversight and compliance with the Common Rule. It is also worth noting that the US Army Research Office has its own guidelines for research involving human subjects:

Research using human subjects may not begin until the U.S. Army Surgeon General’s Human Subjects Research Review Board (HSRRB) approves the protocol [Article 13, Agency Specific Requirements]

and

Unless otherwise provided for in this grant, the recipient is expressly forbidden to use or subcontract or subgrant for the use of human subjects in any manner whatsoever [Article 30, “General Terms and Conditions for Grant Awards to For-Profit Organizations”]

***

I’ve also been in touch with Susan Fiske, the editor who handled the study. Apparently, the Institutional Review Board (IRB) that approved the work was Cornell’s. That IRB found the study to be ethical:

on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. The Cornell IRB considered it a pre-existing dataset because [Facebook] continually creates these interventions, as allowed by the user agreement (Personal Communication, Fiske, 2014).*

So, there’s some clarification.

Still, I can’t buy the Cornell IRB’s justification, at least on Fiske’s recounting. Manipulating a user’s news feed with the express purpose of changing the user’s mental state is, to me, a far cry from business as usual. Moreover, I’m really hesitant to call a continually updating Facebook feed a “pre-existing dataset.” Finally, better people than I have talked about the lack of justification the Facebook user agreement provides.

This information, I hope, clarifies a couple of outstanding issues in the debate so far. Personally, I’d still like to see a lot more information about the kind of oversight this study received, and more details on the Cornell IRB’s analysis.

* Professor Fiske gave her consent to be quoted in this post.