Tag Archives: Research ethics

Comments on That CRISPR Paper

If you’ve got a pulse and are interested in biology, you’ve probably heard that a team of scientists have reported successfully (well, kinda) conducting germ line editing on human embryos using the CRISPR-Cas9 technique. I started hearing rumors of this study around the time that a moratorium on germ line experiments in humans was being proposed by some Very Big Deals. With confirmation that the study is real, the bioethics and life sciences worlds are all a-twitter (somewhat literally).

There’s an ugly side to the current furor, and a lot of it has to do with the nationality of the research team. Apparently the fact that Chinese researchers conducted the study has given people cause for alarm. That there is straight up racism, seasoned liberally with some vintage Cold War nonsense; Kelly has gone over this in a lot more detail. I won’t say any more on this, except to remind people that when I teach about unethical research, Nazi Germany and the United States of America account for the overwhelming majority of my examples. So let’s all keep a bit of perspective.

Risks, Benefits, and Arguments

Instead, I want to talk about this paper in the context of risks and benefits, and proposed regulatory action around CRISPR. Let me be clear: I think we need to proceed very carefully with CRISPR technologies, particularly as we approach clinical applications. There was a worry that a group had used CRISPR on human embryos. That worry was vindicated yesterday.

Well, sort of. Almost. Not really?

From where I sit, the central concern is best expressed in terms of the risks of using CRISPR techniques on potentially viable human embryos. Commentary in Nature News highlights this concern perfectly:

Others say that such work crosses an ethical line: researchers warned in Nature in March that because the genetic changes to embryos, known as germ line modification, are heritable, they could have an unpredictable effect on future generations. 

The central premises are that 1) CRISPR studies on viable human embryos could lead to significant genetic changes to the resulting live humans; 2) these genetic changes could have unpredictable effects on those humans; 3) the changes could be propagated through human reproduction; and 4) this propagation of changes could have an unpredictable effect on future generations. The conclusion is that we shouldn’t be conducting studies on viable human embryos until we’ve done a lot more research, and have a better mechanism for ethically conducting such research. I support this argument.

The conclusion doesn’t follow in this case, however, because these embryos weren’t viable. As in, they are never going to result in human beings, and never were. They are “potential human beings” to about the same degree that the Miller-Urey experiment is a potential human being.

[The Miller–Urey experiment (or Miller experiment) was a chemical experiment that simulated the conditions thought at the time to be present on the early Earth, and tested the chemical origin of life.]

Above: not a potential human being.

What the study does show—conclusively—is that the clinical applications for germ line editing require substantial research before they are safe and effective, and this research should be approached with incredible care. The sequence the scientists attempted to introduce into the embryos only took hold in a subset of the embryos tested. Those embryos that did take the change also produced many off-target mutations (unwanted mutations in the wrong places on the genome). And even when the embryos did show the right mutation, it was only in some cells—the resulting embryos were chimeras, in which some cells possessed the mutation, and others didn’t.

This experiment shows that you can use CRISPR-Cas9 on a human embryo. But that isn’t really a revolutionary result. What is important is just how marginal the success was in terms of a clinically relevant outcome. The conclusion we should draw is that even starting in vivo testing with viable embryos is not only hazardous (for reasons Very Big Deals have noted), but totally futile relative to less risky, more basic scientific inquiry.

Rather than an ethical firestorm, I view this research as an opportunity. This study is, more or less, proof that a robust, community-centered deliberative process is needed to determine what the goals of future CRISPR research are, and what science is needed, in what order, to get there safely. A moratorium on in vivo testing in viable embryos is a valuable part of this process.


That Facebook Study: Update

UPDATE 30 June 2014, 8:00pm ET: Since posting this, Cornell has updated their press release to state that the Army did not fund the Facebook study. Moreover, Cornell has released a statement clarifying that their IRB

concluded that [the authors from Cornell were] not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Where this leaves the study, I’m not sure. But clearly something is amiss: we’re still sans ethical oversight, but now with added misinformation.

***

So there’s a lot of news flying around at the moment about the study “Experimental evidence of massive-scale emotional contagion through social networks,” also known as That Facebook Study. Questions are being asked about the ethics of the study; while I want to post a bit more on that issue later, here are a couple of facts for those following along.

Chris Levesque pointed me to a Cornell University press release noting that the study in question received funding from the US Army Research Office. That means the study did receive federal funding; receipt of federal funding comes with a requirement of ethics oversight, and compliance with the Common Rule. It is also worth noting that the US Army Research Office has their own guidelines for research involving human subjects:

Research using human subjects may not begin until the U.S. Army Surgeon General’s Human Subjects Research Review Board (HSRRB) approves the protocol [Article 13, Agency Specific Requirements]

and

Unless otherwise provided for in this grant, the recipient is expressly forbidden to use or subcontract or subgrant for the use of human subjects in any manner whatsoever [Article 30, “General Terms and Conditions for Grant Awards to For-Profit Organizations”]

***

I’ve also been in touch with Susan Fiske, the editor of the study. Apparently, the Institutional Review Board (IRB) that approved the work is Cornell’s IRB. That IRB found the study to be ethical:

on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. The Cornell IRB considered it a pre-existing dataset because [Facebook] continually creates these interventions, as allowed by the user agreement (Personal Communication, Fiske, 2014).*

So, there’s some clarification.

Still, I can’t buy the Cornell IRB’s justification, at least on Fiske’s recounting. Manipulating a user’s timeline with the express purpose of changing the user’s mental state is, to me, a far cry from business as usual. Moreover, I’m really hesitant to call an updating Facebook feed a “pre-existing dataset.” Finally, better people than I have talked about the lack of justification the Facebook user agreement provides.

This information, I hope, clarifies a couple of outstanding issues in the debate so far. Personally, I’d still like to see a lot more information about the kind of oversight this study received, and more details on the Cornell IRB’s analysis.

* Professor Fiske gave her consent to be quoted in this post.

Being and Becoming Compromised: An Article for Impact Ethics

Last weekend I wrote an article for Impact Ethics titled: “Being and Becoming Compromised: Conflicts of Interest in Bioethics“:

What is it that makes a conflict of interest more problematic than mere partiality? The answer, Rob MacDougall wants to argue, is “nothing.” Accepting money to argue on behalf of the pharmaceutical industry is no different than protecting the vulnerable; it doesn’t matter to which group you are partial, or why.

My article was a response to a post by Rob MacDougall, who argued that there is nothing problematic—in fact, that there is something right—about bioethicists advocating on behalf of (among others) pharmaceutical companies, and being paid to do so. “Being and Becoming Compromised” is the last in a series of articles on the topic; the other responses are listed below:

I—being clearly partial to these articles, though perhaps not compromised to the extent that I have a conflict of interest—recommend that you go and give them a read.

uBiome is determined to be a cautionary tale for citizen science

Ah, uBiome. Is it that time again already?

Scientific American Blogs recently featured an article by Jessica Richman and Zachary Apte, co-founders of the crowdfunded citizen science project known as uBiome. Richman and Apte were responding to critics of uBiome, which had secured $350,000 in funding to conduct research without oversight by an Institutional Review Board (IRB).

The takeaway message of Richman and Apte’s article is that they continue to fail to grasp the ethics in ethics. They are certainly motivated to keep their legal bases covered, and they value their reputation. But their ethics seem to boil down to two contentions: a) open science brings progress; b) progress is Good.

Yet if Richman and Apte care about openness, they could do a lot more to show it. They could have been more open, for example, about their study’s consent protocols. Indeed, their Indiegogo page claims that the consent forms would be released after IRB oversight. It would be a great show of faith to release the consent forms in the open.

They could also tell us what informed consent practices they are going to use. Perhaps something about the risk of sharing genomic data online, or the potential risks associated with sequencing your baby’s microbiome and handing it out to researchers. There are lots of important ethical questions to be asked about the research at uBiome, and citizen science in general. In the interest of openness, they could have directly engaged with those problems.

Next, uBiome tell us their research has been reviewed by an IRB; “the same institution that works with academic IRBs…private firms such as 23andme and pharmaceutical companies.” I presume they mean Independent Review Consulting (now Ethical and Independent Review Services), but again, it would be good to publicly release that information in the interest of openness.

The identity of uBiome’s IRB is important because for-profit IRBs aren’t just “controversial,” as Richman and Apte want to claim. Providing ethics services for profit is problematic. It is thus important that uBiome release information about the IRB they used, and if possible give a fuller picture of what that review actually involved. It would help bring us a tiny piece of transparency about research and IRBs—transparency they noted was lacking.

(Oh, and pro-tip: don’t ever, ever hold up 23andme as a standard for ethical conduct in research. Ever.)

Openness doesn’t seem to matter to the folks at uBiome for its own sake, but only insofar as it aligns with their research goals. But if openness is only valuable for their project, then they’ve failed to be innovative as scientists. Forget IRB 2.0: these kids aren’t even out of alpha. They’ve failed to grasp that ethics is more than just law. Ethics is about what you ought to do, not what you can get away with.

In this, uBiome are exactly the same as scientists and clinicians like Professor Owen Wangensteen, who stated at the Mondale Hearings:

…If we are to retain a place of eminence in medicine, let us take care not to shackle the investigator with unnecessary strictures, which will dry up untapped resources of creativity.[1]

This was said in 1968, when both the Tuskegee syphilis experiment and the human radiation experiments were in full swing. uBiome aren’t innovating at all in their behaviour. They are perpetuating history—the tragic history—of research where ethics is a footnote, if it is present at all.

So here’s a suggestion for “IRB 2.0.” Embrace research ethics. Embrace it now, and embrace it fully. Make a commitment not just to the project and its purported benefits, but to well-achieved benefits. uBiome brags about its $350,000 in crowdfunding, but whines about the expense of IRBs. Yet they could have easily included IRB costs, and the necessity of an IRB, in their pitch on Indiegogo. Don’t whinge about the system—set an example. The added bonus is that you’ll then have authority when you propose the system should be changed, rather than sounding petulant.

Moreover, citizen science projects like uBiome could embrace ethicists, and communicate with them openly and honestly. It isn’t enough to say “let’s have a mini IRB and get ethics training for citizen scientists.” That certainly wouldn’t hurt, but people train long and hard to examine and critique research for its ethical implications. You can’t turn every citizen scientist into a research ethicist or a bioethicist. But we’re out there, and when uBiome isn’t fighting us on twitter we’re doing interesting work.

If uBiome is serious about being open, ethical, and innovative, they have to demonstrate that. Anything else is just so much noise on the internet.

[1]: A.R. Jonsen, The Birth of Bioethics (New York, NY: Oxford University Press, 1998), p. 93.

Remember Dan

Nine years ago, a young man called Dan Markingson killed himself with a box cutter. The gruesome act, in which Dan almost decapitated himself, was the final escape of a young man suffering from schizophrenia, enrolled in a study on pain of involuntary commitment, and medicated with a blind compound despite his deteriorating moods and behaviour. Dan left a note that simply said “I went through this experience smiling.”

I didn’t encounter Dan’s story until the final year of grad school, while reading Carl Elliott’s White Coat Black Hat. The story in that work, and the growing material on the ‘net, reads as a laundry list of ethical violations—a vulnerable patient enrolled in a clinical trial even though he was deemed unable to take care of himself; researchers—one of whom was Dan’s psychiatrist—whose financial involvement as consultants to the study sponsor, AstraZeneca, was extensive; and a study coordinator who was later found to be making medical judgements and administering drugs without proper qualifications, to say nothing of falsifying signatures and changing Dan’s records after his death. Ignored at every turn despite her concerns, Dan’s mother, Mary Weiss, lost her son.

The story beyond Dan’s death is surreal. The University of Minnesota—through which the study was run—has countersued Mary and has, to date, refused to open an independent inquiry into the matter. The university’s General Counsel, Mark Rotenberg, has even threatened Carl Elliott’s academic freedom. Despite increasing evidence of wrongdoing, UMinn has refused to act.

Dan’s death was horrific; his abuse, unconscionable. Mary and Carl’s efforts have been frustrated at every turn, even though the state of Minnesota passed legislation proscribing exactly what precipitated this tragedy—the enrolment of an individual under civil commitment into a clinical trial. It is known as Dan’s Law.

Nine years is a long time to fight for justice.

A petition has been circulating since early this year that asks the governor of Minnesota, Mark Dayton, to appoint an impartial, external panel to investigate Dan’s death. Please sign it. Do it because clinical research ought to be held to the highest ethical standard. Do it because a university that privileges cashflow over truth and academic integrity isn’t worthy of the name. But most of all, do it because Daniel’s family deserve answers, justice, and closure.

Support Mary. Support Carl. Remember Dan.

Why we regulate (Or: what does Daniel Markingson have to do with Leslie Groves?)

A petition started by Mike Howard has been circulating, calling for the University of Minnesota to investigate the clinical trial in which Daniel Markingson was enrolled at the time he committed suicide. I won’t attempt to give comprehensive treatment to Dan’s plight (which has already been covered in detail by my betters—see below for a list of some of my favourite articles); a (very) brief summary follows:

Daniel Markingson, a troubled and psychotic young man, was given a choice between involuntary commitment for his homicidal behaviour, and participation in the AstraZeneca-sponsored CAFE Study run through the University of Minnesota. Dan’s behaviour during the trial became increasingly erratic, but his mother, Mary Weiss, was unable to secure help for her son from the study coordinators. Dan killed himself with a box cutter. Since then, the University of Minnesota has resisted attempts to establish an inquiry into the circumstances leading up to Dan’s death, despite increasing evidence of conflicts of interest and gross negligence in the running of the trial and the treatment of Dan.

This leads me to a recent post on the Institutional Review Blog. Zachary Schrag, in writing about signing the petition, cites Leslie Groves’ supervision of the construction of the Pentagon to prompt the intuition that “[o]versight committees can be at once nitpicking about small matters and inattentive to large concerns.” He then moves to half-chide, half-caution Carl Elliott, who has (at times single-handedly) pursued an investigation of the CAFE Study—an investigation of his own employer. Schrag worries that we risk creating more burdens for social scientists while doing nothing about issues in modern research ethics, yet concludes: “But what is that compared to a chance for justice for Dan Markingson? I will sign the petition.”

The post is interesting, if bizarre, in the way it makes a sharp left-turn after the story about Groves. That Schrag signed the petition is laudable, but his concerns about the petition are harder to understand.

Carl, as quoted by Schrag, is no doubt aware of the way that IRBs can fail to protect research subjects. I’m sure he’s aware that more regulation can be burdensome, as Alice Dreger has pointed out, on social scientists and others whose methods are not necessarily those with which IRBs should be concerned.

Yet the concerns Schrag raises describe what it would mean for the fight against UMinn to fail to achieve its stated ends. Sure, there is a possible outcome in which an investigation into the CAFE Study occurs, and nothing changes. But that’s a failure condition. Justice for Dan can be achieved in some sense by a mere investigation or a successful petition, but that’s small beer compared to a legacy of change in Dan’s name.

This would be odd enough in itself, but what is more strange is the use of General Groves and the Pentagon as an example of how oversight can target the wrong things. Groves was brought in to save the Pentagon project. Many things over time have been contested about the General, but the thing beyond question is his character as a hardliner for achieving the goals of the projects he managed. It was on this basis that Groves was later given the Manhattan Project—if anyone could harness frontier physics and the industrial capital of an entire nation in the pursuit of a weapon literally from the mind of a science fiction author, it was him.

So the episode Zachary cites is either misplaced, or he misses a crucial point in bringing it up. The Pentagon was already well over budget by the time Groves arrived on the scene, and Groves took the project from a floundering endeavour to a completed building. Concerns about the budget were a) already known in 1942, and b) weren’t important as far as Groves himself was concerned. The oversight committee didn’t overlook the “larger stuff,” because that larger stuff was precisely why Groves was deployed.

Zachary overlooks that the budgetary problems in Groves’ case were acceptable to the House Committee only so long as he got the job done in time. That was his ultimate end, his telos; the budget was in many ways secondary to Groves’ ability to get the Pentagon up and running. The telos of the IRB at UMinn is not being fulfilled. IRBs like the one that reviewed the CAFE study are not ensuring that research is pursued commensurate with the rights of participants. To paraphrase Carl, the oversight of clinical trials at UMinn failed because of “industry-funded university investigators ignoring research regulations, repeatedly failing to meet their ethical obligations, and fearing no sanctions whatsoever.” The rules exist and we know the mandate—but we aren’t able to provide oversight consistent with that mandate, and that is a huge problem.

Budgets were, at least to a reasonable degree, ancillary to the overall project to build the Pentagon. Research ethics is not, and ought not to be, an optional extra to clinical research.

Carl, I’m sure, would be very concerned if regulation made life worse for other researchers while not protecting people like Dan. This would be a failure because the legitimate aims of research and the IRB would still not be served. If the University of Minnesota responds to an inquiry by making life harder for geographers while failing to better secure against the terrible circumstances of Dan’s death, then we still fail to achieve justice for that troubled young man who died almost a decade ago.

The choice of case matters. Here, Zachary equates Dan’s life with the blown-out budget of building the Pentagon. But the blown-out budget wasn’t the main concern in 1942—or at least, not in the same way. Dan’s life, and the lives of those who participate in trials after him (including the trial at UMinn currently being pursued by AstraZeneca), is the concern. Research that succeeds at the cost of unacceptable, uninformed risks to vulnerable people isn’t good research (whereas overbudget research might still be good research). Making sure that doesn’t happen, as I read it, is Carl’s mandate. Anything else is failure.

That’s why you should sign the petition, and that’s why you shouldn’t let your involvement end there. Call your congressman, follow clinical trials in your own jurisdiction, and always ask for integrity from the research process. Make sure the aims of research are met, and met in the right ways. Make those big issues as important to researchers as the dust on the floor was to Groves.

Some light reading:

Carl Elliott, “The Deadly Corruption of Clinical Trials,” Mother Jones, September 2010

Ed Silverman’s coverage at Pharmalot

The Markingson archives at Mad In America

Judy Stone’s coverage at Scientific American: parts one, two, three, four, and five

Matt Lamkin, “The Markingson Case: Investigate the University of Minnesota,” Law and Biosciences Blog, March 2013