
How not to critique: a case study

The original title for this piece was “How not to critique in bioethics,” but Kelly pointed out that this episode of TWiV is a case study in how not to go about critiquing anything.

Last Monday I was drawn into a conversation/angry rant about an article by Lynn C. Klotz and Edward J. Sylvester that appeared in the Bulletin of the Atomic Scientists…in 2012. After briefly forgetting one of the cardinal rules of the internet—check the date stamp—I realized the error of my ways, and started to inquire with my fellow ranters, in particular Matt Freiman, about why a 2012 article suddenly had virologists up in arms.

Turns out that the Bulletin article was cited by a study on dual-use authored by Marc Lipsitch and Alison P. Galvani; a study that was the subject of a recent post of mine. The Bulletin article draws from a working paper in which the authors provide an estimate for the number of laboratory accidents involving dangerous pathogens we should expect as a function of hours spent in the laboratory. Lipsitch and Galvani use this figure in their analysis of potential pandemic pathogens (PPPs).
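For readers who haven't seen the working paper: the general shape of this kind of estimate is simple, even if the details are contested. Assume a constant accident rate per laboratory-hour, and model occurrences as a Poisson process. A minimal sketch, using placeholder numbers of my own rather than Klotz and Sylvester's actual figures:

```python
import math

def expected_accidents(rate_per_lab_hour: float, hours: float) -> float:
    """Expected number of accidents, assuming a constant per-hour rate."""
    return rate_per_lab_hour * hours

def prob_at_least_one(rate_per_lab_hour: float, hours: float) -> float:
    """Probability of at least one accident under a Poisson model."""
    return 1.0 - math.exp(-rate_per_lab_hour * hours)

# Illustrative inputs only (NOT the working paper's figures):
mu = expected_accidents(1e-6, 500_000)   # 0.5 accidents expected
p = prob_at_least_one(1e-6, 500_000)     # roughly 0.39
```

The debate, of course, is over what rate to plug in and how far past accident data generalizes—not over the arithmetic itself.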

Freiman joined Vincent Racaniello, Dickson Despommier, Alan Dove, and Kathy Spindler on This Week in Virology (TWiV) on June 1 to talk about (among other things) Lipsitch and Galvani’s research. What followed is a case study in how not to critique a paper; the hosts served up a platter of incorrect statements, bad reasoning, and some all-out personal attacks.

I’d started writing a blow-by-blow account of the entire segment, but that quickly mushroomed into 5,000-odd words. There is simply too much to talk about—all of it bad. So there’s a draft of a paper on the importance of good science communication on my desk now, which I’ll submit to a journal in the near future. Instead, I’m going to pick up just one aspect of the segment that I feel demonstrates the character of TWiV’s critique.

“It’s a bad opinion; that’s my view.”

Despommier, at 58:30 of the podcast, takes issue with this sentence in the PLoS Medicine paper:

The H1N1 influenza strain responsible for significant morbidity and mortality around the world from 1977 to 2009 is thought to have originated from a laboratory accident.

The problem, according to Despommier, is that “thought to have originated” apparently sounds so vague as to be meaningless. This leads to a rousing pile-on in which Despommier claims that he could just as easily have claimed that the 1977 flu came from Middle East Respiratory Syndrome because “he thought it”; he also claims that on the basis of this sentence alone he’d have rejected the article from publication. Finally, he dismisses the citation given in the article as unreliable because it is a review article,[1] and “you can say anything in a review article.”

At the same time, Dove notes that “when you’re on the editorial board of the journal you can avoid [having your paper rejected].” The implication here is that Lipsitch, as a member of the editorial board of PLoS Medicine, must have used that position to get his article into print despite the alleged inaccuracy that has Despommier so riled up. Racaniello notes that “[statements like this are] often done in this opinion–” before his full meaning is cut off by Despommier. It’s a common theme throughout the podcast, though, that Lipsitch and Galvani’s article is mere “opinion,” and thus invalid.

Facts first

If he’d done his homework, Despommier would have noted that the review article cited by Lipsitch and Galvani doesn’t mention a lab. What it does say is:

There is no evidence for integration of influenza genetic material into the host genome, leaving the most likely explanation that in 1977 the H1N1 virus was reintroduced to humans from a frozen source.[2]

So Lipsitch and Galvani do make an apparent leap from “frozen source” to “lab freezer.” Despommier doesn’t pick that up. If he had, however, it would have given us pause about whether or not it is a valid move to jump from “frozen source” to “laboratory freezer.”

Not a long pause, however; there are other sources that argue that the 1977 strain is likely to have been a laboratory release.[3] The other alternative—that the virus survived in Siberian lake ice—was put forward in a 2006 paper (note, after the publication of the review article used by Lipsitch and Galvani), but that paper was found to be methodologically flawed.[4] Laboratory release remains the most plausible answer to date.

The belief that the 1977 flu originated from a frozen laboratory source is widely held. Even Racaniello—at least in 2009—held this view. Racaniello argued that of multiple theories about the origin of the 1977 virus, “only one was compelling”:

…it is possible that the 1950 H1N1 influenza virus was truly frozen in nature or elsewhere and that such a strain was only recently introduced into man.

The suggestion is clear: the virus was frozen in a laboratory freezer since 1950, and was released, either by intent or accident, in 1977. This possibility has been denied by Chinese and Russian scientists, but remains to this day the only scientifically plausible explanation.

So no, there is no smoking gun that confirms, with absolutely unwavering certainty, that the 1977 flu emerged from a lab. But there is evidence: this is far from an “opinion,” and is far from simply making up a story for the sake of an argument. Lipsitch and Galvani were right to write “…it is thought,” because a plausible answer doesn’t make for unshakeable proof—but their claim stands on the existing literature.

Science and policy

The idea that Lipsitch and Galvani’s piece is somehow merely “opinion” is a hallmark of the discussion in TWiV. Never mind that the piece was an externally peer-reviewed, noncommissioned piece of work.[5] As far as TWiV is concerned, it seems that if it isn’t Science, it doesn’t count. Everything else is mere opinion.

But that isn’t how ethics, or policy, works. In ethics we construct arguments, argue about the interpretation of facts and values, and use that to guide action. With rare exception, few believe that we can draw conclusions about what we ought to do straight from an experiment.

In policy, we have to set regulations and guidelines with the information at hand—a policy that waits for unshakeable proof is a policy that never makes it to committee. Is there some question about the true nature of the 1977 flu, or the risks of outbreaks resulting from lapses in BSL-3 laboratory safety? You bet there is. We should continue to do research on these issues. We also have to make a decision, and the level of certainty the TWiV hosts seem to desire isn’t plausible.

Authority and Responsibility

This podcast was irresponsible. The hosts, in their haste to pan Lipsitch and Galvani’s work, overstated their case and then some. Dove also accused Lipsitch of research misconduct. I’m not sure what the rest of the editors at PLoS Medicine think of the claim—passive aggressive as it was—that one of their colleagues may have corrupted the review process, but I’d love to find out.

The podcast is also deeply unethical, because of the power in the platform. Racaniello, in 2010, wrote:

Who listens to TWiV? Five to ten thousand people download each episode, including high school, college, and graduate students, medical students, post-docs, professors in many fields, information technology professionals, health care physicians, nurses, emergency medicine technicians, and nonprofessionals: sanitation workers, painters, and laborers from all over the world.[6]

What that number looks like in 2014, I have no idea. I do know, however, that a 5,000–10,000 person listenership, from a decorated virologist and his equally prestigious colleagues, is a pretty decent haul. That doesn’t include, mind you, the people who read Racaniello’s blog, articles, or textbook; who listen to the other podcasts in the TWiV family; or who follow the other hosts in other fora.

These people have authority, by virtue of their positions, affiliations, exposure, and followings. The hosts of TWiV have failed to discharge their authority with any kind of responsibility.[7] I know the TWiV format is designed to be “informal,” but there’s a marked difference between being informal, and being unprofessional.

Scientists should—must—be part of the conversation about dual-use, as with other important ethical and scientific issues. Nothing here is intended to suggest otherwise. Scientists do, however, have to exercise their speech and conduct responsibly. This should be an example of what not to do.

Final Notes

I want to finish with a comment on two acts that don’t feature in Despommier’s comments and what followed, but are absolutely vital to note. The first is that during the podcast, the paper by Lipsitch and Galvani is frequently referred to as “his” paper. Not “their” paper. Apparently recognizing the second—female—author isn’t a priority for the hosts or guests.

Also, Dove and others have used Do Not Link (“link without improving ‘their’ search engine position”) on the TWiV website for both the paper by Lipsitch and Galvani, and supporting materials. So not only do the hosts and guests of the show feel that the paper is without merit; they believe that to the point that they’d deny the authors—and the journal—traffic. Personally, I think that’s obscenely petty, but I’ll leave that for a later post.

Science needs critique to function. Critique can be heated—justifiably so. But it also needs to be accurate. This podcast is a textbook example of how not to mount a critique.


  1. Webster, Robert G, William J Bean, Owen T Gorman, Thomas M Chambers, and Yoshihiro Kawaoka. 1992. “Evolution and Ecology of Influenza A Viruses.” Microbiological Reviews 56 (1): 152–79.  ↩
  2. ibid., p.171.  ↩
  3. Ennis, Francis A. 1978. “Influenza A Viruses: Shaking Out Our Shibboleths.” Nature 274 (5669): 309–10. doi:10.1038/274309b0; Nakajima, Katsuhisa, Ulrich Desselberger, and Peter Palese. 1978. “Recent Human Influenza A (H1N1) Viruses Are Closely Related Genetically to Strains Isolated in 1950.” Nature 274 (5669): 334–39. doi:10.1038/274334a0; Wertheim, Joel O. 2010. “The Re-Emergence of H1N1 Influenza Virus in 1977: A Cautionary Tale for Estimating Divergence Times Using Biologically Unrealistic Sampling Dates.” PLoS One 5 (6): e11184. doi:10.1371/journal.pone.0011184; Zimmer, Shanta M, and Donald S Burke. 2009. “Historical Perspective — Emergence of Influenza A (H1N1) Viruses.” New England Journal of Medicine 361 (3): 279–85. doi:10.1056/NEJMra0904322.  ↩
  4. Worobey, M. 2008. “Phylogenetic Evidence Against Evolutionary Stasis and Natural Abiotic Reservoirs of Influenza A Virus.” Journal of Virology 82 (7): 3769–74. doi:10.1128/JVI.02207-07; Zhang, G, D Shoham, D Gilichinsky, and S Davydov. 2007. “Erratum: Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 81 (5): 2538; Zhang, G, D Shoham, D Gilichinsky, S Davydov, J D Castello, and S O Rogers. 2006. “Evidence of Influenza A Virus RNA in Siberian Lake Ice.” Journal of Virology 80 (24): 12229–35. doi:10.1128/JVI.00986-06.  ↩
  5. I’m aware that peer review is not sufficient to make a work reliable, but absent evidence that the review process was somehow corrupt or deficient, it’s a far cry from mere opinion.
  6. Racaniello, Vincent R. 2010. “Social Media and Microbiology Education.” PLoS Pathogens 6 (10): e1001095.  ↩
  7. Evans, Nicholas G. 2010. “Speak No Evil: Scientists, Responsibility, and the Public Understanding of Science.” NanoEthics 4 (3): 215–20. doi:10.1007/s11569-010-0101-z.  ↩

Who is responsible for all those cranks?

So a running theme in my work is responsibility for communication—how we understand our obligations to communicate truthfully, accurately, and with the ends we seek. So it was with great interest that I watched Suzanne E. Franks (TSZuska), Kelly Hills (Rocza), and Janet D. Stemwedel (Docfreeride) hold a conversation in the wake of Virginia Heffernan’s “Why I’m a creationist.” I’m not going to spoil the amazing train-wreck that is Heffernan’s post; if you’d like to see some of the fireworks that ensued you should head across to watch the fallout as Thomas Levenson and Carl Zimmer got stuck in.

The conversation between Franks, Hills, and Stemwedel is interesting, I think, for the way they navigated Heffernan’s alleged status as a Foucauldian, and how this linked up with issues with postmodernism more generally. Postmodernism is an area I’ll leave to the experts above; what interests me is how we connect the bad apples, the cranks, and the downright malevolent with broader criticism of a field.

I’ll work with my own field, the analytic tradition of moral and political philosophy. It has all sorts of bad apples and problematic characters: we seem destined (the horror) to include people like Robert Nozick. Now I am about as anti-Nozick as it gets, but I can’t deny that he was an American political philosopher from the same cohort as John Rawls; someone whose theories are not so important to me as is one of his students, whom I count as a friend and mentor. I feel I kind of have to grind my teeth and allow Nozick as part of the “family,” albeit not a part I much like.

But do I have to own responsibility for every obnoxious kid that reads Nozick? And takes him seriously? Yikes. That sounds terrible. Yet perhaps in some cases I do—if there is a professor out there teaching that Anarchy, State and Utopia is God’s Divine Word, I’m probably stuck with their students as a product of my field’s “sins.” 

I will hang my head and cop the criticism that analytic philosophy has produced a frightening number of first-rate assholes in its time.

Will I, however, take responsibility for right-wing libertarians and their fascination with Adam Smith’s “invisible hand”? I’m hesitant—primarily because most libertarians just casually gloss over The Theory of Moral Sentiments and run straight for The Wealth of Nations, and that just seems like intellectual laziness and cherry-picking at its worst. I can be held responsible for bad writing; bad theories improperly rebuffed; or teaching that is antiquated, bigoted, or just wrongheaded. But it is much harder, I think, to say that I am responsible because a whole group of people found it inconvenient to read the other half of a body of work.

These, I think, demonstrate a (non-exhaustive) set of relations we might have with certain elements of our intellectual movements and traditions, who use common language to achieve results that don’t sit right with us.

We could, of course, just reject that someone is properly part of our practice. This is, I’ve no doubt, as much political as it is a question of whether someone’s practice possesses the necessary or sufficient conditions to be classed as part of one’s group—we want to be able to say that some practices that take our name are simply Doing It Wrong. Yet we don’t want to allow just anyone to get away with that at any time; doing so removes a powerful and often legitimate critique—being able to point at an element of some set of people and say “they are a problem, and they are indicative of some larger issue.”

Another way we could approach this is to say “well that’s an instance of Bad X, but Bad X is not the same as X being bad.” That’s an important part of managing a field’s boundaries. The problem, of course, comes in identifying at what point instances of Bad X become signs of X being bad. My co-supervisor, Seumas Miller, has done work on institutional corruption that I think would be interesting to apply here, but that’s a paper in itself.

Finally, we could acknowledge and go “that’s not just an instance of bad X, but a sign that there is something wrong with the way we practice or communicate X. We should fix that.” I’ve talked before about the need for responsibility in science writing, and that applies to my field as much as any other.

Which of these is Heffernan? Is she Just Doing It Wrong, a bad po-mo (but not evidence of po-mo being bad), or a sign of something more problematic? I don’t know. Franks, Hills, and Stemwedel, I think, cover all three possibilities. Maybe Heffernan is a combination, a hybrid, or something altogether different from the three distinctions I’ve made.

I’ve thrown the original chat up on Storify for anyone who wants to see the source I’m working from. It has been edited inexpertly, but I hope without leaving anything important out (though I did leave out an entertaining discussion on Foucault and emo-pop that you’ll have to track down on Twitter). You can find it here.

A World of Trouble: Untangling the Politics and Promise of Nuclear Power

This post was originally meant to appear over at The Curious Wavefunction as part of a debate between myself and Ash about the politics and perils of nuclear power. I was somewhat dismayed when I woke up this morning (timezones, remember) to find my response had been bracketed by editorial and response from Ash without my consent or knowledge. Ash agreed to take the post down, and my response without accompanying editorial is below. It is worth noting that, for me, this cuts to the heart of what I wrote about yesterday: when one has a platform—and here, editorial control over subject matter—one has to be exceedingly careful about how one’s input frames content.

In this specific matter, taking what I as an author consider quite a strong disagreement and beginning my entry with a statement saying a) that this is only a disagreement of degree rather than kind (as if that made the disagreement lesser), and b) that the degree was not substantial, undermines the position I put forward without giving me adequate room to respond. As such, I present my view here without editorial, and Ash’s response, I trust, will be in the usual place in due course.

On Tuesday, Ash over at Curious Wavefunction blogged about “Pandora’s Promise,” a new documentary about the intersection between nuclear power and the environmental movement. I haven’t had a chance to see the documentary because despite the internet, Australia is still very, very far away in movie-miles. But Ash raises a number of interesting points about the state of nuclear power—and the institutions that surround nuclear power—that are worth talking about and investigating a little further. In the interests of disclosure, I am pro-nuclear in principle, but I think that a lot has to be done before nuclear becomes more credible as a solution to anything.

Ash is definitely onto something when he talks about how risk in nuclear science is misunderstood, and what actually happens to people exposed to radiation beyond what we experience every day. He notes the complexity of assessing the impact of radiation: dosage, chemical variation, method of contact, decay products, and decay paths all create different results. Radiation physics is a menagerie of causes and effects, and it is too quick to make comparisons simply in terms of the quantity of activity one is exposed to.

The problem is, this cuts both ways. The reason there is little—but not no—evidence that low levels of radiation increase cancer risk is that safety standards have been kept high. But the absence of evidence is not evidence of absence, and that uncertainty, not to mention the costs of finding out, has to be factored into our assessment of nuclear power. Moreover, to claim that radiation is all about context, and then give only examples of contexts where radiation is safe, is somewhat problematic.

That leads us into the next problem—new reactor designs, even those that show promise, are still a ways off, and have some serious hurdles to contend with. Liquid salt plants and so-called “breeder” reactors have been plagued with problems from their inception; they are not new ideas at all, but rather old ideas long in wait of feasible and successful engineering. Liquid salts run at exceedingly high temperatures, and these designs may utilise highly reactive metals like sodium as coolants. Corrosive and highly flammable, sodium is difficult to contain and can wear away containment vessels at surprising speed. The Monju reactor in Japan, for example, has been plagued by controversy and safety concerns for almost 50 years. This isn’t simply technological lock-in; making breeders cost-effective, safe, and efficient enough to go into widespread use is and continues to be an immense technical challenge.

Most Gen-IV reactors will rely heavily on plutonium, as it is a far more efficient fission fuel, and arises from the byproducts of irradiating uranium. But this introduces a new, unstated element of risk: in addition to being radioactive, plutonium is incredibly toxic. It binds at calcium sites around the body, such as bone marrow, and has long decay chains comprising varied intensities of radioactivity. Storage of plutonium for civilian fuel projects presents a proliferation risk, a health and safety risk, and an environmental risk.

Now, I use “risk” here because the obvious reply from Ash, or anyone else, is that compared to the costs of not  pursuing nuclear power, the above dangers are deemed more than acceptable. In “Pandora’s Promise,” and Ash’s post, the costs take the form of anthropogenic climate change, and the costs of that clearly outweigh any costs of nuclear power.

Again, I want to make it clear that I believe climate change is happening, that it is human-influenced, and that the costs of not acting are likely to be severe. It is just that using such an—admittedly very real and scary—extreme cost, without qualification, as the reason to pursue nuclear power skews our risk assessment somewhat. It is the same type of cost that motivates arguments for geoengineering on a large scale, or for human enhancement so that our bodies are better adapted to survive the pernicious effects of climate change. With a big enough catastrophe, anything is fair game.

But nuclear power isn’t simply a scientific or engineering puzzle. Its main obstacles, in the end, have not been public fear—plenty of nuclear weapons have been built despite opposition. Part of what stymies nuclear power is that the science and technology have been tightly controlled from the outset. This has led to a dearth of skills in the right areas: many scientists who might otherwise have contributed to the nuclear sciences in civilian matters were either snatched up by the weapons industry, or denied clearance to work with fissile materials.

Further, the strict nature of nuclear secrecy, among other regulatory levers, creates an environment in which—even once civilian nuclear energy became a plausible pursuit—vested interests had the ability to control the market in all sorts of problematic ways.  According to one source, today 10 utilities own 70% of the total nuclear capacity of the USA.

Changing these institutions, allowing innovation to happen securely, and introducing competition into the nuclear marketplace are as much social and political changes as they are technical. These are some of the very real challenges to wider adoption of nuclear power, and these changes, I fear, are where the industry may falter.

This shouldn’t be a deterrent, but it outlines the challenges associated with the nuclear power industry. The clicks heard by Szilard and Fermi on that fateful day in 1942 in Chicago had potential, and still do.  Everything after, I contend, has been as much hindrance as help. The truth about nuclear energy, if there is any, is that it is as much a complex political case as it is a scientific one. Ash does, to his credit, note this, but I think he downplays precisely the type of gap we are talking about. It would be a shame to let the promise of nuclear power pass untapped. Yet one only has to look at the state of the nuclear energy industry today to know that the current system needs a radical overhaul, and that will require a significant amount of capital and civic participation. When we consider the cost-benefit analysis of a project, political costs have to figure in somewhere.  And right now in the world of nuclear energy, those costs are exceedingly high.

Writing, Authority, and Responsibility

I watched, from the wings, as a rather heated discussion broke out between Kelly and Bora about the function of blogging and journalism, and in particular when a blogger acts under the banner of a purported authority—in this case, the Scientific American blog site. The essence of the disagreement was about the authority and accountability of authors, and the consequences of imparting information to an audience from a position of power.

The whole thing, as I sat in my slightly woozy state (I’m suffering from some pretty chronic pain right now), reminded me of my first article—a piece on the responsibility of scientists to communicate information about science accurately. I’ve thrown a proof of the paper up on academia.edu for those interested.

The central argument of the paper—excusing my distinct lack of voice—is that scientists have a responsibility to ensure their work is interpreted correctly because:

  1. People are vulnerable to misrepresentation or misinformation (a general obligation toward communicative mindfulness, if you will);
  2. Scientists (or their proxies) have a special obligation because of the power they possess as specialists and professionals; and
  3. Scientists (or their proxies) have self-interested reasons to ensure their work is interpreted correctly.

The longer arguments are in the paper, and I’m really only interested in what follows from point two above. What about the journalists, bloggers, and science communicators that are an important part of the way that scientific knowledge—and importantly, the scientific enterprise—is communicated?

The answer, I think, is the same. If you communicate about science, and you do so with authority, you have a responsibility for what you produce. That authority might be through a PhD; a byline with a prestigious organisation; or just being known long enough, and by enough people, to count (i.e. possessing esteem).

In particular, science does things. It does lots of cool things, and just as many scary things. When you are in a position to influence how things get done, of course an attendant obligation follows. How could it not?

The piece that motivated the debate this morning referred to the “truth” about nuclear power. Truth about a 75-year-old (well, depending on which advance you take as “the beginning”), controversial, potentially dangerous set of technologies that has been mired in a number of very big explosions, an arms race, systemic corruption, and secrecy, and that forms part of a very expensive military-industrial project. Anyone who purports, under the name of an institution that has existed since 1845, to have the truth about that sure as hell better know what they are doing. (I’m very skeptical that’s the case, but that’s for another post—tomorrow.)

The contention is, however (e.g. here and here), that bloggers need freedom to grow, expand their knowledge, strike out on their own, and make mistakes.

I didn’t see anyone doubt that.  It’s just that it is beside the point.

Need for growth doesn’t abrogate responsibility. Everyone needs an opportunity to make baby steps into new areas of expertise. But when your baby steps can be mistaken for firm, adult strides, you need to be careful. Writing under the byline of an institution like SciAm is a powerful force, and I believe that goes for the blog section as well by virtue of the reputation it leverages.

The reply, of course, is that in the age of interactivity, bloggers will be corrected in comments, and growth can happen there: the writer’s equivalent of “many eyes make all bugs shallow.” But just as in software, or engineering, or physics, you need the right eyes, and then everyone else needs to see the fix. Bloggers don’t always post errata, and even when they do it can be too little, too late. Readers don’t have the time, energy, or (depending on the topic) stomach to read through the comments and stitch the controversy together. Follow-up after follow-up can cause fatigue (check out climate-change fatigue, kids!). And again, it misses the point that the bigger the authority, the larger the responsibility.

The responsibility that comes from possessing authority shouldn’t necessarily cap growth, but should be in the minds of writers as they strike out in new directions. Moving from a position of expertise to one with less isn’t problematic in itself, but if no one else gets the memo—or the writing doesn’t convey that change—then problems can and will occur. And as readers, knowledgeable or otherwise, we should hold people accountable for what they write.