Tag Archives: ethics

Treating like cases alike: bioethics edition

Crash course in philosophy—treat like cases alike.[1]

That means that if you think a much-hyped article on the health benefits of chocolate is unethical, your reasons for that conclusion ought to apply consistently across other similar cases. Put another way: if you think John Bohannon acted unethically in publishing a study that claimed to show a relationship between the success of weight-loss diets and chocolate consumption, fooling the science journalism world, and if your reasons for thinking this bear on relevantly similar cases, then you ought to reach conclusions about those cases similar to the one you reach about Bohannon.

1) For example, you might think—as I do—that whether or not the study was intended as a hoax, some human subjects protection ought to be enforced. You might worry that there’s no mention of research ethics review or informed consent in the journal article that caused the furor, or any of Bohannon’s writings since. Kelly asked Bohannon on io9 what kind of approval he got for the study, if any. I’ll update if anything comes of that.

But if you do have this concern, then such reasoning should lead you to treat this study conducted by Facebook with similar suspicion. We know that no IRB approved the protocol for that study. We know that people didn’t give meaningful consent.[2]

2) Say you are worried about the consequences of a false study being reported widely by the media. It is true that when studies are overhyped, or falsely reported, all kinds of harm can result.[3] One could easily imagine vulnerable people deceived by the hype.

But if you think that, consider that in 2013, 23andMe was marketing a set of diagnostics for the genetic markers of disease in the absence of any demonstrated analytic or clinical validity. They were selling test kits with no proof that the tests could be interpreted in a meaningful way.

I think that the potential impact of the latter is actually much greater than that of the former. One is a journalist with a penchant and reputation for playing tricks on scientific journals. The other is a billion-dollar industry leader backed by Google, of “we eat DARPA projects for breakfast” fame.

But if Bohannon’s actions do meet your threshold for what constitutes an unacceptable risk to others, you are committed to the same concern about any and all studies that pose a risk to distant others at or above that threshold.


If you think Bohannon was out of line, that Facebook was unethical in avoiding oversight, and that it was inappropriate for 23andMe to flout FDA regulations and market their genetic test kits without demonstrating their validity, then you are consistent.

If you don’t, that’s not necessarily a problem. The burden of proof, though, is now on you to show either a) that you hold one of the above reasons, but the cases are relevantly different; or b) that the reasons you think Bohannon is out of line are sufficiently unique that they don’t apply in the other cases.

Personally? I think the underlying motivation behind a lot of the anger I see online is that scientists, and science communicators, don’t like being made out as fools. That’s understandable. But if that’s the reason, then I don’t think Bohannon is at fault. Journals have an obligation to thoroughly review articles, and Bohannon is notorious for demonstrating just how many predatory journals there are in the scientific landscape today. Journalists have a responsibility to do their jobs and critically examine the work on which they report.

Treat like cases alike, and use this moment for what it should be: a chance to reflect on our reasons for making moral judgements, and our commitment to promoting ethical science.


  1. Like anything in philosophy, there are subtleties to this. ↩
  2. Don’t you even try to tell me that a EULA is adequate consent for scientific research. Yes, you. You know who you are. I’m watching you.  ↩
  3. Fun fact: that was the subject of my first ever article!  ↩

#shirtstorm: men hurting science.

Or “why the signs and symbols that create the Leaky Pipeline are unethical, and compromise the integrity of science itself.” This post exists because Katie Hinde asked, and it is just as important as the other writing I’m doing.

If you float through my sector of the Internet, you’ve probably heard or seen something about #shirtstorm: the clown in the Rosetta project—which just landed a robot on a comet—who decided it’d be the height of taste to be interviewed wearing this:

Yes, that’s Rosetta scientist Matt Taylor in a shirt depicting a range of mostly-naked women. The sexist, completely unprofessional character of this fashion choice is pretty obvious. Taylor also doubled down by saying of the mission “she is sexy, but I never said she is easy.”

Way to represent your field, mate.

Better people than me have talked about why the shirt is sexist, why it marginalizes women, why the response is horrid, and why the shirt—as a sign—is bad news. I’ve also seen a lot of defenders of Taylor responding in ways that can be boiled down to “woah, man [because really, it is always “man”], I just came here for the science.”

But the pointy end of that is that this does hurt science. Taylor, and his defenders, are hurting science—the knowledge base—with their actions.

As a set of claims about the world, science is pretty fabulous for the way that claims can be subjected to the scrutiny of testing, replication, and review. Science advances because cross-checking new findings is a function of the institution of science. It’s a system that has accomplished all sorts of amazing things—including putting a robot on a comet.

[Comic omitted. Image courtesy Randall Munroe under the Creative Commons Attribution-NonCommercial 2.5 License.]

Yet science advances only as far, and as fast, as its membership. This has actually been a problem for science all the way back to before it was routinely called “science,” when STEM was more or less just “natural philosophy” (“you’re welcome”—Philosophers). When American science—particularly American physics—was getting started in the 19th century, it went through an awful lot of growing pains trying to institutionalize and make sure the technically sweetest ideas made it to the top of the pile. It is the reason the American university research system and the American Association for the Advancement of Science exist, and why the American PhD system evolved the way it did (and yes, at the time it was basically about competing with Europe).

Every country has its own history, but the message is clear—you only get science to progress by getting people to ask the right questions, answer those questions, and then subject those answers to robust critique.

The problem is that without a widely diverse group of practitioners, you aren’t going to get the best set of questions, or the best set of critiques. And asking questions and framing critiques is highly dependent on the context and character of the questioners.

The history of science abounds with stories in which the person is a key part of asking the question, even as the theory lives on when they die (or move on to another question). Lise Meitner in the snow, elucidating nuclear fission through the liquid-drop model of the atom. Leo Szilard crossing the street, enlightened by the progression of traffic lights into the thought of the nuclear chain reaction. Darwin and his finches. Goodall and her chimpanzees. Bose and the famous lecture that led him to quantum statistics.

The point is that the ideas of great scientists, and the methods they use, depend on the person: where they came from; how they experience the world. In order to find the best science, we need to start with the most robust sample of scientists we can get.

When people are marginalized out of science—women, people of color, LGBTQI people, people with disabilities, people of other religions—the sample size decreases. Possible new perspectives and research projects vanish from science, because a bunch of straight white dudes just can’t think of them. That’s bad science. That’s bad society.

This has real, concrete implications for science and medicine. Susan Dodds, a philosopher and bioethicist at the University of Tasmania, has a wonderful paper called “Inclusion and exclusion in women’s access to health and medicine” (you can find the paper here). Dodds notes that the way our institutions are set up, access to healthcare and medical research is limited by the role of gender. Women’s health issues—again, in care and research—tend to be sidelined unless they have something to do with reproduction. This is to the point that research ostensibly designed to be sensitive to sex and gender often asks questions and uses methodology that limit the validity of experimental results for women, individually or as a group. The scientific community quite literally can’t answer questions properly for lack of diversity, and asks questions badly from an excess of sexism.

You can imagine how that translates across fields, and between different groups that STEM has traditionally marginalized.

So when you defend Matt Taylor, allow people to threaten Rose Eveleth, and tolerate the vitriol that goes on against women—in STEM and out of STEM—you limit the kinds of questions that can be asked of science, and the ways we have of answering those questions.

You corrupt science. You maim it. You warp it.

I realize this shouldn’t be a deciding factor—Matt Taylor’s actions are blameworthy even if he hadn’t been engaged in a practice that contributes to the maiming of science. But for those who can’t be convinced by that, who “just want to be about the science,” take a good, long, hard look at yourself. If the litany of women scientists who never got credit for their efforts wasn’t bad enough, there are generations of women scientists—Curies, Meitners, Lovelaces, and Bourkes—who never were. We’re all poorer for that.

So next time you want to be “just about the science,” tell Matt Taylor to stick to the black polo.

National Security and Bioethics: A Reading List

Forty-two years ago, in July 1972, the Tuskegee syphilis study was reported in the Washington Star and New York Times. Yesterday, a Twitter chat hosted by TheDarkSci/National Science and Technology News Service, featuring Professors Ruha Benjamin and Alfiee M. Breland-Noble, discussed bioethics and lingering concerns about medical mistrust in the African-American community. It was an excellent event, and you can read back through it here.[1]

Late in the chat, Marissa Evans expressed a desire to know some more about bioethics and bioterror, and I offered to post some links to engaging books on the topic.

The big problem is that there aren’t that many books that specifically deal with bioterrorism and bioethics. There are a lot of amazing books in peace studies, political science, international relations, history, and sociology on bioterrorism. Insofar as these fields intersect with—and do—bioethics, they are excellent things to read. But a bioethics-specific, bioterror-centric book is a lot rarer.

As such, the readings provided are those that ground the reader in issues that are important to understanding bioterrorism from a bioethical perspective. These include ethical issues involving national security and scientific research, dangerous experiments with vulnerable populations, and the ethics of defending against the threat of bioterror.

The Plutonium Files: America’s Secret Medical Experiments in the Cold War. If you read one book on the way that national security, science, and vulnerable people do not mix, read Eileen Welsome’s account of the so-called “Human Radiation Experiments.” Read about dozens of experiments pursued on African Americans, pregnant women, children with disabilities, and more, in the name of understanding the biological properties of plutonium—the fuel behind atomic bombs. All done behind the great screen of the Atomic Energy Act, because of plutonium’s status as the key to atomic weapons.

Undue Risk: Secret State Experiments on Humans. A book by my current boss, Jonathan D. Moreno, that covers some of the pivotal moments in state experimentation on human beings. The majority of the cases Moreno covers are those pursued in the interests of national security. Particularly in the context of the Cold War, there was a perceived urgent need to marshal basic science in aid of national security. What happened behind the curtain of classification in the name of that security, however, was grim.

Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World–Told from Inside by the Man Who Ran It. Ken Alibek is hardly what you’d call a reliable narrator; then again, I can’t imagine what being part of a crack Soviet bioweaponeer unit would do to a person.[2] Nonetheless, it is a foundational read on the immense bioweapons enterprise that was built from the 1970s until the end of the Cold War.

Innovation, Dual Use, and Security: Managing the Risks of Emerging Biological and Chemical Technologies. The late Jonathan B. Tucker released this edited volume in 2012; while the debate about dual-use in the life sciences has progressed since then, it is still one of the most thoughtful pieces on bioterrorism, biological warfare, and the governance of the life sciences out there. It is also accessible in a way that policy documents tend not to be. That’s significant, as the book is a policy document: it started out as a report for the Defense Threat Reduction Agency.

This list could be very long, but if I were to pick out a selection of books that I consider essential to my work, these would be among the top of the list.

As an addendum, an argument emerged on the back of the NSTNS chat about whether science is “good.” That’s a huge topic, but it is really important for anyone interested in Science, Technology, Engineering and Mathematics and their intersection with politics and power. As I stated yesterday on Twitter, however, understanding whether “science is good” requires understanding what the “science” bit means. That’s not altogether straightforward.

Giving a recommendation on that issue involves stepping into a large and relatively bitter professional battle. Nonetheless, my first recommendation is always Philip Kitcher’s Science, Truth, and Democracy. Kitcher carefully constructs a model of how agents interact with scientific methods and tools, and in doing so identifies how we should make ethical judgements about scientific research. I don’t think he gets everything right, but that’s kind of a given in philosophy.

So, thousands of pages of reading. You’re welcome, Internet. There will be a test on Monday.


  1. I’ll update later with a link to a Storify that I believe is currently being built around the event.  ↩
  2. Well, I can. It is called “Ken Alibek.”  ↩

That Facebook Study: Update

UPDATE 30 June 2014, 8:00pm ET: Since posting this, Cornell has updated their press release to state that the Army did not fund the Facebook study. Moreover, Cornell has released a statement clarifying that their IRB

concluded that [the authors from Cornell were] not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Where this leaves the study, I’m not sure. But clearly something is amiss: we’re still sans ethical oversight, but now with added misinformation.

 ***

So there’s a lot of news flying around at the moment about the study “Experimental evidence of massive-scale emotional contagion through social networks,” also known as That Facebook Study. Questions are being asked about the ethics of the study; while I want to post a bit more on that issue later, a couple of facts for those following along.

Chris Levesque pointed me to a Cornell University press release noting that the study in question received funding from the US Army Research Office. That means the study did receive federal funding; receipt of federal funding comes with a requirement of ethics oversight, and compliance with the Common Rule. It is also worth noting that the US Army Research Office has their own guidelines for research involving human subjects:

Research using human subjects may not begin until the U.S. Army Surgeon General’s Human Subjects Research Review Board (HSRRB) approves the protocol [Article 13, Agency Specific Requirements]

and

Unless otherwise provided for in this grant, the recipient is expressly forbidden to use or subcontract or subgrant for the use of human subjects in any manner whatsoever [Article 30, “General Terms and Conditions for Grant Awards to For-Profit Organizations”]

***

I’ve also been in touch with Susan Fiske, the editor of the study. Apparently, the Institutional Review Board (IRB) that approved the work is Cornell’s IRB. That IRB found the study to be ethical:

on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. The Cornell IRB considered it a pre-existing dataset because [Facebook] continually creates these interventions, as allowed by the user agreement (Personal Communication, Fiske, 2014).*

So, there’s some clarification.

Still, I can’t buy the Cornell IRB’s justification, at least on Fiske’s recounting. Manipulating a user’s timeline with the express purpose of changing the user’s mental state is, to me, a far cry from business as usual. Moreover, I’m really hesitant to call an updating Facebook feed a “pre-existing dataset.” Finally, better people than I have talked about the lack of justification the Facebook user agreement provides.

This information, I hope, clarifies a couple of outstanding issues in the debate so far. Personally, I’d still like to see a lot more information about the kind of oversight this study received, and more details on the Cornell IRB’s analysis.

* Professor Fiske gave her consent to be quoted in this post.

How safe is a safe lab? Fouchier and H5N1, again

A quick update on gain-of-function; I’m between papers (one submitted, one back from a coauthor with edits), and gain-of-function is back in the news.

Ron Fouchier and Yoshihiro Kawaoka have responded to a new paper on gain-of-function, which appeared in PLoS Medicine last week. The paper, by Marc Lipsitch and Alison P. Galvani, takes three popular conceptions of GOF research to task: 1) that it is safe; 2) that it is helpful in the development of vaccines; and 3) that it is the best option we have for bettering our understanding of flu viruses. I’m going to concentrate, here, on Fouchier’s reply to concerns about safety when performing gain-of-function research.

For context, Lipsitch and Galvani argue that

a moderate research program of ten laboratories at US BSL3 standards for a decade would run a nearly 20% risk of resulting in at least one laboratory-acquired infection, which, in turn, may initiate a chain of transmission. The probability that a laboratory-acquired influenza infection would lead to extensive spread has been estimated to be at least 10%.

A result that Fouchier calls “far fetched.” His reason?

The data cited by Lipsitch and Galvani include only 11 laboratory-acquired infections in the entire USA in 7 years of research, none of which involved viruses, none of which resulted in fatalities, and—most importantly—none of which resulted in secondary cases.

But Fouchier’s objection misses a couple of key points. The data Lipsitch and Galvani use come from a 2012 paper in Applied Biosafety, which looked at the reported number of thefts, losses, or releases (TLR) of pathogens in American labs from 2004 to 2010. What they found was that TLRs have been increasing steadily, from 57 in 2007 to 269 in 2010. One reason given by Lipsitch and Galvani is that in 2008, significant investment was made in outreach and education about the need for accurate reporting of TLRs.

The other likely culprit? The proliferation of BSL–3 and BSL–4 laboratories.

Between 2004 and 2010, the number of BSL–3 laboratories in the USA—that we know of—more than tripled, from 415 to 1,495. This is likely an underestimate, however, because there is no requirement to track the number of BSL–3 labs that exist. The number of BSL–4 labs has also increased to 24 worldwide; 6 of these are in the USA. When you get an explosion of labs, you are likely to get a corresponding increase in laboratory accidents. That there have only been eleven laboratory-acquired infections reported from hundreds of releases should be a cause for relief; but it doesn’t show that gain-of-function research is safe.
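For intuition, the quoted risk estimate is just cumulative probability arithmetic. Here is a minimal sketch in Python, assuming a 0.2% chance of a laboratory-acquired infection per lab per year, a rate I have back-calculated from the “ten labs, a decade, nearly 20%” figure above rather than taken verbatim from Lipsitch and Galvani’s paper:

# Cumulative risk of at least one laboratory-acquired infection.
# ASSUMPTION: p_per_lab_year is back-calculated from the quoted
# "ten labs, a decade, nearly 20%" figure, not taken from the paper.
p_per_lab_year = 0.002        # assumed infection risk per lab per year
lab_years = 10 * 10           # ten BSL3 labs running for a decade

p_at_least_one = 1 - (1 - p_per_lab_year) ** lab_years
print(f"P(at least one infection): {p_at_least_one:.1%}")  # ~18.1%, i.e. "nearly 20%"

# Chain with the quoted estimate that such an infection has at least
# a 10% chance of leading to extensive spread:
p_spread = 0.10
print(f"P(infection leading to spread): {p_at_least_one * p_spread:.1%}")  # ~1.8%

On these assumptions, every additional laboratory adds lab-years at risk, and the cumulative probability climbs accordingly.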

The problem, unacknowledged by Fouchier, is one of scale—as the number of labs increases, the number of experiments with dangerous pathogens (including H5N1) is likely to increase. Laboratory accidents are not a matter of if, but a matter of when. Significantly increasing the number of labs brings that “when” significantly closer. And, in the words of Lipsitch and Galvani,

Such probabilities cannot be ignored when multiplied by the potential devastation of an influenza pandemic, even if the resulting strain were substantially attenuated from the observed virulence of highly pathogenic influenza A/H5N1.

More problematic is that Fouchier fails to note that laboratory containment failures have killed in the last decade—just not in the USA. In 2004, a researcher from the Chinese National Institute of Virology fell ill with SARS. While ill, she infected a nurse, who infected five others, and her mother, who died.

This represents, contrary to Fouchier’s claim, a laboratory containment failure that involved a virus, did cause secondary (and tertiary) infections, and did kill. The study on which Lipsitch and Galvani base their estimate only covered the USA. A worldwide view is likely to be more concerning, not less. That’s why Lipsitch and Galvani refer to their estimate as “conservative,” and why for years there have been calls for heightened security when it comes to pandemic influenza studies.

Fouchier is right that scientific research has never caused a virus pandemic. But it’s a glaring logical error, to say nothing of hubris, to use that claim to dismiss the concern that it could happen.

There are other issues to be had with the responses of Fouchier and Kawaoka, but the overall message is clear: they continue to dismiss concerns that gain-of-function studies are risky, and provide little in the way of tangible benefits in the form of public health outcomes. I’m hoping for a more thoroughgoing reply from Lipsitch, who earlier this morning expressed some interest in doing so. This is an important debate, and needs to be kept open to a suitable range of voices.

On Science and Mindfulness

I’m a practitioner of Buddhism, not a scholar of Buddhism. As someone who writes about responsibility in writing, I want to acknowledge first and foremost that I’m speaking from the position of a single practitioner who comes from a small branch of a particular family in a particular school of Buddhism.

Before writing this, I sat with my left foot atop my right thigh in the “half-lotus” position. I sat for forty minutes, breathing in and out through my nose, my concentration set—as much as my will would allow—on the point three finger-widths below my navel: a point the Chinese refer to as dan tien. I meditated as a practitioner of the Cao Dong family of Ch’an Buddhism, with an eye to cultivating mindfulness and, eventually, to finding my own enlightenment.

I didn’t come to mindfulness through Buddhism, however. In fact, it was the other way around, and via a route at which many other Buddhists would probably raise an eyebrow. I started cultivating mindfulness through martial arts. Among other things, Ch’an is associated with the Shaolin Temple, and can count the martial arts as both a tool of, and an entry point into, the practice of Buddhism. It is common in my tradition that the boxing and mindfulness come first, and the Buddhism comes later.

So when Jan Henderson kindly passed on an editorial decrying science’s Not So Sudden Interest in mindfulness, I was intrigued. Richard Horton, editor-in-chief of The Lancet, responded to Kate Pickert’s “The Mindfulness Revolution” (also here), saying:

“It may be a mistake to dissect [Buddhism], discarding Buddhism’s broader ideas on wisdom and morality (even its ultimate purpose), and putting only a single part to use—even if it is a very good use.”

That is, the surge of interest in the cultivation of mindfulness by Silicon Valley, the NIH, and even the Defense Advanced Research Projects Agency is somehow undermining the greater purpose of mindfulness as an element of Buddhism, and this is a Bad Thing.

As a Buddhist, I have to respectfully disagree.

Horton’s article does raise serious questions about the current interest in mindfulness, but whether or not it should remain a part of Buddhism isn’t really one of them. For one, even if the term “mindfulness” is by and large associated with Buddhism, the types of things to which we refer when talking about mindfulness are not exclusively Buddhist. Practices of being more aware of your surroundings, your own body, and those around you can be found in a great number of the world’s religions, and in a range of comprehensive practices beyond the religious.

Moreover, as a Buddhist I’m committed to the idea that human beings shouldn’t suffer as much as they do. If mindfulness offers a way to alleviate some of that suffering then that is a great, great thing. I’d gladly see the same well-being I’ve experienced through my practice improve the lives of others. Mindfulness is a part of Buddhism, but it is also valuable in its own right.[1]

It is imperative, however, to subject practical methods to empirical scrutiny. In my tradition, it isn’t enough to just do—or not do—because someone else is doing or not doing it. You must subject practice, rigorously and straightforwardly, to your own experience.

People can critically evaluate a practice on their own, but science also provides us with a powerful tool for doing so. Mindfulness is vulnerable to being co-opted by a range of more or less spurious characters and self-styled gurus. If science can help confirm what works and what doesn’t, that’s a good thing. At my most optimistic, nothing would make me happier than a systematic demonstration that mindfulness training is effective, safe, and cost-effective, and thus worthy of consideration by, say, the Medical Services Advisory Committee as a legitimate practice deserving of subsidization by the Australian Government.

Of course, science’s involvement with mindfulness could be extremely problematic. It would be unfitting for scientists to claim that they have “discovered” mindfulness. As Ben Lillie has noted in the case of science communication, science doesn’t do anyone any favors when it “discovers” millennia-old practices that clearly work. Science can’t discover mindfulness—that’s been done. It can, however, help us explain certain aspects of mindfulness in new ways, and give us a particular perspective from which to examine and critique mindfulness.

There is something important in Horton’s editorial that I’m sympathetic to as a Buddhist, but I don’t think you need to be a Buddhist to have this concern. Pickert’s article is full of stories about people using mindfulness, but these stories are often about the rich and privileged. In these stories, mindfulness is often a tool they use so they can go back and do even more work. On my reading, the stories of “X uses mindfulness to be a better innovator/entrepreneur/thought leader/whatever” lie in tension with the latter section of her article, on the possibility that mindfulness could just be a good thing, even if it doesn’t help you make that next killer app.

Make no mistake, I think that mindfulness in one’s professional life is a good thing; I’m a firm believer that moralizing another’s mindfulness is deeply problematic. Nonetheless, I do worry that this new surge in mindfulness will remain primarily a tool of the privileged, even as I say that I find it exciting that it is being used at all. As a Buddhist, and also someone who believes in robust and equitable public health including mental health, I think we can do much better than that.

It is from there that I think our critique should begin. Mindfulness without Buddhism—and, as far as I’m concerned, mindfulness within Buddhism—is a tool for navigating the world. And as with most tools, there are better and worse uses. We should be aiming, then, to make sure that mindfulness training and practice, scientifically informed and pursued, is distributed fairly, and approached carefully.

We should be mindful about how we practice, conceive of, and wield mindfulness.


  1. Though I’d say he needs to be a little clearer on that connection. The role, place, and meaning of mindfulness isn’t the same for every school in Buddhism, and one of the issues I had with Horton’s account is that it gave a remarkably monolithic view of Buddhism. In my experience, nothing is further from the truth. I won’t pursue that further here, however.  ↩

The corrupt, and the corrupted

Corruption is something that’s hard to describe, but we usually “know it when we see it.” Justice Stewart’s words were originally intended for pornography, but corruption suffers from an indeterminacy of its own. Even when we agree that corruption is happening, it is sometimes difficult to know what it is that is corrupt, and what is doing the corrupting. This last bit—what does the work of corruption—is often a matter of debate.

That’s the subject of Lawrence Lessig’s Daily Beast article today, which makes use of an exchange between Senators John McCain (R-AZ) and Mitch McConnell (R-KY). McCain, in 1999, claimed that the role of campaign contributions of the size experienced in the USA “corrupted our political ideals;” an idea that McConnell objected to by demanding to know who was corrupt. Lessig argues, in his article, that even in the absence of a corrupt person, a system may still be corrupt. On his reading, McCain’s claim

…is not about bad people doing bad things. The complaint is against a bad system, which drives good people to behave in ways that defeat the objectives of the system as a whole.

It is an interesting notion, and an important one—that systems can be corrupt, even when people aren’t. Institutional arrangements, when corrupt, can drive people in directions that are unfavorable.

I think, however, that Lessig is too quick in what he attributes to McCain. Though he later walked the claim back, denying “that any individual or person is guilty of corruption in a specific way,” McCain claims that the campaign contribution system at present is a corrupting influence—one that corrupts everyone:

In truth, we are all shortchanged by soft money, liberal and conservative alike. All of our ideals are sacrificed. We are all corrupted. I know that is a harsh judgment. But it is, I am sorry to say, a fair one. And even if our own consciences were to allow us to hide from it, the people we are privileged to serve will not.

The importance of this comment can’t be overstated. McCain isn’t necessarily saying that anyone is corrupt. He is, however, saying that he and others are corrupted, and that they’ve been compromised in some way.[1] I’ve talked about being compromised elsewhere, but here I want to pull apart this notion of being corrupted—compared to being corrupt—a bit more.

The whole point Lessig wants to make is that institutional arrangements—for example, the effect of campaign contributions on the political process—change behavior. But in talking about the ways that corrupt institutions pervert individual actions, it is still important to talk in the language of individuals. The effect that McConnell wanted to (wrongly) identify as unidirectional—corrupt people cause corrupt institutions—flows in the other direction as well. Corrupt institutions do bear on individuals, as Lessig claims, but in doing so they can leave those individuals corrupted, or even corrupt.

The difference between being corrupted, and being corrupt, is—if anything—very fine indeed. It seems plausible, however, to say that someone has been corrupted, or has had their actions corrupted, even if we don’t want to go so far as to say they are corrupt. That is, they’ve been compromised, but compromised under duress, and with few other options available.

This, of course, is a fine line to tread. Ends don’t justify means, but if your means are so limited as to make a problematic means the only way to a desirable end, then we make the best of what we have. Generally, we probably want to elect representatives who intend to bring about good outcomes, and promote reforms. These representatives may be compromised in doing that—no-one is perfect—but it seems that sometimes the price of the right person not stepping into that arena is too high.

So maybe, in principle, we can have a corrupt system, with corrupted, compromised members, but no one genuinely corrupt. Of course, that seems somewhat optimistic: if there are people in Congress actively opposing reform designed to redress a corrupt and corrupting institution, then they are most likely corrupt. Moreover, those who exploit a particular corrupt institutional arrangement for their own gain are certainly corrupt.

I don’t mean corrupt in the sense of unlawful activity, and I certainly believe that this is what McCain was attempting to cover for when denying that anyone was guilty of corruption. Representatives, Senators, and presidential nominees can accept and use campaign contributions, and do so in a lawful manner. Lobbyists aren’t breaking laws. But they are, at times, doing the wrong thing.

We should also keep in mind that those who become compromised are still doing something wrong, even if there is little or no alternative. And insofar as they are overly tardy in assisting in rectifying the system that corrupts them, they should be held to account. Hopefully, enough of these corrupted people will help in the reforms Lessig is championing, such as the American Anti-Corruption Act (ACA).

What legislation like the ACA hopes to accomplish is to make the unethical unlawful. That’s a great thing. But it is also important to call out the individually corrupt, and recognize the corrupted, when we see them. Institutional reform is vital, but regulation and law rarely make corruption go away by themselves—corruption often occurs within the scope of lawful activity, and the genius of motivated people out to pervert something good can be nothing short of breathtaking. Preventing corruption does require regulation and legislation, but it also requires vigilance and loud voices. We should make sure that we pay attention to the individual, as well as the institutional; to the corrupted, as well as the corrupt.

[1] McCain uses the term “corrupt” in three different ways in that speech. In his first usage, he refers to the Government as corrupt. In his second and fourth, he refers to the presence of campaign contributions being a corrupting factor. In his third—used in the quote above—he refers to the representatives as corrupted.

States of Emergency: What You Should Know, and What You Should Do

First: information. Check here for a map of the fire and its spread; monitor the Rural Fire Service (RFS) for updates and advisories. Listen to ABC radio for breaking news if you are in the car. And please, please, be careful. You cannot salvage your ruined property and life if you are dead.

There are currently 58 fires burning across New South Wales, of which 14 are out of control. The fires have taken out hundreds of homes and killed one person, with continued high temperatures and winds making this fire season the most brutal in 45 years. The largest fire, at Lithgow, has taken out 40,000 hectares of land, and is on course to merge with other fires in the region to create a mega-fire: a fire that “exhibits fire behaviour characteristics that exceed all efforts at control, regardless of the type, kind, or number of fire fighting resources deployed.”

Premier Barry O’Farrell has declared a state-wide state of emergency in response to the fires, and I want to explain what that entails. States of emergency are often contentious and misunderstood in civil life as—especially in this country—we aren’t used to wars, pandemics, or catastrophes on our shores. Yet understanding the state of emergency will help people understand what they should and should not do.

States of emergency in NSW are described by Division 4 of the State Emergency and Rescue Management Act 1989. Under the Act, police and emergency workers can evacuate people and destroy or appropriate property, or cut off power, in aid of fighting the fires and protecting public safety. It is an offence to disobey or obstruct personnel engaged in the emergency response, and responders are authorised to use “reasonable force” to achieve their goals. Responders, moreover, are not held liable for acts undertaken in good faith and in aid of the response effort. The Act also contains provisions for people affected by the emergency response to claim compensation for property damaged by responders.

The first thing is to understand the threat that justifies the state of emergency. We’ve had major fires in Australia since time out of mind, but these are the worst in NSW in almost half a century. Further, a mega-fire is not merely an out-of-control fire; it is an uncontrollable fire. This type of threat necessitates a response above and beyond typical fire fighting, and it is that need that justifies the state of emergency.

This isn’t martial law, however, and you do have rights. There are lots of provisions within the Act to ensure compensation should you be affected by an emergency action. However, noncompliance is a crime; obstructing responders puts you, responders, and whole communities in jeopardy. Complying in an emergency sucks for everyone—responders don’t want to be in this position any more than you—but the risks to everyone should you not comply are immense. If you feel you’ve been coerced in bad faith, you should definitely sue for compensation. Just make sure you do it alive and well after the fires, and not posthumously.

Fire fighting is not necessarily intuitive, and the expertise of those responding should be respected. Australia has some of the best people in the world when it comes to fire prevention, preparation, and management. Don’t undervalue their skills, and listen to their directions.

The 2013-2014 fire season is incredibly dangerous, and the potential costs of not cooperating with emergency responders are very high. Hopefully, they’ll never have to use the powers they’ve been granted; responders know that the longer they have to fight, the more chance they have of dying, and they want this over as much as you do. So know what your rights are and how to enforce them, but also know the right time to exercise them.

If you want to help, get informed and follow the instructions over at the RFS website. Include your pets in your plans. And if you can, check the map and stay well away from the fire zones.

My dad’s partner lives out at Pheasant’s Nest, which is in the path of the fires in the Southern Highlands. Last time I heard they were prepared and still safe, but the next couple of days will be tense. My thoughts go out to them.

Professionalism in Science Writing

If you’re here, there is a good chance you know what I’m talking about. Bora Zivkovic, former editor at Scientific American and cofounder of the ScienceOnline conference, sexually harassed a number of women: of those who named Zivkovic and identified themselves, we know Monica Byrne, Hannah Waters, and Kathleen Raven. The circle of Twitter I occupy has veritably exploded with the news, and I suspect will—should—continue to discuss and work with these revelations for some time to come.

Kelly Hills, writing about biologist Dr. Danielle Lee being called a whore for refusing to write for Biology Online for free, mentioned a post of mine on science writing, authority, and responsibility. Considering the events of the last week, it seems an apt time to revisit issues of authority and power in the science writing community.

Talking with Thomas Levenson and Joanne Manaster, I claimed that Zivkovic is, and given what we know was, bad at his job. This proved divisive, and I want to paint a clearer picture of what I mean, in light of an idea that builds on authority and responsibility: professionalism.

Of Professionals and Professionalism

Science writing—journalism, blogging, communication—is an essential activity promoting a moral good. The scientific enterprise creates value both as it generates knowledge, and as it allows that knowledge to be used to improve people’s lives. That knowledge, however, only realises a small part of its value when kept wrapped up in papers and conference proceedings. We need good ways of disseminating scientific knowledge, in order to promote science and its benefits, inform citizens of what happens to part of their taxes, and promote general education (which has a whole suite of follow-on benefits). Science writing sits at the intersection between the lab coat and the person in the street; at its best it can make real differences to people’s lives.

When I look at science writers as a group, I see people pursuing the morally important activity of disseminating scientific knowledge. They use a special set of skills: I challenge anyone who has tried to write to deny the significance of the skill of writing well. They teach each other the tools of their trade through collaboration, mentorship, conferences and social networks. And finally, they need to be autonomous to pursue their trade.

In my line of work, we call those people professionals. The term is typically used to describe doctors, lawyers, and (historically) the clergy, but journalism is very much like a profession. Science writing fits even better into this paradigm, by virtue of its subject matter.

What Zivkovic did, however, was unprofessional in the extreme. Now, not every act of wrongdoing by a professional makes them a bad professional—a doctor cheating on their spouse doesn’t make them a bad doctor. Rather, as Hills has already noted, Zivkovic abused his power: the power he had as mentor and gatekeeper to the science communication world. By diminishing the self-worth of people vulnerable to him—by virtue of the role he occupied in their professional lives—he acted contrary to the institution in which he resided.

Zivkovic has also harmed the community at large—the “collateral damage” of which Janet Stemwedel writes. That, to me, is one of the lessons of the #ripplesofdoubt hashtag. Even if Zivkovic’s abuses of power didn’t pervert his judgements about the quality of writers and their work (and there have been serious questions asked about the degree to which they did), the mere possibility is enough to cause havoc within the community. People who abuse their power change the communities they inhabit as much by their actions as by their omissions; Zivkovic’s transgressions were a corruption of the role he held. This is what made him bad at his job.

The Road Ahead

The revelations about Zivkovic’s actions have opened a wider conversation about the overall direction of the science writing community. Chad Orzel recently pointed out that science blogging has become “less a medium than an institution;” he’s also pointed out that ScienceOnline has become caught between the image that “everyone is equal in the big happy Science Online family,” and the power structures that certainly exist within the community. Hills has also noted that the image of ScienceOnline as a group of friends hanging out actually makes inclusiveness more difficult. The question of where ScienceOnline goes in the wake of Zivkovic’s actions has dovetailed into a larger discussion about what ScienceOnline should ultimately look like.

I believe that incorporating professionalism will improve the community’s ability to hold perpetrators accountable, and to guard against further harassment. It will also help focus questions about what the community should strive to be. As ScienceOnline looks to continue its mission—and Scientific American, I sincerely hope, does a bit of soul-searching of its own—knowing what to fix can be aided by reference to what great practice must look like.

A vibrant professional culture in science writing, to me, means offering a diverse and inclusive set of perspectives. It also means having the processes to foster and encourage individuals with those perspectives to pursue both the deep knowledge required to write excellent pieces, and the tools to make that knowledge entertaining and accessible. It means—especially in the context of freelancers, who are incredibly vulnerable to abuses of power—protecting individuals from harassment by others within the community. It means establishing a stable and reliable platform for those harassed, assaulted, or otherwise harmed by others to raise their voices with the knowledge that they will be believed, and the matter fully and compassionately investigated; a platform that can, where necessary, criticise and sanction the leaders of the community. It means people in the community knowing—again, and with confidence—that success or failure in their field is judged on the quality of the work, not the unprofessional standards of the gatekeepers.

All of those things are necessary. Remove one, and you damage the edifice on which people’s livelihoods rest.

It will take time, but individuals are already moving to offer suggestions on what comes next, such as Maryn McKenna’s thoughtful analysis of where ScienceOnline should go from here. Understanding the different elements of professionalism in science writing allows people looking for solutions to ask “does this allow us—the community—to better serve the needs of our members in fulfilling our professional mandate?” ScienceOnline, to their credit, already has a mission that loosely tracks this professional model. I think that an enduring legacy for ScienceOnline would be to build the safety of its members not simply as a separate policy, but as a central feature of this mission.

To finish, I want to acknowledge that Monica Byrne, Hannah Waters, and Kathleen Raven have done something truly heroic by sharing their stories, and bringing to light this unconscionable abuse of power. I’ve spent my words here on the institution that is science writing, but I want to make clear that any critique of institutions should begin with the recognition of personal stories. At great risk to themselves these remarkable people have exposed corruption within their community. That’s true professionalism.

Who is responsible for all those cranks?

So a running theme in my work is responsibility for communication—how we understand our obligations to communicate truthfully, accurately, and with the ends we seek. So it was with great interest that I watched Suzanne E. Franks (TSZuska), Kelly Hills (Rocza), and Janet D. Stemwedel (Docfreeride) hold a conversation in the wake of Virginia Heffernan’s “Why I’m a creationist.” I’m not going to spoil the amazing train-wreck that is Heffernan’s post; if you’d like to see some of the fireworks that ensued you should head across to watch the fallout as Thomas Levenson and Carl Zimmer got stuck in.

The conversation between Franks, Hills, and Stemwedel is interesting, I think, for the way they navigated Heffernan’s alleged status as a Foucauldian, and how this linked up with issues with postmodernism more generally. Postmodernism is an area I’ll leave to the experts above; what interests me is how we connect the bad apples, the cranks, and the downright malevolent with broader criticism of a field.

I’ll work with my own field, the analytic tradition of moral and political philosophy. It has all sorts of bad apples and problematic characters: we seem destined (the horror) to include people like Robert Nozick. Now I am about as anti-Nozick as it gets, but I can’t deny that he was an American political philosopher from the same cohort as John Rawls; someone whose theories are not so important to me as is one of his students, whom I count as a friend and mentor. I feel I kind of have to grind my teeth and allow Nozick as part of the “family,” albeit not a part I much like.

But do I have to own responsibility for every obnoxious kid that reads Nozick? And takes him seriously? Yikes. That sounds terrible. Yet perhaps in some cases I do—if there is a professor out there teaching that Anarchy, State and Utopia is God’s Divine Word, I’m probably stuck with their students as a product of my field’s “sins.” 

I will hang my head and cop the criticism that analytic philosophy has produced a frightening number of first-rate assholes in its time.

Will I, however, take responsibility for right-wing libertarians and their fascination with Adam Smith’s “invisible hand?” I’m hesitant—primarily because most libertarians just casually gloss over The Theory of Moral Sentiments and run straight for The Wealth of Nations, and that just seems like intellectual laziness and cherry-picking at its worst. I can be held responsible for bad writing; bad theories improperly rebuffed; or teaching that is antiquated, bigoted, or just wrongheaded. But it is much harder, I think, to say that I am responsible because a whole group of people found it inconvenient to read the other half of a body of work.

These, I think, demonstrate a (non-exhaustive) series of relations we might have with certain elements of our intellectual movements and traditions, who use common language to achieve results that don’t sit right with us.

We could, of course, just reject that someone is properly part of our practice. This is, I’ve no doubt, as much political as it is a question of whether someone’s practice possesses the necessary or sufficient conditions to be classed as part of one’s group—we want to be able to say that some practices that take our name are simply Doing It Wrong. Yet we don’t want to allow just anyone to get away with that at any time, because it removes a powerful and often legitimate critique: being able to point at an element of some set of people and say “they are a problem, and they are indicative of some larger issue.”

Another way we could approach this is to say “well that’s an instance of Bad X, but Bad X is not the same as X being bad.” That’s an important part of managing a field’s boundaries. The problem, of course, comes in identifying at what point instances of Bad X become signs of X being bad. My co-supervisor, Seumas Miller, has done work on institutional corruption that I think would be interesting to apply here, but that’s a paper in itself.

Finally, we could acknowledge and go “that’s not just an instance of bad X, but a sign that there is something wrong with the way we practice or communicate X. We should fix that.” I’ve talked before about the need for responsibility in science writing, and that applies to my field as much as any other.

Which of these is Heffernan? Is she Just Doing It Wrong, a bad po-mo (but not evidence of po-mo being bad), or a sign of something more problematic? I don’t know. Franks, Hills, and Stemwedel, I think, cover all three possibilities. Maybe Heffernan is a combination, a hybrid, or something altogether different from the three distinctions I’ve made.

I’ve thrown the original chat up in Storify for anyone who wants to see the source I’m working from. It has been edited inexpertly, but I hope without leaving anything important out (though I did leave out an entertaining discussion on Foucault and Emo-pop that you’ll have to track down on Twitter). You can find it here.