Tag Archives: ethics

Treating like cases alike: bioethics edition

Crash course in philosophy—treat like cases alike.[1]

That means that if you think a much-hyped article on the health benefits of chocolate is unethical, your reasons for that conclusion ought to apply consistently across other similar cases. Put another way: if you think John Bohannon acted unethically in publishing a study that claimed to show a relationship between the success of weight-loss diets and chocolate consumption, fooling the science journalism world, and if your reasons for thinking so bear on relevantly similar cases, then you ought to arrive at similar conclusions about those cases as you do about Bohannon.

1) For example, you might think—as I do—that whether or not the study was intended as a hoax, human subjects protections ought to have applied. You might worry that there’s no mention of research ethics review or informed consent in the journal article that caused the furor, or in any of Bohannon’s writings since. Kelly asked Bohannon on io9 what kind of approval he got for the study, if any. I’ll update if anything comes of that.

But if you do have this concern, then such reasoning should lead you to treat this study conducted by Facebook with similar suspicion. We know that no IRB approved the protocol for that study. We know that people didn’t give meaningful consent.[2]

2) Say you are worried about the consequences of a false study being reported widely by the media. It is true that when studies are overhyped, or falsely reported, all kinds of harm can result.[3] One could easily imagine vulnerable people deceived by the hype.

But if you think that, consider that in 2013, 23andMe was marketing a set of diagnostics for the genetic markers of disease in the absence of any demonstrated analytic or clinical validity. They were selling test kits with no proof that the tests could be interpreted in a meaningful way.

I think that the impact of the latter is actually much greater than that of the former. One is a journalist with a penchant and reputation for playing tricks on scientific journals. The other is a billion-dollar industry leader, backed by a Google that eats DARPA projects for breakfast.

But if Bohannon’s actions do meet your threshold for what constitutes an unacceptable risk to others, you are committed to concerns about any and all studies that could harm distant others that reach that threshold.


If you think Bohannon was out of line, that Facebook was unethical in avoiding oversight, and that it was inappropriate for 23andMe to flout FDA regulations and market their genetic test kits without demonstrating their validity, then you are consistent.

If you don’t, that’s not necessarily a problem. The burden of proof, though, is now on you to show either a) that you hold one of the above reasons, but the cases are relevantly different; or b) that the reasons you think Bohannon is out of line are sufficiently unique that they don’t apply in the other cases.

Personally? I think the underlying motivation behind a lot of the anger I see online is that scientists, and science communicators, don’t like being made out as fools. That’s understandable. But if that’s the reason, then I don’t think Bohannon is at fault. Journals have an obligation to thoroughly review articles, and Bohannon is notorious for demonstrating just how many predatory journals there are in the scientific landscape today. Journalists have a responsibility to do their jobs and critically examine the work on which they report.

Treat like cases alike, and use this moment for what it should be: a chance to reflect on our reasons for making moral judgements, and on our commitment to promoting ethical science.


  1. Like anything in philosophy, there are subtleties to this ↩
  2. Don’t you even try to tell me that a EULA is adequate consent for scientific research. Yes, you. You know who you are. I’m watching you.  ↩
  3. Fun fact: that was the subject of my first ever article!  ↩

#shirtstorm: men hurting science.

Or “why the signs and symbols that create the Leaky Pipeline are unethical, and compromise the integrity of science itself.” This post exists because Katie Hinde asked, and it is just as important as the other writing I’m doing.

If you float through my sector of the Internet, you’ve probably heard or seen something about #shirtstorm: the clown in the Rosetta project—which just landed a robot on a comet—who decided it’d be the height of taste to be interviewed wearing this:

Yes, that’s Rosetta scientist Matt Taylor in a shirt depicting a range of mostly-naked women. The sexist, completely unprofessional character of this fashion choice is pretty obvious. Taylor also doubled down by saying of the mission “she is sexy, but I never said she is easy.”

Way to represent your field, mate.

Better people than me have talked about why the shirt is sexist, why it marginalizes women, why the response is horrid, and why the shirt—as a sign—is bad news. I’ve also seen a lot of defenders of Taylor responding in ways that can be boiled down to “woah, man [because really, it is always “man”], I just came here for the science.”

But the pointy end of that is that this does hurt science. Taylor, and his defenders, are hurting science—the knowledge base—with their actions.

As a set of claims about the world, science is pretty fabulous for the way that claims can be subjected to the scrutiny of testing, replication, and review. Science advances because cross-checking new findings is a function of the institution of science. It’s a system that has accomplished all sorts of amazing things—including putting a robot on a comet.

Image courtesy Randall Munroe under the Creative Commons Attribution-NonCommercial 2.5 License

Yet science advances only as far, and as fast, as its membership. This has actually been a problem for science all the way back to before it was routinely called “science,” when STEM was more or less just “natural philosophy” (“you’re welcome”—Philosophers). When American science—particularly American physics—was getting started in the 19th century, it went through an awful lot of growing pains trying to institutionalize and make sure the technically sweetest ideas made it to the top of the pile. It is the reason the American university research system exists, the American Association for the Advancement of Science exists, and why the American PhD system evolved the way it did (and yes, at the time it was basically about competing with Europe).

Every country has its own history, but the message is clear—you only get science to progress by getting people to ask the right questions, answer those questions, and then subject those answers to robust critique.

The problem is that without a widely diverse group of practitioners, you aren’t going to get the best set of questions, or the best set of critiques. And asking questions and framing critiques is highly dependent on the context and character of the questioners.

The history of science abounds with stories in which the person is a key part of asking the question, even as the theory lives on when they die (or move on to another question). Lise Meitner in the snow, elucidating the liquid drop model of nuclear fission. Leo Szilard crossing the street, enlightened by the progression of the traffic lights into the thought of the nuclear chain reaction. Darwin and his finches. Goodall and her chimpanzees. Bose and the famous lecture that led him to his theory of quantum statistics.

The point is that the ideas of great scientists, and the methods they use, depend on the person. Where they came from; how they experience the world. In order to find the best science, we need to start with the most robust starting sample of scientists we can.

When people are marginalized out of science—women, people of color, LGBTQI people, people with disabilities, people of other religions—the sample size decreases. Possible new perspectives and research projects vanish from science, because a bunch of straight white dudes just can’t think of them. That’s bad science. That’s bad society.

This has real, concrete implications for science and medicine. Susan Dodds, a philosopher and bioethicist at the University of Tasmania, has a wonderful paper called “Inclusion and exclusion in women’s access to health and medicine” (you can find the paper here). Dodds notes that, the way our institutions are set up, access to healthcare and medical research is limited by gender. Women’s health issues—again, in both care and research—tend to be sidelined unless they have something to do with reproduction. It gets to the point that research ostensibly designed to be sensitive to sex and gender often asks questions, and uses methodologies, that limit the validity of its experimental results for women, individually or as a group. The scientific community quite literally can’t answer questions properly for lack of diversity, and asks questions badly from an excess of sexism.

You can imagine how that translates across fields, and between different groups that STEM has traditionally marginalized.

So when you defend Matt Taylor, allow people to threaten Rose Eveleth, and tolerate the vitriol that goes on against women—in STEM and out of STEM—you limit the kinds of questions that can be asked of science, and the ways we have of answering those questions.

You corrupt science. You maim it. You warp it.

I realize this shouldn’t be a deciding factor—Matt Taylor’s actions are blameworthy even if he wasn’t engaged in a practice that contributes to the maiming of science. But for those who can’t be convinced by that, who “just want to be about the science,” take a good, long, hard look at yourself. If the litany of women scientists who never got credit for their efforts wasn’t bad enough, there are generations of women scientists—Curies, Meitners, Lovelaces, and Bourkes—who never were. We’re all poorer for that.

So next time you want to be “just about the science,” tell Matt Taylor to stick to the black polo.

National Security and Bioethics: A Reading List

Forty-two years ago, in July 1972, the Tuskegee syphilis study was reported in the Washington Star and the New York Times. Yesterday, a Twitter chat hosted by TheDarkSci/National Science and Technology News Service, featuring Professors Ruha Benjamin and Alfiee M. Breland-Noble, discussed bioethics and lingering medical mistrust in the African-American community. It was an excellent event, and you can read back through it here.[1]

Late in the chat, Marissa Evans expressed a desire to know some more about bioethics and bioterror, and I offered to post some links to engaging books on the topic.

The big problem is that there aren’t that many books that deal specifically with bioterrorism and bioethics. There are a lot of amazing books in peace studies, political science, international relations, history, and sociology on bioterrorism. Insofar as these fields intersect with—and themselves do—bioethics, their books are excellent things to read. But a bioethics-specific, bioterror-centric book is a lot rarer.

As such, the readings provided are those that ground the reader in issues that are important to understanding bioterrorism from a bioethical perspective. These include ethical issues involving national security and scientific research, dangerous experiments with vulnerable populations, and the ethics of defending against the threat of bioterror.

The Plutonium Files: America’s Secret Medical Experiments in the Cold War. If you read one book on the way that national security, science, and vulnerable people do not mix, read Eileen Welsome’s account of the so-called “Human Radiation Experiments.” Read about dozens of experiments pursued on African Americans, pregnant women, children with disabilities, and more, in the name of understanding the biological properties of plutonium—the fuel behind atomic bombs. All done behind the great screen of the Atomic Energy Act, because of plutonium’s status as the key to atomic weapons.

Undue Risk: Secret State Experiments on Humans. A book by my current boss, Jonathan D. Moreno, that covers some of the pivotal moments in state experimentation on human beings. The majority of the cases Moreno covers are those pursued in the interests of national security. Particularly in the context of the Cold War, there was a perceived urgent need to marshal basic science in aid of national security. What happened behind the curtain of classification in the name of that security, however, was grim.

Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World–Told from Inside by the Man Who Ran It. Ken Alibek is hardly what you’d call a reliable narrator; then again, I can’t imagine what being part of a crack Soviet bioweaponeer unit would do to a person.[2] Nonetheless, it is a foundational read in the immense bioweapons enterprise that was built from the 1970s till the end of the Cold War.

Innovation, Dual Use, and Security: Managing the Risks of Emerging Biological and Chemical Technologies. The late Jonathan B. Tucker released this edited volume in 2012; while the debate about dual use in the life sciences has progressed since then, it remains one of the most thoughtful treatments of bioterrorism, biological warfare, and the governance of the life sciences out there. It is also accessible in a way that policy documents tend not to be. That’s significant, as the book is a policy document: it started out as a report for the Defense Threat Reduction Agency.

This list could be very long, but if I were to pick out a selection of books that I consider essential to my work, these would be near the top of the list.

As an addendum, an argument emerged on the back of the NSTNS chat about whether science is “good.” That’s a huge topic, but it is really important for anyone interested in Science, Technology, Engineering, and Mathematics and their intersection with politics and power. As I stated yesterday on Twitter, however, understanding whether “science is good” requires understanding what the “science” bit means. That’s not altogether straightforward.

Giving a recommendation on that issue involves stepping into a large and relatively bitter professional battle. Nonetheless, my first recommendation is always Philip Kitcher’s Science, Truth, and Democracy. Kitcher carefully constructs a model of how agents interact with scientific methods and tools, and so identifies how we should make ethical judgements about scientific research. I don’t think he gets everything right, but that’s kind of a given in philosophy.

So, thousands of pages of reading. You’re welcome, Internet. There will be a test on Monday.


  1. I’ll update later with a link to a Storify that I believe is currently being built around the event.  ↩
  2. Well, I can. It is called “Ken Alibek.”  ↩

That Facebook Study: Update

UPDATE 30 June 2014, 8:00pm ET: Since posting this, Cornell has updated their press release to state that the Army did not fund the Facebook study. Moreover, Cornell has released a statement clarifying that their IRB

concluded that [the authors from Cornell were] not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Where this leaves the study, I’m not sure. But clearly something is amiss: we’re still sans ethical oversight, but now with added misinformation.

 ***

So there’s a lot of news flying around at the moment about the study “Experimental evidence of massive-scale emotional contagion through social networks,” also known as That Facebook Study. Questions are being asked about the ethics of the study; while I want to post a bit more on that issue later, a couple of facts for those following along.

Chris Levesque pointed me to a Cornell University press release noting that the study in question received funding from the US Army Research Office. That means the study did receive federal funding; receipt of federal funding comes with a requirement of ethics oversight, and compliance with the Common Rule. It is also worth noting that the US Army Research Office has their own guidelines for research involving human subjects:

Research using human subjects may not begin until the U.S. Army Surgeon General’s Human Subjects Research Review Board (HSRRB) approves the protocol [Article 13, Agency Specific Requirements]

and

Unless otherwise provided for in this grant, the recipient is expressly forbidden to use or subcontract or subgrant for the use of human subjects in any manner whatsoever [Article 30, “General Terms and Conditions for Grant Awards to For-Profit Organizations”]

***

I’ve also been in touch with Susan Fiske, the editor of the study. Apparently, the Institutional Review Board (IRB) that approved the work is Cornell’s IRB. That IRB found the study to be ethical:

on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. The Cornell IRB considered it a pre-existing dataset because [Facebook] continually creates these interventions, as allowed by the user agreement (Personal Communication, Fiske, 2014).*

So, there’s some clarification.

Still, I can’t buy the Cornell IRB’s justification, at least on Fiske’s recounting. Manipulating a user’s timeline with the express purpose of changing the user’s mental state is, to me, a far cry from business as usual. Moreover, I’m really hesitant to call an updating Facebook feed a “pre-existing dataset.” Finally, better people than I have talked about the lack of justification the Facebook user agreement provides.

This information, I hope, clarifies a couple of outstanding issues in the debate so far. Personally, I’d still like to see a lot more information about the kind of oversight this study received, and more details on the Cornell IRB’s analysis.

* Professor Fiske gave her consent to be quoted in this post.

How safe is a safe lab? Fouchier and H5N1, again

A quick update on gain-of-function; I’m between papers (one submitted, one back from a coauthor with edits), and gain-of-function is back in the news.

Ron Fouchier and Yoshihiro Kawaoka have responded to a new paper on gain-of-function, which appeared in PLoS Medicine last week. The paper, by Marc Lipsitch and Alison P. Galvani, takes three popular conceptions of GOF research to task: 1) that it is safe; 2) that it is helpful in the development of vaccines; and 3) that it is the best option we have for bettering our understanding of flu viruses. I’m going to concentrate, here, on Fouchier’s reply to concerns about safety when performing gain-of-function research.

For context, Lipsitch and Galvani argue that

a moderate research program of ten laboratories at US BSL3 standards for a decade would run a nearly 20% risk of resulting in at least one laboratory-acquired infection, which, in turn, may initiate a chain of transmission. The probability that a laboratory-acquired influenza infection would lead to extensive spread has been estimated to be at least 10%.
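It’s worth seeing how a number like that falls out of simple arithmetic. As a back-of-envelope sketch (assuming a risk of roughly two laboratory-acquired infections per 1,000 BSL3 laboratory-years, which is my gloss on the kind of data Lipsitch and Galvani draw on, not a figure quoted from their paper), treat each laboratory-year as an independent trial:

P(at least one infection) = 1 − (1 − p)^(labs × years)
                          = 1 − (1 − 0.002)^(10 × 10)
                          ≈ 1 − 0.82
                          ≈ 0.18

Ten labs over ten years, at that assumed rate, gives you the “nearly 20%” figure.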

A result that Fouchier calls “far fetched.” His reason?

The data cited by Lipsitch and Galvani include only 11 laboratory-acquired infections in the entire USA in 7 years of research, none of which involved viruses, none of which resulted in fatalities, and—most importantly—none of which resulted in secondary cases.

But Fouchier’s objection misses a couple of key points. The data Lipsitch and Galvani use come from a 2012 paper in Applied Biosafety, which looked at the reported number of thefts, losses, or releases (TLRs) of pathogens in American labs from 2004–2010. What it found was that TLRs have been increasing steadily, from 57 in 2007 to 269 in 2010. One reason given by Lipsitch and Galvani is that in 2008, significant investment was made in outreach and education about the need for accurate reporting of TLRs.

The other likely culprit? The proliferation of BSL-3 and BSL-4 laboratories.

Between 2004 and 2010, the number of BSL-3 laboratories in the USA—that we know of—more than tripled, from 415 to 1,495. This is likely an underestimate, because there is no requirement to track the number of BSL-3 labs in existence. The number of BSL-4 labs has also increased, to 24 worldwide; 6 of these are in the USA. When you get an explosion of labs, you are likely to get a corresponding increase in laboratory accidents. That there have been only eleven laboratory-acquired infections reported from hundreds of releases should be a cause for relief; but it doesn’t show that gain-of-function research is safe.

The problem, unacknowledged by Fouchier, is one of scale—as the number of labs increases, the number of experiments with dangerous pathogens (including H5N1) is likely to increase. Laboratory accidents are not a matter of if, but of when. Significantly increasing the number of labs brings that “when” significantly closer. And, in the words of Lipsitch and Galvani,

Such probabilities cannot be ignored when multiplied by the potential devastation of an influenza pandemic, even if the resulting strain were substantially attenuated from the observed virulence of highly pathogenic influenza A/H5N1.
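To put rough numbers on the scale point (again using my assumed rate of about 0.002 infections per laboratory-year, not a figure from either paper): the expected number of laboratory-acquired infections grows linearly with the number of labs.

expected infections per year ≈ rate × number of labs
415 labs: 0.002 × 415 ≈ 0.8 per year
1,495 labs: 0.002 × 1,495 ≈ 3.0 per year

Triple the number of labs, and you triple the expected accidents; the wait for the next one shrinks to roughly a third.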

More problematic is that Fouchier fails to note that laboratory containment failures have killed in the last decade—just not in the USA. In 2004, a researcher at the Chinese National Institute of Virology fell ill with SARS. While ill, she infected a nurse, who infected five others, and her mother, who died.

This represents, contrary to Fouchier’s claim, a laboratory containment failure that involved a virus, did cause secondary (and tertiary) infections, and did kill. The study on which Lipsitch and Galvani base their estimate covered only the USA. A worldwide view is likely to be more concerning, not less. That’s why Lipsitch and Galvani call their estimate “conservative,” and why for years there have been calls for heightened security around pandemic influenza studies.

Fouchier is right that scientific research has never caused a virus pandemic. But it’s a glaring logical error, to say nothing of hubris, to use that claim to dismiss the concern that it could happen.

There are other issues to be had with the responses of Fouchier and Kawaoka, but the overall message is clear: they continue to dismiss concerns that gain-of-function studies are risky, and provide little in the way of tangible benefits in the form of public health outcomes. I’m hoping for a more thoroughgoing reply from Lipsitch, who earlier this morning expressed some interest in doing so. This is an important debate, and needs to be kept open to a suitable range of voices.

On Science and Mindfulness

I’m a practitioner of Buddhism, not a scholar of Buddhism. As someone who writes about responsibility in writing, I want to acknowledge first and foremost that I’m speaking from the position of a single practitioner who comes from a small branch of a particular family in a particular school of Buddhism.

Before writing this, I sat with my left foot atop my right thigh in the “half-lotus” position. I sat for forty minutes, breathing in and out through my nose, my concentration set—as much as my will would allow—on the point three finger-widths below my navel: a point the Chinese refer to as dan tien. I meditated as a practitioner of the Cao Dong family of Ch’an Buddhism; with an eye to cultivating mindfulness and, eventually, to finding my own enlightenment.

I didn’t come to mindfulness through Buddhism, however. In fact, it was the other way around, and via a route at which many other Buddhists would probably raise an eyebrow. I started cultivating mindfulness through martial arts. Among other things, Ch’an is associated with the Shaolin Temple, and can count the martial arts as both a tool of, and an entry point into, the practice of Buddhism. It is common in my tradition that the boxing and mindfulness come first, and the Buddhism comes later.

So when Jan Henderson kindly passed on an editorial decrying science’s Not So Sudden Interest in mindfulness, I was intrigued. Richard Horton, editor-in-chief of The Lancet, responded to Kate Pickert’s “The Mindfulness Revolution” (also here), saying:

“It may be a mistake to dissect [Buddhism], discarding Buddhism’s broader ideas on wisdom and morality (even its ultimate purpose), and putting only a single part to use—even if it is a very good use.”

That is, the surge of interest in the cultivation of mindfulness by Silicon Valley, the NIH, and even the Defense Advanced Research Projects Agency, is somehow undermining the greater purpose of mindfulness as an element of Buddhism, and that this is a Bad Thing.

As a Buddhist, I have to respectfully disagree.

Horton’s article does raise serious questions about the current interest in mindfulness, but whether or not mindfulness should remain a part of Buddhism isn’t really one of them. For one, even if the term “mindfulness” is by and large associated with Buddhism, the things we refer to when talking about mindfulness are not exclusively Buddhist. Cultivating awareness of your surroundings, your own body, and the people around you can be found in a great number of the world’s religions, and in a range of comprehensive practices beyond the religious.

Moreover, as a Buddhist I’m committed to the idea that human beings shouldn’t suffer as much as they do. If mindfulness offers a way to alleviate some of that suffering then that is a great, great thing. I’d gladly see the same well-being I’ve experienced through my practice improve the lives of others. Mindfulness is a part of Buddhism, but it is also valuable in its own right.[1]

It is imperative, however, to subject practical methods to empirical scrutiny. In my tradition, it isn’t enough to just do—or not do—because someone else is doing or not doing it. You must subject practice, rigorously and straightforwardly, to your own experience.

People can critically evaluate a practice on their own, but science also provides us a powerful tool in doing this. Mindfulness is vulnerable to being co-opted by a range of more or less spurious characters and self-styled gurus. If science can help confirm what works and doesn’t work, that’s a good thing. At my most optimistic, nothing would make me happier than the demonstration, in a systematic fashion, that mindfulness training is effective, safe, and cost-effective, and thus worthy of consideration by, say, the Medical Services Advisory Committee as a legitimate practice worthy of subsidization by the Australian Government.

Of course, science’s involvement with mindfulness could be extremely problematic. It would be unfitting for scientists to claim that they have “discovered” mindfulness. As Ben Lillie has noted in the case of science communication, science doesn’t do anyone any favors when it “discovers” millennia-old practices that clearly work. Science can’t discover mindfulness—that’s been done. It can, however, help us explain certain aspects of mindfulness in new ways, and give us a particular perspective from which to examine and critique mindfulness.

There is something important in Horton’s editorial that I’m sympathetic to as a Buddhist, though I don’t think you need to be a Buddhist to share the concern. Pickert’s article is full of stories about people using mindfulness, but these stories are often about the rich and privileged. In these stories, mindfulness is often a tool they use so they can go back and do even more work. On my reading, the stories of “X uses mindfulness to be a better innovator/entrepreneur/thought leader/whatever” lie in tension with the latter section of her article, on the possibility that mindfulness could just be a good thing, even if it doesn’t help you make that next killer app.

Make no mistake, I think that mindfulness in one’s professional life is a good thing; I’m a firm believer that moralizing another’s mindfulness is deeply problematic. Nonetheless, I do worry that this new surge in mindfulness will remain primarily a tool of the privileged, even as I say that I find it exciting that it is being used at all. As a Buddhist, and also someone who believes in robust and equitable public health including mental health, I think we can do much better than that.

It is from there that I think our critique should begin. Mindfulness without Buddhism—as far as I’m concerned, mindfulness within Buddhism—is a tool for navigating the world. And like most tools, there are better or worse uses. We should be aiming, then, to make sure that mindfulness training and practice, scientifically informed and pursued, is distributed fairly, and approached carefully.

We should be mindful about how we practice, conceive of, and wield mindfulness.


  1. Though I’d say he needs to be a little clearer on that connection. The role, place, and meaning of mindfulness isn’t the same for every school in Buddhism, and one of the issues I had with Horton’s account is that it gave a remarkably monolithic view of Buddhism. In my experience, nothing is further from the truth. I won’t pursue that further here, however.  ↩

The corrupt, and the corrupted

Corruption is something that’s hard to describe, but we usually “know it when we see it.” Justice Stewart’s words were originally about pornography, but corruption suffers from an indeterminacy of its own. Even when we agree that corruption is happening, it is sometimes difficult to know what it is that is corrupt, and what is doing the corrupting. This last bit—what does the work of corruption—is often a matter of debate.

That’s the subject of Lawrence Lessig’s Daily Beast article today, which makes use of an exchange between Senators John McCain (R-AZ) and Mitch McConnell (R-KY). McCain, in 1999, claimed that the role of campaign contributions of the size experienced in the USA “corrupted our political ideals;” an idea that McConnell objected to by demanding to know who, exactly, was corrupt. Lessig argues in his article that even in the absence of a corrupt person, a system may still be corrupt. On his reading, McCain’s claim

…is not about bad people doing bad things. The complaint is against a bad system, which drives good people to behave in ways that defeat the objectives of the system as a whole.

It is an interesting notion, and an important one—that systems can be corrupt, even when people aren’t. Institutional arrangements, when corrupt, can drive people in directions that are unfavorable.

I think, however, that Lessig is too quick in what he attributes to McCain. Though he later backed away, denying “that any individual or person is guilty of corruption in a specific way,” McCain claims that the campaign contribution system at present is a corrupting influence—one that corrupts everyone:

In truth, we are all shortchanged by soft money, liberal and conservative alike. All of our ideals are sacrificed. We are all corrupted. I know that is a harsh judgment. But it is, I am sorry to say, a fair one. And even if our own consciences were to allow us to hide from it, the people we are privileged to serve will not.

The importance of this comment can’t be overstated. McCain isn’t necessarily saying that anyone is corrupt. He is, however, saying that he and others are corrupted, and that they’ve been compromised in some way.[1] I’ve talked about being compromised elsewhere, but here I want to pull apart this notion of being corrupted—compared to being corrupt—a bit more.

The whole point Lessig wants to make is that institutional arrangements—for example, the effect of campaign contributions on the political process—change behavior. But in talking about the ways that corrupt institutions pervert individual actions, it is still important to talk in the language of individuals. The effect that McConnell wanted to (wrongly) identify as unidirectional—corrupt people cause corrupt institutions—flows in the other direction as well. Corrupt institutions do bear on individuals, as Lessig claims, but in doing so it can leave them corrupted, or even corrupt.

The difference between being corrupted and being corrupt is—if anything—very fine indeed. It seems plausible, however, to say that someone has been corrupted, or has had their actions corrupted, even if we don’t want to go so far as to say they are corrupt. That is, they’ve been compromised, but under duress, and with few other options available.

This, of course, is a fine line to tread. Ends don’t justify means, but if your means are so limited that a problematic means is the only way to a desirable end, then we make the best of what we have. Generally, we probably want to elect representatives who intend to bring about good outcomes and promote reforms. These representatives may be compromised in doing that—no one is perfect—but it seems that sometimes the price of the right person not stepping into that arena is too high.

So maybe, in principle, we can have a corrupt system, with corrupted, compromised members, but no one genuinely corrupt. Of course, that seems somewhat optimistic: if there are people in congress actively opposing reform designed to redress a corrupt and corrupting institution, then they are most likely corrupt. Moreover, those who exploit a particular corrupt institutional arrangement for their own gain are certainly corrupt.

I don’t mean corrupt in the sense of unlawful activity, and I certainly believe that this is what McCain was trying to guard against when he denied that anyone was guilty of corruption. Representatives, Senators, and presidential nominees can accept and use campaign contributions, and do so in a lawful manner. Lobbyists aren’t breaking laws. But they are, at times, doing the wrong thing.

We should also keep in mind that those who become compromised are still doing something wrong, even if there is little or no alternative. And insofar as they are overly tardy in helping to rectify the system that corrupts them, they should be held to account. Hopefully, enough of these corrupted people will help in the reforms Lessig is championing, such as the American Anti-Corruption Act (ACA).

What legislation like the ACA hopes to accomplish is to make the unethical unlawful. That’s a great thing. But it is also important to call out the individually corrupt, and to recognize the corrupted, when we see them. Institutional reform is vital, but regulation and law rarely make corruption go away by themselves—corruption often occurs within the scope of lawful activity, and the ingenuity of motivated people out to pervert something good can be nothing short of breathtaking. Preventing corruption does require regulation and legislation, but it also requires vigilance and loud voices. We should make sure that we pay attention to the individual, as well as the institutional; to the corrupted, as well as the corrupt.

[1] McCain uses the term “corrupt” in three different ways in that speech. In his first usage, he refers to the Government as corrupt. In his second and fourth, he refers to the presence of campaign contributions being a corrupting factor. In his third—used in the quote above—he refers to the representatives as corrupted.