« Sponsored Announcement: UPenn Law Invites Applications for Sharswood Fellowships | Main | Religious Pluralism and "The Office" »

Friday, October 24, 2008

Culture and Inscrutable Science: An Analytical Method for Preliminary Injunctions in Extreme Cases

Is it plausible that scientists and engineers could overlook fatal flaws in a multi-billion-dollar project? Top: Foam from the external tank strikes the leading edge of Columbia's left wing. Bottom: The now-iconic image of Columbia breaking up over Texas.

It’s one of the most interesting and daunting judicial controversies to come around in a long time: A few very worried individuals claim that a brand new, largest-of-its-kind particle accelerator under Switzerland and France, CERN’s Large Hadron Collider, could create a black hole that is capable of reducing the Earth and everything on it to an infinitesimal lightless speck.

So here’s the question of the moment: Is it plausible that a group of extremely smart, highly trained, non-sociopathic scientists and engineers could overlook fatal flaws in a multi-billion-dollar project and thus cause a catastrophe?

Part 3 of Black Holes & the Law

It is plausible. In fact, it has happened multiple times. A recent example is the space shuttle Columbia disaster.

Previously I blogged about the analytical problems of considering a preliminary injunction against CERN. In this post, I’m going to attempt to provide an analytical solution. Here’s a recap of the dilemma: The science involved in this case is so complex, it would take years of physics training for a judge to make an independent evaluation of the arguments on either side. The old fallback, expert testimony, is problematic here, since all the experts are interested parties, and since our legal tool for sifting out unreliable expert opinion, the Daubert framework, collapses into analytical nonsense when faced with extreme facts such as these.

If the judiciary surrenders to these difficulties and refuses to involve itself in the dispute, the judiciary is then rendering consensus judgments within scientific communities effectively injudicable – even where those judgments are disputed, and even where the alleged harm is destruction of the Earth. That seems unacceptable. Yet if the judiciary plows ahead and issues an injunction in such cases – despite not having a principled way of evaluating the merits of the plaintiffs’ arguments – the courts are then transformed into a marionette – manipulable by frivolous objectors into halting any scientific undertaking that is sufficiently complicated so as to be opaque to the layperson. That seems unacceptable as well. Either way, we lose the benefits of fair judicial review.

Is there any way out?

I believe there is. And the Columbia accident points the way. The Columbia Accident Investigation Board concluded that several aspects of the culture of NASA’s human spaceflight program led to the disaster, including, among other things, political considerations and “stifled professional differences of opinion.”

While courts are not well equipped to evaluate theoretical science, they certainly are adequate to the task of investigating social dynamics, psychological factors, political influences, and organizational cultures. In evaluating a preliminary injunction request regarding the Large Hadron Collider, a court should scrutinize the culture of CERN and the particle-physics community, as well as the political, social, and psychological context in which their decisions are made. Having done so, the court should then determine, with reference to those gathered facts, whether “serious questions” exist, and, thus, whether the case for a preliminary injunction has been made.

An honest appraisal of the situation reveals that there are many apparently plausible reasons why the culture at CERN and within the particle-physics community could lead to flawed risk analysis. I will list several:

To begin with, it seems highly plausible that particle physicists might fear serious reprisals and negative repercussions for their careers if they were to speak out about perceived dangers of the LHC. Denial of tenure, rejected manuscripts, and ostracism by peers are among the penalties an academic in such a situation might plausibly face. Such an apprehension would appear to be all the more acute because the LHC is the crown jewel of particle-physics experimentation. It dwarfs all predecessors in size and power, and represents a leap forward that could radically advance fundamental theory, possibly answering some of the most basic questions about our universe. To say that the LHC is important to the particle-physics community seems to be an understatement.

Further, in mulling over whether to speak out, particle physicists with private doubts might well resign themselves to a fatalistic assessment. They might plausibly figure that they, as individuals, are powerless to overcome the momentum of a multinational multi-billion-dollar project. If that is their appraisal, then such individuals have nothing to gain, but much to lose, by making a public objection. Consider the possible outcomes: If a scientist speaks out and nothing bad happens, the scientist is a laughingstock. If a scientist speaks out and disaster does come to pass, professional vindication will be fleeting and bittersweet. And if a scientist keeps mum or even extols the safety of the project, then in a disaster scenario any embarrassment will be short-lived, since no one will survive to dwell on it.

But let's suppose particle physicists with private doubts reach the opposite conclusion about the likely impact of their public dissent. Suppose a private doubter predicts that his or her voice could be the tipping point that leads to widespread public concern and a permanent shutdown of the LHC. In such a case, whether the objecting scientist is right or wrong, he or she can anticipate being blamed for ruining the most exciting opportunity for advancing scientific understanding in this generation. And there’s no hope of vindication in such an event – naysayers cannot be proved right if the experiments are never run.

The math-oriented are often fond of using matrices to elucidate decision-making. A physicist creating such a matrix, using the logic detailed above, would be faced with a series of boxes in which all outcomes are quite bad, except one: to be a supporter of the LHC in the event that it turns out to be a benign scientific triumph.
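The matrix described above can be sketched in a few lines of code. The choices, world states, and payoff labels below are my own illustrative assumptions, following the logic of the preceding paragraphs, not anything stated by the physicists themselves:

```python
# Hypothetical decision matrix for a physicist with private doubts about the LHC.
# Keys: (the physicist's choice, how events unfold).
# Values: rough qualitative payoffs (illustrative labels only).

outcomes = {
    ("stay silent", "benign triumph"): "good - shares in the triumph",
    ("stay silent", "disaster"):       "bad  - but embarrassment is short-lived",
    ("speak out",   "benign triumph"): "bad  - laughingstock, career damage",
    ("speak out",   "disaster"):       "bad  - vindication fleeting and bittersweet",
    ("speak out",   "LHC shut down"):  "bad  - blamed for ruining the opportunity, no vindication possible",
}

# Print the matrix: every cell is bad except silent support of a benign LHC.
for (choice, world), payoff in outcomes.items():
    print(f"{choice:11s} | {world:14s} | {payoff}")
```

As the loop's output shows, only one cell carries a good payoff, which is the point of the paragraph above: the incentives all push toward silent support.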

Additional pressure on scientists not to question the LHC may also come from the fact that the LHC appears increasingly to be the only game in town for particle physicists wanting to work at the leading edge of discovery. In fact, the world’s largest particle collider currently in operation, Fermilab’s Tevatron outside of Chicago, Illinois, is slated for shutdown in 2010, apparently in large part because the LHC will render it obsolete. Other particle accelerators planned for the future have had their funding suspended or cut off.1

A psychological or sociological explanation for how particle physicists could reach a consensus on safety, despite the existence of real danger, is the phenomenon William H. Whyte, Jr. called “groupthink.” This process allows individuals to maintain a worry-free outlook that is not justified by the facts. In such a dynamic, the existence of group consensus causes individuals to forego or dismiss their own independent thinking. A circularity develops: Group consensus justifies individual confidence, and individual confidence justifies group consensus. The result is flawed decision-making. Groupthink has been offered as an explanation for both the Challenger and Columbia space-shuttle disasters.

Another set of concerns arises from the question of how political realities might have affected the decision-making environment at CERN. CERN is a consortium run by 20 member states, and it is plausible that politics plays a significant role in its milieu.

Still another point of worry is the independence, or lack thereof, of the safety reviews that have been advanced as evidence that the LHC is safe. While an independent report was completed in 2003, more current documents said to confirm the safety of the LHC, which were issued in response to recent criticism, are the product of CERN itself, and are not independent.

Other factors are worthy of investigation as well. It may be, for instance, that the timeline of infrastructure construction and critical theorizing is such that LHC interests were thoroughly vested by the time potentially convincing theoretical work on safety concerns surfaced. That is, the late hour at which objections were made could well have prevented their open-minded consideration, regardless of merit. Some elements of the broad timeline of the LHC endeavor suggest this: The LHC was approved in 1994, and construction began in 1998. Construction was nearing completion in September 2007 when Otto Rössler released a paper explaining his new mathematical work, which, according to Rössler, demonstrates the LHC’s grave danger. Rainer Plaga’s article making a negative assessment of the risk at the LHC was published in August 2008, a month before operational testing began. At the point these papers were advanced, it is plausible that the LHC project had already reached the point where halting it was politically unthinkable.

Supporters of the LHC have argued that Dr. Plaga and Dr. Rössler are not career-dedicated particle physicists, and, therefore, their theoretical work should not be taken seriously. As discussed above, it seems plausible that the cultural environment in which particle physicists operate is such that public objection to the LHC is discouraged and stifled to the point where it is non-existent. Given such a state, we would expect public objection to come from outside the particle-physics community. Thus, rather than being a reason for discounting such theoretical work, the outsider nature of such work might be a reason to embrace it.

Even putting aside the social and cultural pressure on particle physicists to conform, it is a much-discussed phenomenon, famously advanced by Thomas S. Kuhn, that paradigm-shifting revolutions in scientific thought often come from individuals who are new to a field of study, and thus not entrenched in its conventional modes of thinking. (Jim Chen wrote about the virtues of juniority in the legal academy on MoneyLaw.) Thus we might expect that career particle physicists would be slow to accept paradigm-shifting theoretical work that undermines confidence in the safety of the LHC. As a corollary, the lack of particle-physics bona fides among LHC critics, especially ones who are serious and respected scientists, should not be relied upon as a way to dismiss their concerns.

There may be several other sociological, psychological, political, and cultural factors, in addition to those I’ve listed above, that would be relevant. The matter requires some deeper thought. Nonetheless, I believe this list of considerations shows that questions about the reliability of LHC safety assessments are not specious.

Let me be clear: I am not accusing CERN or the particle-physics community of incompetence or malfeasance. The above points are not set forth as factual contentions demonstrating the case for a preliminary injunction. Rather, I posit them as realistic possibilities that raise non-trivial questions, the answers to which could seriously undermine the consensus view that the LHC is safe.

I should also emphasize that I am not arguing in favor of a preliminary injunction against the LHC. Whether one should be granted is, to me, an open question. What I am arguing is that there is an analytical way for a court to reach a well-reasoned decision in cases such as this, even where the merits of the scientific controversy itself are opaque to judges lacking specialized scientific training, and where expert testimony is of dubious use in adjudicating the matter. In considering a preliminary injunction, the court should investigate the cultural, organizational, political, psychological, and sociological context in which safety determinations were made, and then ask whether the results of that inquiry raise serious questions on the merits. If serious questions are raised, and if the balance of hardships tips strongly in the plaintiffs’ favor (as it clearly does with a black hole destroying the Earth), then an injunction should issue.


1. Mark Alpert, Future of Top U.S. Particle Physics Lab in Jeopardy: Congress's Budget Cut Decelerates U.S. High-Energy Physics Research, Scientific American, January 22, 2008.

Posted by Eric E. Johnson on October 24, 2008 at 07:30 AM in Judicial Process | Permalink




While courts are not well equipped to evaluate theoretical science, they certainly are adequate to the task of investigating social dynamics, psychological factors, political influences, and organizational cultures.

It's funny, because I thought you were going to follow this statement up with an extensive list of reasons why the process for deciding whether the LHC is safe is a more reliable process than the one that decided the Columbia was safe to fly. The relevant groups are larger, less united by a single center of authority, within a community that offers many members near-total speech-related job protection, and so on and so forth. The high-energy theoretical physics community is currently in the midst of an enormous internal war between the string theorists and the anti-string-theorists. And so on, and on, and on. You could make your career offering a convincing argument for why the LHC is unsafe and backing it up.

In any event, it's going to be socially and rhetorically unacceptable for judges to make decisions that forthrightly admit they don't understand the science involved. Even if they actually make their decisions based on social factors, as you advocate, any opinion that admits the judge has simply given up on the science involved is going to be profoundly unpersuasive. I'd rather stick with the traditional rule. If a court doesn't have the expertise needed to pass judgment in a matter, it's up to the parties to educate it--and for the judges to educate themselves. I'm a strong believer in the proposition that judges can learn enough to deal intelligently with the scientific and technological questions that come their way, and we oughtn't let them shirk that responsibility. (Among other things, I think that means that scientific training ought to be a bigger plus in judicial selection than it currently is.)

(Also, I'm curious, why are you using the two-part alternative standard for a preliminary injunction, rather than the traditional four-part standard?)

Posted by: James Grimmelmann | Oct 24, 2008 8:20:11 AM

While I only skimmed your previous posts on this subject, I read this one with care and have come to the conclusion that you raise provocative, important, and timely questions. However, I doubt that courts are in any way up to the task, for example, of "investigat[ing] the cultural, organizational, political, psychological, and sociological context in which safety determinations were made." Nonetheless, I can imagine the possibility that things might change on that score, however unlikely in the short term owing to various structural constraints (e.g., time, information-gathering) as well as the fact that the aforementioned context is in no way easily ascertained by even a sustained examination of the literature (some of the relevant literature is itself highly contested or in its infancy, etc.). Setting that aside, I think there are some instances where we can get a sense of what is involved in challenging expert consensus in a field from the "outside," as it were, from a work like K.S. Shrader-Frechette's Risk and Rationality: Philosophical Foundations for Populist Reforms (1991). Shrader-Frechette points out, for instance, "that there are numerous difficulties of hazard estimation that do not admit of analytic, probabilistic resolution by experts:"

"Often the risk problem is not well enough understood to allow accurate predictions, as the use of techniques like fault-tree analysis shows. Hence, experts are forced to rely on subjective or perceived risk probabilities, instead of on actual empirical accident frequencies established over a long period of trial. Even if assessors based their notion of *probability* on actual empirical accident *frequency,* this move would not always deliver their estimates of risk from the charge of being 'perceived.' Since there are reliable frequencies only for events that have had a long recorded history, use of historical accident/hazard data for new technologies likely results in underestimating the danger, because certain events may not have occurred between the inception of a new technology and the end of the period for which the risk information is compiled. Moreover, low accident frequency does not prove low accident probability. Only when the period of observing accident frequency approaches infinity would the two, frequency and probability, converge."

Shrader-Frechette reminds us of other reasons why "one cannot distinguish actual from perceived risk, in any wholly accurate way...." And, intriguingly, we learn that "some of the most important aspects of hazards, whether real or perceived, are *not amenable to quantification.* (What experts call) 'actual' risk estimates are based on the presupposition that risk is measured by probability and consequences, and that both can be quantified. Yet most laypeople would probably claim that what makes a thing most hazardous are factors that are not susceptible to quantification, factors such as a risk's being imposed without consent [think here of the reasons that account for the genesis of the field of bioethics], of being unknown [as in the case of nuclear waste disposal], or posing a threat to civil liberties [e.g., surveillance technologies]."

"Admittedly," Shrader-Frechette writes, "laypersons typically overestimate the severity of many technological hazards. However, even if it could be established that the public exaggerates the accident probabilities associated with some technology, such as liquefied natural gas (LNG), this fact alone would be a necessary, but not a sufficient, condition for establishing the thesis that laypersons erroneously overestimate risks. Their risk perceptions could be said to be erroneously high only if they were *based solely on* incorrect accident probabilities. These perceptions might also be based, for instance, on the potentially catastrophic consequences of an accident, or on the fact that such consequences are involuntarily imposed." Furthermore, "In their classic studies of the heuristic judgmental strategies that often lead to error in probability estimates, Kahneman, Tversky, and Oskamp concluded that experts, whenever they have merely statistical data, are just as prone as laypeople to judgmental error regarding probabilities."

She concludes that we should reject the sharp distinction relied on by experts between risk and risk perception as "both politically dangerous and epistemologically confused."
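Shrader-Frechette's point quoted above, that low observed accident frequency does not prove low accident probability, can be made concrete with a short simulation. The per-trial risk and the length of the observation window below are arbitrary numbers chosen purely for illustration:

```python
import random

random.seed(0)

# Hypothetical true per-trial probability of a rare accident (illustrative only).
p_true = 0.001

# A "short" observed history: 500 trials of the new technology.
trials = 500
accidents = sum(1 for _ in range(trials) if random.random() < p_true)
observed_frequency = accidents / trials

# Analytically, the chance of a completely spotless record despite the real
# risk is (1 - p_true) ** trials, which here is about 0.61: better than even
# odds that the historical data show zero accidents.
p_spotless = (1 - p_true) ** trials

print(f"observed accident frequency over {trials} trials: {observed_frequency}")
print(f"probability of a spotless record despite real risk: {p_spotless:.2f}")
```

The spotless-record probability only falls toward zero as the number of trials grows large, which is her point that frequency and probability converge only as the observation period approaches infinity.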

I won't attempt to summarize her case for a "populist account of risk and rationality," so suffice it to say that I think it provides one part of a strategy designed to assist judges "even where the merits of the scientific controversy itself are opaque to judges lacking specialized scientific training, and where expert testimony is of dubious use in adjudicating the matter."

History of science, science and technology studies, as well as philosophy of science are all germane to examination of the strengths and limits of (natural or social) scientific consensus on a given subject: examples here might be David J. Buller's critique of the field of evolutionary psychology (which, however devastating and following Fodor, perhaps does not go far enough) in Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature (2005), or R.C. Lewontin's critique of the field of genetics research in Biology as Ideology (1991), or critiques of neo-classical economics by the likes of Philip Mirowski or S.M. Amadae (or even the critique of the 'rhetoric' of economics by Deirdre McCloskey). All of this is especially poignant if not urgent in the age of "post-academic science" (John Ziman) in which economic and corporate imperatives often determine research agendas and trajectories and thus, in the end, the praxis of science (cf. the role of the pharmaceutical industry in contemporary medicine as examined by David Healy, Marcia Angell, and Howard Brody, among others).

Posted by: Patrick S. O'Donnell | Oct 24, 2008 9:32:09 AM

Interesting post. I won't tackle the merits, but wanted to point out three analogues:
1. Gregg Easterbrook frequently writes in his TMQ column that coaches make cowardly decisions (punting, kicking the field goal, etc.) because of culture - trying it differently and failing has adverse consequences while sticking with the norm does not, despite objective detriment to the team for not "going for it."

2. Taleb addresses this phenomenon in The Black Swan - the person who worries about the highly improbable, though devastating, event is usually ostracized as crazy.

3. At least one television show depicts these issues regularly: Eli Stone. Eli is a lawyer who has visions that turn out to be true. Last season he predicted an earthquake, and tried to force (by court order) the city to close the Golden Gate Bridge. He enlisted as his expert an earthquake specialist who had unsuccessfully predicted other earthquakes and thus was disregarded here (of course, it did happen). This happens almost weekly on the show, and the judges struggle with whether to go on virtually no evidence. Interestingly (and a reason why I like the show as unusual), Eli almost never wins in court - the judges want evidence and he has none. He usually winds up appealing to non-judicial powers that be (mayors, etc.) who are more willing to take a chance on faith. It seems that the black hole lawsuit could be written right into that show.

Posted by: Michael Risch | Oct 24, 2008 10:21:52 AM

Patrick seems correct that courts are likely not up to the tasks you propose. Most importantly, the same flawed risk-analysis applies to the judicial decision-maker. A judge asked to issue a preliminary injunction faces the same decision matrix as your hypothetical physicist, only exaggerated by the lack of relevant expertise.

The judge who blocks the LHC based on the claim of a few people holding a minority view that there will be a very small probability of the end of the world is certain to be sharply criticized and will likely gain a permanent reputation as an obstacle to scientific progress. The judge who allows the LHC will be largely unnoticed and otherwise deemed reasonable, or alternatively, will cease to exist along with all of her would-be critics.

Posted by: JP | Oct 27, 2008 2:44:13 PM

Whoa. There is no comparison between engineers making a bad judgment on something that could obviously be a problem (debris striking the Shuttle) and particle physicists who know their subject definitively refuting some non-particle physicists (Plaga and Rössler) who advance *obviously incorrect calculations* in support of their (mutually exclusive, mind you) doomsday scenarios. Yes, experts can make mistakes in hasty estimations of what might happen to complex systems under certain physical conditions, when their information is incomplete. But physics events that violate basic, thoroughly established physical principles are not going to occur.

I recommend my blog post Large Hadron Collider: What's the Risk? for more on the very eccentric Plaga and Rössler.

Posted by: onscrn | Oct 30, 2008 3:55:01 PM

Hello everybody,
I have read the article of Dr. Johnson with great interest, and also all the comments. First, I would like to respond individually to some of the comments; then I will put forward the general issue: I think that Johnson's solution is not complete, perhaps even flawed in the same way as the arguments of the pro-LHC scientists.

To Grimmelmann: you write "it's going to be socially and rhetorically unacceptable for judges to make decisions that forthrightly admit they don't understand the science involved." You do not understand that this is the problem Johnson tried to solve. Delegating this question to experts does not work anymore. Thus, Johnson applied a figure known as the "Differential" (according to G. Deleuze): he tried to find a common point of departure, a position where things can be formulated in a common language. Unless you achieve that, you cannot proceed in discussing with each other.

To onscrn: Plaga has not been refuted by Giddings & Co so far.

To JP: correctness, mistaken beliefs, and superstition are not a question of majority, and not a question of the domain, as you say. Obviously you do not understand that your thinking of a possible accusation of being an "obstacle to scientific progress" is heavily bound by mistaken values. You implicitly claim that "not knowing everything" imposes a normative imperative to act so as to fill the proposed gap in knowledge. While generally better knowledge can be used to improve the general situation, there are plenty of situations demonstrating that the opposite may be created first. The only normative imperative which is acceptable is to conceive of oneself as part of an ongoing historical narration, which everybody has to try to extend. You can derive everything else from that; even Kant's, Hume's, Berkeley's, and Rorty's positions are only special cases of it.

Well, after those dedicated remarks, now the general, more important one.
The point to make is a very simple one (you may read the extended version on my small blog at
http://lhc.blogsite.org/index.php?option=com_content&view=article&id=47&Itemid=74 ).

Any judge dealing with this case would not have to deal with any physical argument at all. Given a particular scientific domain (which in the case of the LHC is physics), we can observe (as a judge) a particular body of theories. Usually these theories are incomplete, inconsistent, partially true (to use a term of da Costa). This is also the case with the LHC. There you find even several (around 4 to 6) theoretical frameworks, which contradict each other at least partially. Usually, we have to perform an experiment. It is now important to understand that this is not limited to science. We all have to perform experiments all day long, but usually we use decorums (Sloterdijk), traditions, beliefs, and so on, to avoid experimenting and rather to draw on the collective experience. What now, if such an experiment could potentially (nobody knows if it will) harm another person? Or 100? Or 100,000 (Geneva), or 100 million (a third of Europe)? Even if only 1 (ONE) person could be harmed, we usually decide that such an everyday experiment needs the written consent of those being acted upon in that experiment. Exactly this is the situation in clinical trials, especially Phase III trials. Everyday experiments are rarely completely out of theoretical scope. We apply analogies, similar experiences, etc. But Hume warned us more than 270 years ago that we cannot extrapolate into the future. Ethically speaking, we have to take care. Legally speaking, we are responsible, usually, except for CERN, due to its unbelievable status of being outside any public control.

The case with the experiment at CERN is such, however, and that is the second core of my argument, that the logical structure is circular. All together they fall into a petitio principii. Any of the safety calculations based on the theories currently available is wrong to a completely unknown extent. Remember: it is due to this very ignorance that they are about to perform this experiment. The success of any of the theories can NOT be extrapolated, because the experiment crosses a qualitative border, a symmetry break: they expect to create states of matter seen before at another accelerator, the RHIC, the so-called quark-gluon plasmas, but at a much higher energy. In a state of total ignorance about what will happen, the probability that the outcome will be malicious is 0.5.

Back to the situation of experimentation. Like any experiment, the experiment at the LHC is intentional. There is a risk based on the state of ignorance, which the physicists all readily concede. They argue that this ignorance will not be dangerous, but this is circular, as we have seen, and for this conclusion we do NOT need a deeper understanding of physics. Other people could be seriously harmed, if not killed. So any judge only has to decide how to deal with ignorance in a globalized experiment, which suffers, like any other experiment, from the Humean problem of induction. The judge need not be a physicist, but a philosopher. And that's not so rare, I suppose.
After all, what is to be done with this messy situation? I would recommend talking to each other. Exactly this is not possible, because CERN is denying any attempt at a permanent security conference. They can do so because they are outside public control, and this is a political problem.
There are a few very reasonable and practical steps to turn the potentially disastrous situation into a beneficial one:
1. a moratorium for two years
2. setting up a permanent security conference hosted by the UN
3. setting up political control and transparent financials
4. interdisciplinary academic discourse about the re-internalization of risks into any action
5. seeking ways to gain the knowledge by different paths (astrophysical research, like the Planck satellite by ESA)

Generally, the situation is much like that in human genetics, the only problem being that physicists are jealously pressing forward and time is running out. What is indeed needed is a brave judge.

Nico Martirelli

Posted by: nico martirelli, Switzerland | Nov 10, 2008 11:00:34 AM
