
Tuesday, April 07, 2009

The Technical Revolution in Empirical Analysis

I want to start my discussion of the dangers empirical work in law (and the social sciences more generally) faces by looking at how we got to where we are today, and what it means for the future.

Empirical analysis as we think of it today is a remarkably young field. The regression line did not exist until Galton's pioneering work in the late 1800s. And it was not until the 1920s that R.A. Fisher formalized the randomized experiment. Until the late 1800s, the sciences were dominated by fields that could rely on direct observation (or direct observation aided by physical instruments such as telescopes and microscopes). Experimentation and statistical analysis are new ideas in the history of science.

But even this overstates the age of rigorous applied statistics. Statistics at any large scale simply cannot be done by hand. A regression involving more than about seven or eight explanatory variables, possibly even as few as five or six, is simply impossible to do by hand. And by impossible I mean that it would take longer than a human lifespan to make the necessary calculations (to invert the matrices). And while primitive calculating machines existed as early as the turn of the (last) century, they were not powerful enough to permit large-scale analysis.
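To make the matrix-inversion point concrete, here is a minimal sketch (my own illustration, not from the post) of what any regression package does under the hood: solving the normal equations of ordinary least squares, a linear system whose size grows with the number of explanatory variables.

```python
import numpy as np

# Ordinary least squares via the normal equations: beta = (X'X)^(-1) X'y.
# Solving this (k+1)x(k+1) linear system is the step that is hopeless by
# hand once k grows, but takes a computer a fraction of a second.

rng = np.random.default_rng(0)
n, k = 1000, 8                                    # 1,000 observations, 8 regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
true_beta = np.arange(k + 1, dtype=float)         # intercept 0, slopes 1..8
y = X @ true_beta + rng.normal(scale=0.1, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # the "matrix inversion" step
print(np.round(beta_hat, 1))                      # recovers 0, 1, 2, ..., 8
```

An eight-variable regression like this one is exactly the sort of problem the post describes as beyond a human lifetime of hand calculation.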

So it takes a computer, which moves us quickly to the 1940s, with the creation of the Colossi in the UK and, more important, ENIAC in the US (more important since ENIAC was more general purpose--the Colossi were dedicated code-breakers). The mainframe and the punchcard came shortly thereafter.

At this point, we start to see something along the lines of exponential growth: the mainframe gives way to the desktop (starting in the 1970s), and the desktop of the 1970s evolves into the desktop or laptop of today. How significant have these changes been? A few facts:

1. Processing power. The Apple IIe I had in grade school was a warhorse of a machine, with a 1 MHz processor. My laptop today is good, but not at the highest end of the personal computing scale, and it has a 1.86 GHz processor, nearly a 2000-fold increase in speed. Moore's Law predicts that transistor counts (and, roughly, processing power) double every two years, and it still seems to hold. We see the same improvement in RAM: my IIe was a beast with 256 kB of RAM, and my laptop is only respectable with 2 GB of RAM, roughly an 8000-fold increase.

2. Storage. Mainframes required punchcards, and a given punchcard could hold 80 bytes of data. To put that in perspective: the 5-1/4" floppy disks that everyone used in the 1980s and early 1990s could hold about 1.2 MB, a 15,000-fold improvement. Today I have in my pocket a 2 GB flash drive smaller than my thumb. Given that each punchcard was about 0.007 inches thick, it would take a pile of punchcards almost three miles high to hold that much information. Or, put differently, my flash drive holds 25 million times more data than a punchcard.
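The punchcard arithmetic is easy to verify; a quick back-of-the-envelope check (using decimal gigabytes, i.e. 2 GB = 2 billion bytes):

```python
# Back-of-the-envelope check of the punchcard comparison above.
card_bytes = 80                     # data capacity of one 80-column punchcard
card_thickness_in = 0.007           # thickness of one card, in inches
flash_bytes = 2 * 10**9             # a 2 GB flash drive (decimal gigabytes)

cards = flash_bytes // card_bytes                   # 25,000,000 cards
pile_miles = cards * card_thickness_in / (12 * 5280)
print(cards, round(pile_miles, 2))                  # 25000000 2.76
```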

Not surprisingly, we have more and more data at our (ready) disposal. With the rise of the massive dataset--observations running to the millions, if not tens of millions--comes the ability to make increasingly sharp and significant discoveries.

3. Ease of access. Mainframes were shared computers: everyone had to compete with everyone else for limited time. The desktop democratized the process, allowing everyone to have continuous access to a computer, and at affordable levels. 

4. Software improvements. In the time of the mainframe, analysts generally had to write their own code. This introduced a huge barrier to entry, not just in terms of time but in terms of training. Even the statistics packages of ten or fifteen years ago were clunky (just ask anyone who used SAS or Gauss). Programs like Stata and SPSS make it easy for anyone to run regressions.

Taken together, these changes do not add up to some sort of incremental improvement; they represent a radical transformation in our ability to engage in empirical analysis. There is statistics before the 1970s and 1980s, and statistics after. I think an argument can be made that the technological revolution introduced by the computer is the most important scientific advance since the "Scientific Revolution" began in the 1600s.

What have been the implications of this revolution? Here are three critical ones.

1. The Actuarial Turn. Starting in the 1960s, there has been a strong move away from relying on clinical judgment and towards using actuarial models. Only with the development of large-scale empirical models did we have the information necessary to appreciate the limitations in human judgment and to test and validate more rigorous actuarial alternatives. This attitudinal shift is central to an argument I'll make later, that ELS and the empirical social sciences need to adopt a more evidence-based approach; the evidence-based movement in many ways is another manifestation of the general shift towards actuarialism. (Given my embrace of actuarialism here, I should take a moment to remind Bernard Harcourt, who was on my dissertation committee when I was a graduate student, that he can't unsign my forms now!)

2. The Risk Society. Sociologists and other social scientists have noted that over the past three decades our views on what the government ought to do have changed; a key shift has been our growing desire that the government protect us from risk. David Garland and Jonathan Simon have written about this in the crime context, but as Cass Sunstein and others point out, it has been a more widespread shift, including areas such as environmental regulation, health care, and so on. Part of the reason could be a loss of faith in expertise and in the government's ability to provide for us, as Garland suggests, and part of it could be economic maturity (once we're wealthy enough, we favor protection over growth).

But while I do not yet have any evidence along these lines and have only just started to think about this point, it seems to me that the technological revolution must play an important role here as well. We simply know so much more about what can hurt us. The individual-level effect of a minor toxin, even one whose aggregate harm is substantial, is impossible to detect without large datasets and powerful computers. With more powerful tools comes more awareness of the risks we face, and thus greater demands for protection. For example, I would not be surprised to find evidence that the rise of toxic tort cases--and thus the rise of the use of complex scientific evidence in court--closely tracks the rise of the computer.
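One way to see the connection between computing power, data, and risk detection is a standard sample-size calculation. The numbers below are illustrative assumptions of mine, not from the post: to detect a rise in some adverse-outcome rate from 1.0% to 1.1% (a 10% relative increase) at 5% significance with 80% power, a two-proportion test needs on the order of 160,000 subjects per group.

```python
from math import ceil
from statistics import NormalDist

# Required sample size per group for a two-proportion z-test: how many
# subjects are needed to detect a small absolute rise in an adverse-outcome
# rate. The rates, alpha, and power below are illustrative assumptions.

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)     # two-sided critical value
    z_b = NormalDist().inv_cdf(power)             # power requirement
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p2 - p1) ** 2)

print(n_per_group(0.010, 0.011))    # on the order of 160,000 per group
```

With a dataset of a few thousand observations, the pre-computer norm, an effect of this size is simply invisible; with millions of records it becomes routine to find.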

These two trends then lead to the third:

3. The Use and Abuse of Empirical Evidence. Thanks to powerful computers and easy-to-use software, good empirical work has never been better. But thanks to powerful computers and easy-to-use software, bad empirical work has never been worse. And the bad work comes from two directions. In some cases the problem is naivete. Researchers with little formal training, but with a decent desktop and a site license for Stata, start to put together empirical projects. They are (seemingly) easy to do, and "but do you have data to support your claim?" is a question asked with growing frequency. Oftentimes these papers suffer from fundamental empirical flaws. This is a particular problem in ELS, since in many cases there is no peer-review check.

Perhaps more troubling is the cynical manipulation of empirical work. Advocates and advocacy groups frequently appear to manipulate results or to adopt methods that ensure (as best as possible) that particular results arise. Such sins are committed on both sides of many debates; no one has a monopoly on such behavior. And with the rise of a more data-driven society (a function of both the actuarial turn and the evolution of the risk society), the returns on having at least the veneer of empirical support are rising, and the technological revolution is driving down the costs of producing such results; together, these trends only encourage more and more statistical abuse of this sort.

On top of all this, even if we focus just on the sincerely developed, high-quality work, problems are starting to arise. There is simply more and more of it, and we lack the tools to extract the big picture from it. This is a point I will return to repeatedly.

Someone with a better understanding of the Bible can likely see several parallels here: making it to the Promised Land (powerful computers with huge datasets), but engaging in behavior that gets us exiled from it. We have never been in a better position to fundamentally understand how the world operates, but we have never had more chaff obscuring the wheat.

I want to end, however, on an important optimistic note. Why have I stressed just how new all this is? To make clear that it is not surprising how ill-prepared we are for what we are facing. That we do not have effective means for separating good studies from bad and synthesizing the findings of the good is not the result of decades of failure or indifference (and by "we" here I mean social scientists, since epidemiologists and medical researchers have started down this path during the past decade). It is just the result of not yet having responded to a new problem. We should view this as an opportunity, not as a reason to despair.

Likewise with the law. As Tal Golan, Edward Cheng and others have pointed out, the history of efforts to reform how the law uses scientific evidence has been a history of failed reform. That does not mean we should give up hope now and just accept our current procedures. The changes of the past thirty years dwarf those of the preceding three hundred years. The law does change in response to profound shocks, and I think that is what we are seeing here.

To get on the right path, however, social scientists have to rethink how they produce empirical knowledge. This is the point I will address next.


Posted by John Pfaff on April 7, 2009 at 09:38 AM | Permalink




As someone with a PhD, I find that some law professors think all that is required to do an empirical project is a college-level stats class taken 10 years ago plus an RA to do the analysis. On such a project the law professor has lost control: they don't know what they don't know, and the uncredited RA is really the one doing the analysis. I have seen many studies in law reviews that use regression analysis poorly because none of the authors are skilled in statistics. Someone in crim law would not think to write a family law paper out of the blue; the point is even stronger with empirical analysis, since it requires a totally different skill set.

Law professors who don't have the skills should not engage in these projects without an empirical coauthor. Empirical skill is not knowing the code to run a regression in Stata; it's understanding how models are built, having a basic grasp of how the math works (even though you rely on a computer to do it, you can do simple regression by hand), and knowing the limitations and assumptions of your analysis.

Posted by: Anon | Jun 23, 2019 12:59:50 PM



How about this first step: mainstream law reviews should stop publishing empirical works, and the faculty should get in the habit of "discounting" the quality of empirical works published in mainstream law reviews to move things along in this direction.

Posted by: Law Review | Apr 8, 2009 3:11:40 AM

John, a good starting point for a critique of "empirical work in law" is the ELS assumption that empirical research is synonymous with quantitative research. This reflects the overwhelming quant bias in political science. But even poli sci quants know that empirical research includes qualitative work. And folks in other cognate fields - like sociology - understand that qual research is essential to any thick account of what the world is like. It seems to me that empirical work is incredibly valuable, but good empirical work is rigorous, methodologically diverse, and self-conscious of its own limits.

Posted by: Dan Filler | Apr 7, 2009 10:23:39 PM

BDG: Both your examples work. Failing to include a key variable or failing to model a structural problem with the model (like endogeneity) are definitely signs of "bad" work. So too is failing to look closely at the data to appreciate, and control for, their defects. And so too is failing to think carefully about what your model is actually saying.

(Here's an example of the last concern. Some people have looked at the effect of crime in year t on the prison population in year t. But that model probably tells us very little, since the prison population in year t is a function of crime in t, t-1, t-2, t-3, ..., and a function of all those years in some sort of complex way that the simple model cannot catch.)
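A toy simulation (my own construction, with made-up lag weights) illustrates the problem: when crime is serially correlated and the prison population depends on several years of crime, regressing prison on current-year crime alone yields a coefficient that blends all the lag effects rather than the true contemporaneous effect.

```python
import numpy as np

# Prison population in year t depends on crime in years t, t-1, ..., t-4
# with hypothetical weights; crime itself is serially correlated (AR(1)).
rng = np.random.default_rng(1)
T = 200
crime = np.empty(T)
crime[0] = 100.0
for t in range(1, T):
    crime[t] = 100 + 0.8 * (crime[t - 1] - 100) + rng.normal(0, 5)

w = np.array([0.5, 0.4, 0.3, 0.2, 0.1])           # true lag weights (made up)
prison = np.array([w @ crime[t - 4:t + 1][::-1] for t in range(4, T)])
prison += rng.normal(0, 1, size=T - 4)

# Naive model: regress prison_t on crime_t alone.
X = np.column_stack([np.ones(T - 4), crime[4:]])
slope = np.linalg.lstsq(X, prison, rcond=None)[0][1]
print(round(slope, 2))   # well above the true contemporaneous weight of 0.5
```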

After that it gets tricky. What constitutes "high quality" work is a question the social sciences rarely ask, but one that the evidence-based policy movement is forcing empiricists to consider. I'll have a future post that tackles quality in more depth.

Posted by: John Pfaff | Apr 7, 2009 5:55:15 PM

John, on behalf of non-empiricists with some smattering of statistical training everywhere, can I ask you to be more specific when you say you see a lot of "bad" empirical scholarship? I don't want you to name names, but in your view, what distinguishes good from bad empirical work? Is it awareness of the limits of some kinds of data? Use or failure to use meaningful controls?

Posted by: BDG | Apr 7, 2009 5:15:04 PM

The comments to this entry are closed.