Thursday, September 12, 2013
Visualizing SCOTUS Doctrine III - Network Analysis Compared
In the first two posts of this series, I discussed the basics of the SCOTUS Mapping Project and its method of looking to dissents to uncover the competing traditions at play in the Court's contested doctrines. In today's installment, I explore how this project compares with other scholarly and technological approaches to the analysis of legal citation networks. I hope both to put the project in context and to give readers potentially interested in making maps a sense of the process involved.
Let's start with a little background. Prawfs fans may or may not be familiar with the fairly large and robust literature on legal "network analysis." (For a good survey article see here; for an influential example see here). A basic premise of this literature is that we can better understand the importance and influence of precedents (including, but not limited to, Supreme Court precedents) by analyzing the patterns of their subsequent citation in networks of interrelated opinions. People who write in this field often create sophisticated mathematical models and generate algorithms to be executed by computers parsing large volumes of case data.
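The core move in this literature can be sketched in a few lines: treat opinions as nodes, citations as directed edges, and measure influence from the citation pattern. Here is a minimal Python sketch; the case names are real, but the edges are illustrative placeholders, not an actual dataset, and real network studies use far richer measures than raw citation counts.

```python
from collections import defaultdict

# Toy citation network: each opinion maps to the earlier cases it cites.
# The case names are real, but these edges are illustrative only.
citations = {
    "Citizens United (2010)": ["Payne (1991)", "Burnet (1932)"],
    "Casey (1992)": ["Payne (1991)", "Burnet (1932)"],
    "Payne (1991)": ["Burnet (1932)"],
    "Burnet (1932)": [],
}

# In-degree (how often a case is cited) is the simplest network measure
# of a precedent's influence; fancier models (e.g., PageRank) build on it.
cited_count = defaultdict(int)
for case, cited in citations.items():
    for target in cited:
        cited_count[target] += 1

ranking = sorted(cited_count.items(), key=lambda kv: -kv[1])
print(ranking)
```

Even this toy version shows the appeal of automation: once the citation data exists, ranking thousands of cases takes a computer no longer than ranking four.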
I'll stress right off the bat that the SCOTUS Mapping Project is not an algorithm-driven or computer-automated undertaking. Rather, the Mapper software enables users to visualize doctrine after they conduct their own close readings of cases. At the same time, the project derives inspiration from network analysis -- I thus fully expect human cartographers using the Mapper software to look for citation patterns just as computers are trained to do in network analysis.
The difference is quantity versus quality. While computers can quickly sift through reams of text to identify co-occurring words and phrases across thousands of cases, people can make deeper conceptual and thematic connections between opinions based on synthesized understanding. People can both interpret text and read "between the lines" of opinions in ways that computers, at least for now, cannot. Visualizations engineered by network analyses will not produce the same perspective as visualizations created using the Mapper software and method.
The best way to illustrate this difference is via a concrete example. Consider then the Supreme Court's doctrine regarding stare decisis. This is the Court's "precedent about precedent" -- pronouncements about when the Court may (or may not) overrule its own prior decisions. How might we visualize the most critical Court opinions in this area? First, let's glance at an image produced using applied network analysis -- courtesy of the geniuses over at Ravel Law.
Figure 1 (Full size image)
I created this image by visiting Ravel Law's brilliant website and typing in "stare decisis." The software found 6678 cases with this phrase and displayed the top 75 most relevant. This relevance was presumably calculated based (in part) on the number of subsequent citations to the found cases by other cases in the network. I then moved my cursor over one of the earliest cases in the network, which also happened to be one of the largest bubbles (meaning most cited), and it highlighted 1932's Burnet v. Coronado Oil & Gas Co. The software automatically highlighted other cases that cited Burnet. Then I took a screen shot.
The power and speed of the Ravel visualization is profound. It takes mere seconds to produce. It is dynamic (changes depending on where you put your cursor) and hyper-linked to the cases themselves. I am a huge fan and insist my 1L students learn to use the site. Yet I regard Ravel more as a tool for legal research than one for doctrinal mapping. Its network analyses efficiently guide human users to oft-cited cases. However, the visualization does not reveal the character of the cites -- whether they signal agreement or disagreement, extension of a prior case's reasoning or sharp and damning distinction. To recognize these kinds of doctrinal fault lines, a human reader is still required. And to visualize these fault lines, the Mapper has advantages.
Map 7 (Full size image)
Map 7 represents the Court's "precedent about precedent" in a different way. Note first that 1932's Burnet also has a prominent place in the visualization. However, from this map it becomes immediately apparent that it is Justice Brandeis' dissent from that case that is important. Furthermore, the legend reveals that Brandeis' Burnet opinion advocated overruling a challenged precedent -- as do all the opinions in red. On the other hand, the blue opinions represent cases that advocated affirming prior precedent. The map thus suggests the existence of competing schools about how to approach stare decisis and charts the debate up through the recent clash in 2010's Citizens United.
As I've said in prior posts, I do not seek in this forum to engage in substantive discussion about particular doctrines. (For those interested in my take on "precedent about precedent", see here and here too). Rather, the point is to suggest that, as a matter of form, the scheme behind Map 7 permits users to draw a picture of the lines of agreement and disagreement in doctrine that cannot be represented in a conventional network analysis.
Of course, the automated nature of network analysis also has advantages. The biggest one is time. While the Ravel image took me seconds to generate, Map 7 took many hours. To uncover the competing lines, I first examined the stare decisis debate between the majority and dissenting opinions in Citizens United. Then, I pulled all the opinions those competing opinions cited and examined them. I repeated this process back over 100 years. Meanwhile, I noted all the opinion authors, tallied the votes their opinions received, and coded all the opinions by whether they were arguments for or against overruling. I read and re-read closely in order to ascertain which opinions actually fit together and which were opposed.
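The workflow just described -- start from the dueling Citizens United opinions, pull everything they cite, code each opinion's stance, and repeat backward -- is essentially a manual breadth-first walk through the citation graph. A minimal Python sketch of that loop follows; the opinion table and stance codes are hypothetical stand-ins for the human judgment calls, which is precisely the step no script performs for you.

```python
from collections import deque

# Hypothetical lookup table built by a human reader:
# opinion -> (opinions it cites on stare decisis, coded stance).
# Stance codes mirror the map legend: "overrule" (red) vs. "affirm" (blue).
OPINIONS = {
    "Citizens United majority (2010)": (["Payne (1991)"], "overrule"),
    "Citizens United dissent (2010)": (["Casey (1992)"], "affirm"),
    "Payne (1991)": (["Burnet dissent (1932)"], "overrule"),
    "Casey (1992)": (["Burnet dissent (1932)"], "affirm"),
    "Burnet dissent (1932)": ([], "overrule"),
}

def trace_lines(seeds):
    """Walk citations backward from the seed opinions, recording each stance."""
    coded, queue = {}, deque(seeds)
    while queue:
        opinion = queue.popleft()
        if opinion in coded:
            continue  # already examined on an earlier pass
        cited, stance = OPINIONS[opinion]
        coded[opinion] = stance
        queue.extend(cited)  # pull everything this opinion cites, and repeat
    return coded

print(trace_lines(["Citizens United majority (2010)",
                   "Citizens United dissent (2010)"]))
```

The traversal itself is trivial; the hours go into filling in the table -- reading each opinion closely enough to decide which line it actually belongs to.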
Throughout this process, I used the Mapper to help me organize my understanding of the connections between cases. I edited and re-edited maps and tried different looks. This is key. Mapping is an iterative process. Map 7 obviously has far fewer data points represented than does the Ravel visualization. Yet this is the result of a deliberate attempt to distill the relevant doctrine down to its essentials. Indeed, the value of editing can be seen by taking a peek at my "unedited" map of this same territory.
Map 7.a (Full size image)
To my eyes, the map above is far too busy to be useful. While it represents more of the Court's stare decisis opinions, it does not capture the essential doctrinal dialectic as sharply as Map 7. This brings me to a final observation. The Maps are not the Territory. Doctrinal maps do not purport to depict the whole of any given doctrine so much as sketch its essential competing lines. The point is not to provide exhaustive detail of the terrain but rather to produce a useful guide to the key landmarks and contested boundaries.
In the end, doctrinal mapping involves far more editorial decision-making than network analysis. This is not a bad thing. Legal advocacy, scholarship and teaching all implicate editorial choices. So does judicial opinion writing. Making arguments is what we do as lawyers. Maps just provide a way to make the argument about doctrine and communicate it efficiently to an audience.
That's it for this time. In my next post, I hope to discuss ways doctrinal maps might be used in the law school classroom. I'll also provide some more details on the stipends available for folks who may have interest in creating their own maps and contributing to the project library. As always, comments, questions, or critique is welcome. Thanks for listening and stay tuned!
Posted by Colin Starger on September 12, 2013 at 07:54 AM | Permalink
Fascinating. I assume you are aware of Karl Manheim's constitutional flowcharts ... http://classes.lls.edu/fall2006/conlaw2-manheim/charts.html
Not the same thing exactly.
Forgive my lack of a JD, but how is what you are doing different from Shepardizing or KeyCiting?
Posted by: John Mayer | Sep 13, 2013 1:49:40 AM
Thanks John. I had not seen Karl Manheim's flowcharts before, so I appreciate your including the link. I agree that doctrinal maps are not quite the same since they focus on doctrinal evolution -- the opinions themselves and the connections between them. Though the flowcharts sometimes include case references, their emphasis seems to be conceptual rather than doctrinal.
I hadn't thought of the mapping project as similar to Shepardizing or KeyCiting, and it is a very interesting observation. Certainly, the projects share a concern with authority and involve a process based on close reading to track the fate of a proposition over time. Yet I see differences in both the form and substance of the projects. Formally, neither Shepards nor KeyCite makes visualization central to the presentation of authority. The snapshot you get from them is extremely useful for checking the current validity of authority, but it does not present a picture of the evolution of the authority over time.
Substantively, Shepards and KeyCite approach authority from the perspective of "what is good current law" -- the perspective of the majority. So, they do not have a first-instance concern with dissent. You can't use Shepards or KeyCite to determine whether a dissent has been redeemed. These services don't try to capture the dialectic in doctrine so much as accurately describe current validity.
Of course, Shepards and KeyCite do what they do incredibly well, and current validity is what 99% of lawyers will care about when looking at a case. But for those looking to understand a whole area of the Court's doctrine in depth, my project will have a different appeal. It aims to help teach the history of particular doctrines through the lens of the arguments that define them. It has a dialectical approach.
I hope that makes some sense. Thank you again for your comment and all the best.
Posted by: Colin Starger | Sep 13, 2013 12:59:36 PM
Thanks for that explanation - very much clarified things for me. Your response also brings to mind the work of Colin Tapper. I believe he is retired, but he wrote a lot about similar stuff. The best citation I could find online is ...
I don't have a Hein account, but you might via your law library. He is worth looking into as I remember seeing him talk about using automated citation analysis to determine precedent - sort of like an automatic Shepards, but he was into looking beyond citations too.
A more current place to look is Fastcase...
...who are also looking at automatic citation analysis or some such. I haven't played with it, but have seen impressive demos.
Finally, you should be aware of PreCYdent.com (now defunct) as it purported to use Page-Rank-like/Google algorithms applied to the text of caselaw to find the best cases. Here's an interview with the founder...
...who is (or was?) a law professor and might still be around to talk to.
My interest stems from my work with CALI (see freelawreporter.org), but also because I wrote a paper during my Masters studies called "The Case Color Wheel" where I designed a visual interface for citation analysis that used concentric circles around a case where parts of the circle were different colors based on whether the case supported or overturned the case on the inner ring. It was kind of a heatmap + circular slide rule concept. I never coded it, just proposed the design in the paper. This was back in 1991 and based on a lot of Colin Tapper's work.
Posted by: John Mayer | Sep 13, 2013 4:43:49 PM
Fantastic John, thanks. I love your "Case Color Wheel" concept. Did you have a way in mind to automate the analysis as to whether an outer-ring case supported or overturned an inner-ring case? Or did you imagine a human was required for that step? So far, I've not seen NLP intelligence built into computer-driven network analyses that can pull off the task of determining whether a legal rule from a prior case is being affirmed/rejected or extended/distinguished. It's just hard to do.
Posted by: Colin Starger | Sep 13, 2013 10:41:41 PM