
Saturday, July 04, 2015

Wine, Soda Pop, and Law Schools - More on "Law Review Lift (Drag)"

Some time this month I will get to a relatively more serious topic, like textual opportunism, but for right now I'm still fiddling around with Al Brophy's ranking system.  

So that I don't bury the lede, let me say up front that I have played some simple-minded statistical games with Al's data.  What I come up with is that, among academics, "brand," as with soda pop, means a lot, and it is relatively sticky and independent of what is going on with the students.

I also think it's pretty obvious that there is a relationship between the "brand" and student data (e.g., high correlations between any ranking system and LSAT scores). What got me interested, however, as I noted a few days ago, was the differential when Al included or didn't include a different and interesting stat: how often the school's main law review (not its faculty) got cited. My intuition is that what other profs think about placing articles in a school's review (based on my own experience) is a lot like the peer reputation score, except that it does measure a revealed preference (i.e., when you rank "peer reputation" as a participant in USNWR, it doesn't cash out to anything; placing an article does!).

The problem with all of these systems, in which we are "ranking" something with many complex factors (like wine) is that the judgment is qualitative, even if it looks quantitative. Often it's qualitative simply because it's qualitative (e.g., "peer reputation"), but even when it's fully quantitative it's qualitative because of the judgments one makes in weighting the quantitative factors.  I was once a partner in a big law firm. Our partnership agreement called for compensation to be determined by a committee, which in turn used a list of factors like "billable hours," "service to the firm," "client responsibility," etc. Every two years the committee turned out a ranking that set your compensation relative to all the other partners. Similarly, if you aren't a hermit during early March of each year, you hear about a double ultra secret committee in Indianapolis deciding which of the "bubble teams" gets into the NCAA basketball tournament. Same thing.  Recent results? Body of work? Bad losses? Good wins?

In any event, I played with Al's data and made some scatter plots and regressions in Excel, all of which follow the break.

I should note that I ran my little exercise by one of the toughest critics of empirical work I know, not for an endorsement, but to see if it was okay to "bin" the data into those 10+, 20+, and 30+ differentials between Al's 2-variable and 3-variable results. My interlocutor (who will remain nameless to protect the innocent) said that binning was okay if there was some theory behind it, but his or her very, very full and thoughtful reply to my question reaffirmed my belief that data without judgment is blind (and judgment without data is empty, to be fair, in each case paraphrasing Kant). The big issue is whether just a few outliers are responsible for the outcomes (which you can see by eyeballing the scatter plots). That may be true here. So with that disclaimer, and recognizing this is a blog post, for God's sake, and not a peer reviewed research paper, here's what I came up with.
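For concreteness, the binning step can be sketched like this. The 10+, 20+, and 30+ thresholds come from the post, but the helper name and the sample differentials are my own illustration, and I am assuming the bins are cumulative (a school with a 33-point differential also clears the 10+ threshold).

```python
# Hypothetical (volume number, differential) pairs -- not Al's actual data.
data = [(40, 4), (55, 12), (60, 25), (75, 33), (90, 18)]

def bin_at_least(pairs, threshold):
    """Keep only the schools whose 2-variable vs. 3-variable
    ranking differential meets the threshold (10+, 20+, or 30+)."""
    return [(vol, diff) for vol, diff in pairs if diff >= threshold]

print(len(bin_at_least(data, 10)))  # schools in the 10+ bin
print(len(bin_at_least(data, 30)))  # schools in the 30+ bin
```

Each larger threshold produces a smaller subset of schools, which is why the three bins can yield different correlations.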


If you plot law review "lift (drag)" of 10+, you come up with a positive correlation to law review volume number (.339).  See chart above the break.

If you do the same for "lift (drag)" of 20+ and 30+, you come up with even higher correlations, .42 and .55, respectively.  (See above left and right.)
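Since those correlations came straight out of Excel, here is a minimal sketch of the same computation; the `pearson` helper and the (volume, lift) pairs are purely illustrative stand-ins, not Al's actual data.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical pairs: law review volume number vs. "lift (drag)".
volumes = [40, 55, 60, 75, 90, 100, 110, 120]
lifts = [-5, 2, -1, 8, 12, 15, 10, 22]

r = pearson(volumes, lifts)
print(round(r, 3))  # a positive value, consistent with the "brand" story
```

A positive coefficient here would say that older reviews (higher volume numbers) tend to show more lift, which is the "brand is sticky" intuition in the post.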

What do I conclude? Probably nothing more than common sense would tell me: "brand" makes a difference; it takes a long time to develop one; and once you have it established, it sticks around enough to bias other data.

Posted by Jeff Lipshaw on July 4, 2015 at 03:39 PM in Article Spotlight, Life of Law Schools, Lipshaw | Permalink

Comments

What makes this ranking system even more fantastical is that we know that citations don't mean a whole lot. From the Harrison and Mashburn article that stirred things up not long ago:

"First, were the instances in which the cited work is actually mentioned or discussed in the text in manner that makes it clear that the author is responding to or building on the prior work. For convenience this is called "substantive reliance." Second, there were instances in which a cited work was noted because it included a factual statement or an opinion and was referenced by the author, but the cited work did not appear to play a role otherwise. Many of these cites were a version of hearsay in that the author was asserting a fact or an opinion and supported it by noting that someone else had made the same assertion. It is hard to view these cites as true authority for the statement made by the author, but the practice is common among legal scholars. The final group was composed of instances in which it was difficult to connect the citation in any substantive way to the work of the author or to any specific assertion for which is was authority. "Casual notation" is an accurate label for this classification.

"In the survey of one hundred, two citations fell in the "substantive reliance" group. The 98 remaining citations fell evenly within the second two categories. The line between these two categories was difficult to draw and another person analyzing the data or even a second analysis by the current researchers could result in a different count. Nevertheless, virtually all of citations examined fell into the hearsay or casual citation categories. It was rare to find an author who engaged the material found in the cited work."

I was going to quote that as a criticism of Al's ranking scheme, but I think it actually supports him in a way. I think ranking based on faculty reputation is silly, since what other professors (at other schools) think of you has little or nothing to do with how good of a professor you are. But if you want to just measure reputation without being muddied by concerns over actual quality, wow oh wow, law journal citations seem to nail it. Second best would be something along the lines of Facebook likes, or maybe /r/circlejuris.

Posted by: Derek Tokaz | Jul 5, 2015 7:31:51 AM
