
Monday, August 13, 2018

Submission Angsting and the Availability Heuristic

[Histogram: number of law schools at each US News peer assessment score, 1.1 to 4.8]

I have not participated in the twice-yearly feeding frenzy known as the student-edited law review submission season in several years.  I may this year, and I'm blogging, so it's hard not to read the comments on the "submission angsting" post.  (NB: autocorrect kept changing it to "submission ingesting," which I think is clever.)

This is a curmudgeonly but data-based contribution in aid of angst reduction.  I vaguely recall posting something like this eleven or twelve years ago, no doubt when many young law professors or aspiring law professors were still in high school.  I direct it to those readers angsting significantly over, say, placements in law reviews at a school ranked 65 versus a school ranked 75, or some similar consideration.

Paul Caron over at TaxProf Blog performs the annual community service of re-ranking the schools by their "peer assessment" number, which ranges from 1.1 at the low end to 4.8 at the top.  I am assuming for this exercise that the peer assessment is meaningful, even though I have my doubts.

My doubts stem largely from the likelihood that so much of this is determined by the availability heuristic, the term Tversky and Kahneman coined for a mental shortcut in which people judge probability, frequency, or extremity by how easily, and how much, relevant information can be brought to mind.  Hence, we bias our judgments toward whatever information happens to be available.

Having said that, here goes.  One of the most available pieces of information is the linear ranking in US News.  It's really available: available to the people who send in their votes for the peer ranking, and available to authors trying to place their articles.  What is not so available (thank you, Paul), because you ordinarily have to pay to get it, is not just the re-ranking by peer assessment but the actual peer scores themselves.

The histogram above shows the peer assessment scores from the 2019 US News law school rankings, by the number of schools at each peer score from 1.1 to 4.8.  You can draw your own conclusions, but I think trying to thin-slice differences between scores close to each other is kind of silly.  It's pretty clear that, whatever peer assessment means, the top 17 are in their own world.  Between 18 and 50, yes, maybe there's a real difference between 18 and 50, but I wouldn't get too worked up about the difference between 30 and 40.  That effect is even more dramatic in the 50-100 range.  The point is that the rankings are linear, but the actual data sit on a curve, so the differences between linear rankings mean different things at different levels.  (I'm pretty sure re-grouping the data by other significant categories, like entering LSAT score, would yield similar results.)
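(For the quantitatively inclined, here's a little Python sketch of what I mean.  The "peer scores" below are invented, not the actual US News numbers, and numpy is assumed; the only thing the made-up data shares with the real thing is the shape, with most schools bunched toward the low end of the scale.)

import numpy as np

# Illustrative only: these "peer scores" are invented, not the actual
# US News data.  The shape of the distribution is what matters.
rng = np.random.default_rng(0)
scores = np.sort(4.8 - 3.7 * rng.power(2.5, 100))[::-1]  # 100 schools, 4.8 down toward 1.1

# The same ten-place move "costs" very different amounts of peer score
# depending on where you are in the linear ranking.
for lo, hi in [(10, 20), (40, 50), (80, 90)]:
    gap = scores[lo - 1] - scores[hi - 1]
    print(f"Rank {lo} -> {hi}: ten places, peer-score gap = {gap:.2f}")

# A crude text histogram of how many schools fall in each half-point bin.
bins = np.arange(1.0, 5.5, 0.5)
counts, _ = np.histogram(scores, bins=bins)
for left, count in zip(bins[:-1], counts):
    print(f"{left:.1f}-{left + 0.5:.1f}: {'#' * int(count)}")

With the scores bunched at the low end, the printed gaps shrink as you move down the list: that's the "rankings are linear, data sit on a curve" point in miniature.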

It's why I find it (what? sad? odd? unthoughtful?) when schools get lauded or dinged for moving eight or ten places one way or the other between about 50 and 125.  Yes, the data are meaningful when you jump from 105 to 18 or vice versa.  But not when you "sank" from 50 to 62.

Okay, that's it.  Back to our regularly scheduled blogging.

UPDATE:  I'm going to close the comments here.  If this merits any discussion, it probably ought to occur at the angsting post.  

Posted by Jeff Lipshaw on August 13, 2018 at 03:12 PM in Getting a Job on the Law Teaching Market, Life of Law Schools, Lipshaw
