
Friday, April 09, 2010

US News and Rationally Ranking Law Schools

Over at Concurring Opinions, Prof. Solove asked tough questions about how deans are supposed to rank law schools on a 1 to 5 scale when there are so many schools, and surely more than five of them are demonstrably better than one another and than the rest. A rational, all-knowing dean would run out of numbers pretty quickly. I joked in a comment that third- and fourth-tier schools are stuck with negative reputation/quality scores.

Robert Morse responded that the rankings match up with the level of knowledge deans actually have, and that they come out right in the aggregate. Prof. Solove replied by asking whether that means outliers and gaming drive the rankings.

I, in turn, posted a comment on that thread, but thought the point worth discussing further in a separate post while I have the opportunity to do so here. In short, I think Morse has it right, and the likelihood of gaming or outlier effects is low.


I tend to think it is difficult to tell the difference in quality among JD programs (which, I believe, is the new question - not reputation). If that is the case, there is an argument that the 5-point scale is exactly the granularity you want.

What the coarseness means is that deans aren’t being asked to put schools in order. They are being asked to put them in groups: top 20%, second 20%, and so on. It is the variation among those groupings, across a large sample of voters, that creates the ordering. If 90% of the voters put a school in the top 20% and 10% put it in the second 20%, then that school will receive a higher score than a school with an 80/20 split.
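To make the arithmetic concrete, here is a minimal sketch of that averaging (the vote splits are hypothetical, not actual survey data):

```python
# Coarse 1-5 votes, averaged across many voters, yield a finer ordering.
# The vote splits below are hypothetical, chosen only for illustration.

def average_score(vote_shares):
    """vote_shares maps each 1-5 score to the fraction of voters giving it."""
    return sum(score * share for score, share in vote_shares.items())

school_a = {5: 0.90, 4: 0.10}  # 90% of voters put it in the top group
school_b = {5: 0.80, 4: 0.20}  # 80/20 split

print(round(average_score(school_a), 2))  # 4.9
print(round(average_score(school_b), 2))  # 4.8, so school A ends up ranked higher
```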

The beauty is that voters don’t have to choose an ordering between two (obviously close) schools. Indeed, the complaint that voters simply lack the information to do so is answered, at least somewhat. Voters may not be able to rank schools in order, but perhaps they are better able to rank them in groups (though even that is somewhat debatable - an issue I will leave for others to argue).

After all – the top 10 programs are all outstanding – give them all 5’s. 10-40 – well, they’re pretty good, so 4’s, with maybe some 5’s. 40-100 – “average” – so 3’s, with some 4’s and some 2’s. Third tier? All about a 2. Fourth tier? 1 (ouch). Indeed, voters need not put schools into 20-percent groups - perhaps all third- and fourth-tier schools are 2’s for a given voter, in which case they all wind up in that voter’s lowest grouping.

As a result, it is only the voters who would put a particular school into a different category altogether who determine the ordering.

So, among the schools listed in Prof. Solove's post, I suspect that the groupings look like this:
Yale 5 (top 20%)
Michigan 5 (top 20%, with a few putting it in the second 20%)
Cornell 5 (top 20%, with some putting it in the second 20%)
USC 4 (top 40%, with some putting it in the top 20%)
Emory 4 (top 40%, with a few putting it in the top 20%)
American 3/4 (top 60%, with many putting it in the top 40%)

And so forth. This, I think, is not unreasonable, and it could yield more robust results than a straight ranking of 180 schools would, because voters are not expected to have perfect knowledge.

Posted by Michael Risch on April 9, 2010 at 02:50 PM in Life of Law Schools | Permalink


Comments

You would be right, anon, if we thought deans answered the question that US News now asks -- about the quality of JD programs. Unfortunately, barring experience at another school, deans have very little information other than reputation when it comes to assessing the quality of other JD programs. So regardless of the question asked, what we are left with is a measure of schools' reputations, not a direct measure of how peers evaluate the quality of education a JD program confers.

Posted by: anonthesecond | Apr 9, 2010 10:12:32 PM

It seems like Solove is just reifying the existing rank order. There are eminently plausible and defensible grounds to rank schools lower than where they have ranked forever. For example, lots of practicing lawyers complain that Yale fails to teach students doctrinal law or how to research and write in a practical setting. Is it unreasonable (which is different from saying "patently off the norm") to rank Yale at a 4 or even lower because you honestly believe that the quality of its JD curriculum (which is the standard now, as Risch points out) is below that of other schools with more practical, or at least more varied, training? On the academic side, it doesn't seem implausible that someone could conclude that Yale professors (or Michigan professors, or Duke professors, etc.) are overrated and living off of past reputations. Would you throw out the score of anyone who makes that judgment? That seems like the naive argument of the old XOXO posters - anything that deviates from the current groupthink can't be right.

Of course, that doesn't mean I would rank Yale or any of those schools below where they currently sit, but it seems rash to discard that possibility out of hand.

Posted by: anon | Apr 9, 2010 7:35:11 PM

"Whether they get 3s and 2s seems somewhat arbitrary on such an imprecise system."

Well, I agree with that. It seems that the vast bulk of schools get 2s or 3s. That is a problem with any unstructured 1-5 survey, and a statistician would have to defend why it works, but I tend to think it does. Even if it seems arbitrary whether the unwashed masses get a 2 or a 3, if enough people answer, you will get an ordering of sorts out of the aggregated ordinal responses. It's possible that one can't defend this type of survey, but if so, the problem goes well beyond U.S. News to all types of market research.
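To illustrate why sheer numbers can rescue a crude 2-or-3 survey, here is a toy simulation (the "true quality" values and the voting model are invented for the example, not drawn from any real data):

```python
import random

def simulate_votes(true_quality, n_voters, rng):
    """Each voter gives a 3 with probability (true_quality - 2), else a 2,
    so the expected vote equals the true quality (assumed to lie in [2, 3])."""
    p_three = true_quality - 2
    votes = [3 if rng.random() < p_three else 2 for _ in range(n_voters)]
    return sum(votes) / n_voters

rng = random.Random(0)
schools = {"A": 2.7, "B": 2.5, "C": 2.3}  # hypothetical true qualities
averages = {name: simulate_votes(q, 1000, rng) for name, q in schools.items()}
print(averages)  # each average lands near its true quality, preserving the order
```

With 1,000 voters the sampling error on each average is around 0.015, so even though no individual voter distinguishes the schools by more than one coarse point, the aggregate reliably orders them.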

The bigger problem, for me at least, is that even with the vast majority getting a 2 or 3, there's no reason to believe voters have sufficient information to adequately put schools into one of those two groups.

Even so, however, if I'm competing with 100 schools for a Tier 2, 3, or 4 placement, I would rather people be limited to putting me in the 2 group or the 3 group than in the 2, 2.1, 2.2, 2.3, 2.4, 2.5, ..., 3.8, 3.9 group, because with the latter the likelihood of a bad outcome due to poor information seems greater.

Posted by: Michael Risch | Apr 9, 2010 4:38:11 PM

"But the reputation scores appear to be in the general order one would expect, so perhaps the outliers serve their purpose, so long as they are distributed primarily due to rational concerns."

The problem is that I don't think the outliers are based on rational concerns.

The reputation scores do appear to be in a decent general order at the top, though they start to diverge more wildly as one moves down the list. Consider schools ranked 41-80. They are within the top 40%, so technically they should get 4s. Most have scores in the mid-2s. So they're getting mostly 3s and 2s. Whether they get 3s or 2s seems somewhat arbitrary on such an imprecise system.

Posted by: Daniel Solove | Apr 9, 2010 4:27:52 PM

Like I said, tough questions. I used 20% groupings as an example - that is obviously not how the votes actually break down. My other example has schools ranked 40-100 getting a 3, which is closer to reality - schools start scoring in the low 3's at about 25 or 30. So I suspect the breakdown is more exponential than linear, with a very large grouping in the 2-3 range.

At the top, however, I think your points would make sense if everyone agreed, but perhaps not everyone does.

Perhaps someone gives Yale a 4 because of its 1L writing curriculum and gives Harvard a 4 because the class size is so large. Not many, but a few, and perhaps enough to change the average a tiny amount.

Perhaps some do see a difference in quality of schools. For example, as an alum, I'm quite hurt that Chicago didn't receive the unanimous 5 score from you, as I suspect many others would think it deserves.

Perhaps someone went to one of the schools and thinks it's not quite as good as everyone says it is.

Perhaps some voters think the 5 and 4 blocks should be bigger than others think they should be.

Perhaps some gave all the top-10 schools a 4 and went down from there. Whether this is gaming or not may depend on intent, I suppose.

And I'm sure there is some gaming.

Should these be considered outliers? Probably, and to that extent it is true that outliers define the final average. But the reputation scores appear to be in the general order one would expect, so perhaps the outliers serve their purpose, so long as they are distributed primarily due to rational concerns.

Posted by: Michael Risch | Apr 9, 2010 4:09:17 PM

So we need to depend on the few who put Michigan in the second 20%. But are these raters really to be taken seriously? And is the difference in the number of raters who put Michigan in the second 20% as opposed to Penn in the second 20% really meaningful, or just due to a fluke?

Consider Yale, Harvard, Stanford, Columbia. Who doesn't put them in the top 20%? I think that any score of less than a 5 for these schools should be thrown out because it is so patently off the norm.

In the end, except for schools really on the bubble (those toward the bottom of the top 20%), the average scores (throwing out all gaming or anomalies) should be as follows:

Yale 5.0
Harvard 5.0
Stanford 5.0
Columbia 5.0
NYU 5.0

(clearly the top 5 out of 200, i.e. the top 2.5%; any scores of 4 are just noise that should be discounted)

And your numbers above assume that the top 20% is the top 20 schools. It's not. The top 20% is about 40 schools! So all the schools you list -- Yale, Michigan, Cornell, USC, Emory, and American -- are in the top 20% of law schools, and American is the only one that any serious ranker would think might fall outside the top 40.
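In code, the throw-out-the-anomalies rule is just a trimmed average. A sketch of that idea (the vote counts and the trimming threshold are hypothetical):

```python
import statistics

def trimmed_average(scores, max_dev=1):
    """Average the scores after dropping any that fall more than max_dev
    from the median - a sketch of the 'throw out patently off-norm votes'
    idea. The threshold and the vote data below are hypothetical."""
    med = statistics.median(scores)
    kept = [s for s in scores if abs(s - med) <= max_dev]
    return sum(kept) / len(kept)

yale_votes = [5] * 95 + [4] * 4 + [1]          # one anomalous 1
print(trimmed_average(yale_votes, max_dev=0))  # 5.0 - the 4s and the 1 are noise
```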

Posted by: Daniel Solove | Apr 9, 2010 3:51:23 PM
