Tuesday, July 08, 2008
What are we voting about in US News?
The most heavily weighted element of the US News rankings, at 25%, is the quality assessment of each school by law professors. Right now, it's unclear what criteria individual faculty use, and in the aggregate, the assessment simply replicates previous US News rankings, as Indiana's Jeffrey Stake has shown. We can do better than that.
So as a future (I hope) US News voter, I'm trying to figure out what exactly I'm voting about -- and what audience I'm voting for -- in assessing the quality of a school's "program," as U.S. News appears to instruct.
I guess the way the rankings work is this: they're designed to help employers figure out where to hire from, and the idea is that students then use the rankings derivatively -- that is, not as an actual sign of a school's quality, but as a sign of what employers will think of its quality, and therefore of their own career prospects. Russell Korobkin of UCLA -- whose 1998 essay on rankings led in part to his giving the keynote address, framed as a response to Cass Sunstein's and Richard Posner's contributions, at the excellent Indiana symposium on rankings a few years ago -- has called this the "primary purpose" of rankings: to "coordinate the placement of law students with legal employers." This seems right to me.
If that's the case, then when I fill out the US News survey as (again, I hope) a newly tenured professor at the University of Georgia in a few years, I’m going to be evaluating each school's "program" for future employers directly, and indirectly for prospective students.
So what do employers want to know, and how can I help?
Let's look at the whole picture. Overall, U.S. News basically provides information in four categories (as it describes them): quality assessment, student selectivity, faculty resources, and placement success. The survey we fill out makes up most of the quality assessment piece; the practitioner survey accounts for the rest. Of the remaining categories, student selectivity must be there to give employers some rough sense of graduates' abilities before law school, and placement success could be designed either to give employers a signal about student ability or to give students a sense of how they'll fare in the market. Then there's faculty resources, which most think is noise or worse, but which must be designed to get at some measure of the quality of the education itself.
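To make the weighting scheme concrete, here's a toy sketch in Python of how a composite score built from these four categories might be assembled. Nothing in it is official: the only figure taken from this post is that the faculty survey carries 25%, and the category weights, scales, and function name below are placeholders chosen for illustration.

    # Illustrative only -- U.S. News does not publish its formula in this form.
    # The category weights are placeholders; per the post, the faculty survey
    # alone carries 25%, and the quality-assessment category also bundles in
    # the practitioner survey.
    def composite_score(quality_assessment, selectivity,
                        faculty_resources, placement,
                        weights=(0.40, 0.25, 0.15, 0.20)):
        """Weighted sum of the four category scores, each assumed on a 0-100 scale."""
        w_q, w_s, w_f, w_p = weights
        return (w_q * quality_assessment + w_s * selectivity
                + w_f * faculty_resources + w_p * placement)

    # Hypothetical school: strong placement, weak faculty resources.
    print(composite_score(quality_assessment=80, selectivity=75,
                          faculty_resources=50, placement=90))  # -> 76.25

The upshot of the sketch is just that the peer survey is the single largest input, which is why it matters so much what that vote is actually measuring.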
So given what employers and prospective students are told by the other three categories, what "program" ought we to be assessing when we fill out the survey? I think the choices are: our program of knowledge production, our program of lawyer education, or both. Well, what would employers and prospective students want to know? Certainly, if employers already have some sense of students' abilities coming in from the numerical credentials, they would probably want to know, as Nancy Rapoport has put it, about the "value added" that a particular school provides the future lawyers they're considering hiring. They want to know about the quality of the educational program, not knowledge production.
I can't imagine why employers (or most prospective students at most schools, for that matter) would care about the quantity and quality of knowledge production at a particular institution. I care, society ought to care, universities and law schools ought to care, but employers don't and shouldn't care much -- and I don't think anyone has seriously made a case otherwise.
Well, I've convinced myself of the best way to do this, and it wasn't a close call -- evaluate the quality of the education "program," and use that for 100% of the score I give each school. How exactly I do that is a separate matter, but I'll save that for another day. Scholarship won't count at all.
But one thing does give me pause: I've just gone through a quick and dirty exercise of arguably Dworkinian interpretation -- trying to decide what to do, and how to decide, by figuring out my role and trying to bring some coherence to the social practice of voting in the US News rankings. But maybe this was a silly exercise. Maybe the easiest thing to do is just to look to the leading voice in the legal academy and the media on how to make the US News rankings better.
Maybe I should just ask myself: what would Leiter do? Professor Leiter, if you're out there and have a chance, I'd love to hear from you and others (including students and lawyers who hire law school graduates!) -- and even if not, I'll try to answer the question tomorrow in my next post.
Posted by Jason Solomon on July 8, 2008 at 12:16 AM in Life of Law Schools | Permalink
Comments
I'll definitely read your follow-ups. If the task of informed, responsible voting can be done at all, a group effort is likely the only way.
Posted by: Joseph Slater | Jul 8, 2008 5:35:42 PM
Joe, I'm glad you're curious -- that means, I hope, that you'll keep reading into next week when I take a first stab at the question of how to do it. Short answer, I think: some institution or group of people needs to aggregate the data -- do what Leiter has done for scholarship, but for education. A bunch of work, but definitely possible.
Posted by: Jason Solomon | Jul 8, 2008 3:05:27 PM
It's not just that voters don't agree on what the criteria are. It's also that even if there were broad agreement, or even if one person decides on what he or she thinks is the best way to measure, I can't see how any law prof. -- presumably engaged in other, more important work -- could get enough factual information about enough schools to do a responsible job.
I'm not scorning Jason for thinking about this seriously; that's the responsible thing to try to do. But re this comment of his on how he will apply his criteria, color me curious: "How exactly I do that is a separate matter, but I'll save that for another day."
Posted by: Joseph Slater | Jul 8, 2008 2:37:25 PM
I'm the wrong Brian, but here are a couple of thoughts.
I really like the methodological move of exposing the fact that there is no widespread social agreement about what the criteria for ranking are, and the Dworkinian interpretive move that follows is also nice.
I'm not so convinced by your application of it. The drafters of the legislation -- er, rankings -- may have intended one use for one audience, but in practice there are several audiences in the community who put the rankings to different ends. For instance, the interpretive community of which you and I are members uses U.S. News as a rough proxy for law journal quality. Home-faculty scholarship is at least an input into that (as it is, I'd argue, into the "value added" question you address). Others (although I like to think I am not one of them) use the rankings as a heuristic for social status, which in principle derives directly from the quality of scholarship. Since these are the practices of my community, when I evaluate the best interpretation of the rankings on the "fit" dimension, I have to consider these uses, too.
(And, actually, we could have an antecedent argument about whether the scholarly legal community is part of or distinct from the community of other legal employers. I don't recall what, if anything, Dworkin thinks about interpreters who try to fit text to the meaning that would be assigned by another community of which they are not members.)
Posted by: bdg | Jul 8, 2008 10:20:21 AM