Monday, July 14, 2008

How To Compare Value Added Across Law Schools

Last week, I made the case for creating a race to the top of the law schools on "value added" for students by using the U.S. News survey of academics -- and lawyers and judges, for that matter -- to assess schools on this basis. The basic reaction I've gotten so far has gone quickly to second-order questions about how exactly to do this, with skepticism about whether it's even possible. As Russell Korobkin (UCLA), a leading scholar on the rankings, once said: "no one has the foggiest idea how to judge objectively the quality of legal education across law schools." But Korobkin's view, though I'm sure commonly held and reasonable, turns out to be close to 100% wrong.

First, let's state what should be obvious: the purpose of a legal education is to prepare students to practice law. So how do we figure out which schools fare better and worse in achieving this goal? Ideally, we would want a large data set with information about inputs -- both the incoming credentials of students and educational inputs -- and outcomes (bar passage rates, lawyer effectiveness, job satisfaction, salaries, etc.), so we could conduct an analysis that isolates the school-level impact on those outcomes.

In the absence of such data, what we can do -- and what many do in the undergraduate context, for example -- is assess educational institutions in part based on their use of "best practices" that are correlated with higher learning outcomes. One thing that we know, for example, is that increased "student engagement" is associated with better outcomes -- that is, graduates who are better prepared to practice law -- which is precisely what the excellent Law School Survey of Student Engagement (LSSSE) has been helping 148 law schools measure and work to improve since 2004. And student engagement is affected significantly by an institution’s programs and practices.

So overall, how do we compare "value added" for students? I thought surely someone had written on this in the law school context, but a search revealed little -- so unless and until someone points to other references, let's start with the following proposition:

The value added of one school versus others for a student is a weighted combination of four basic elements:
- the relative educational quality (60%),
- quality of and participation in extra- and co-curricular activities (10%),
- quality of the career advising and assistance (15%),
- the alumni network, present and future (15%).

Are these the right elements? I would welcome thoughts. But how did you get these percentages? I made them up; let's talk about how they should be different. And yes, they're going to be somewhat rough -- but remember, all we're trying to do is reason our way to a 1-5 assessment and relative ordering in a particular market. (A rough sketch of the arithmetic, in code, follows the second list below.)

Okay, but how are we even going to begin to deal with the big enchilada, relative educational quality? Again, please point me to existing research if you know it (I didn't find much on law schools specifically), but let's start with the following proposition on relative educational quality:

The relative educational quality of one school versus another for a student is a weighted combination of four basic elements:
- the relative pedagogical skill of the faculty (20%);
- overall classroom experience and student engagement (20%);
- strength of the curriculum, particularly the degree to which it adequately prepares students to practice law in the 21st century, using "best practices" in legal education identified in the report by the same name and the Carnegie Foundation report (40%); and
- efforts to prepare students for the bar (20%).

Note that the relative weight of this last element probably ought to vary considerably between schools with high and low bar passage rates.
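
For concreteness, here's a rough sketch of the arithmetic in code. The weights are just the placeholders proposed above, and the 1-5 component scores are hypothetical numbers an assessor would supply -- nothing here is a finished formula, just a way to make the inputs explicit.

    # Illustrative only: placeholder weights from the lists above, hypothetical 1-5 scores.
    EDU_QUALITY_WEIGHTS = {
        "pedagogical_skill": 0.20,
        "classroom_engagement": 0.20,
        "curriculum_strength": 0.40,
        "bar_preparation": 0.20,  # should probably vary with a school's bar passage profile
    }

    VALUE_ADDED_WEIGHTS = {
        "educational_quality": 0.60,
        "extracurriculars": 0.10,
        "career_advising": 0.15,
        "alumni_network": 0.15,
    }

    def weighted_score(scores, weights):
        """Weighted average of 1-5 component scores."""
        return sum(scores[name] * weight for name, weight in weights.items())

    # Hypothetical assessments of a single school, each on the 1-5 scale.
    edu_scores = {
        "pedagogical_skill": 4,
        "classroom_engagement": 3,
        "curriculum_strength": 4,
        "bar_preparation": 5,
    }
    overall_scores = {
        "educational_quality": weighted_score(edu_scores, EDU_QUALITY_WEIGHTS),
        "extracurriculars": 3,
        "career_advising": 4,
        "alumni_network": 4,
    }

    print(round(weighted_score(overall_scores, VALUE_ADDED_WEIGHTS), 2))  # prints 3.9

Reasonable people will disagree about both the weights and the component scores; the point is only that the inputs are explicit and arguable, rather than arrived at by gut feel.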

I’m sure this can be strengthened in various respects -- for example, the first two elements clearly overlap -- and I look forward to your views (including and especially students and lawyers). What do you think?

Posted by Jason Solomon on July 14, 2008 at 09:55 AM in Life of Law Schools | Permalink

Comments

I concur with Geoff that assigning any system of weights to multiple factors almost always produces an inscrutable and bizarre result. Every time I've produced composite rankings from several factors, I've wound up feeling that they were more trouble than they were worth. For the sake of simplicity, and unless you have really good reasons to do otherwise, it might make sense to just give all factors equal weight and leave it at that.

What's more important is that any overall list shows the scores in each category for each school, and that you provide transparent access to sorted listings in each category. That way users can quickly drill through to what factors they consider most important. Composite rankings have the most value in simply bringing factors together for easy review, and ought to discourage the making of fine and specious distinctions between schools with similar overall scores.

Posted by: Michael Shaffer | Jul 15, 2008 1:01:20 PM

Lou wrote: "Suppose X has the goal of obtaining a prestigious clerkship followed by a big firm job, entry into the DOJ honors program or working for an international NGO. Despite the 'better' educational program at Georgia, I would advise X to go to Yale. Why? The networking opportunities are such that X is much more likely to obtain her goals by attending Yale -- educational quality be damned. X is a smart cookie; she can pick up whatever she needs to learn while in practice, right?"

I would absolutely concur with your advice given the choice you posed. But I'm not convinced that programs high in theoretical content can't also have high practical effect. In fact, I think the two go hand in hand, even though I know many would argue that they cannot or should not.

Also, most candidates will wind up choosing not between schools with radically different standings like Yale and Georgia, but between schools with similar networking opportunities and such (similar "prestige," for lack of a better word). Candidates who get into Yale will also likely have Harvard and Stanford to choose from, as well as Columbia, Chicago, NYU, et al., with large scholarships. So they would have much use for a genuine quality measure for breaking ties and sorting peer schools out from one another.

A genuine quality measure, one widely relied upon by candidates, would inspire schools even at similar and very high levels of "prestige" to compete with each other on "quality." And a useful quality metric would allow candidates to make much more informed cost-benefit choices between schools that are otherwise hard to distinguish with existing rankings. That would have a direct effect in accomplishing Korobkin's suggested goal of using rankings to encourage schools to produce a public good (high-"quality" education) that might otherwise be less available than we would like.

Posted by: Michael Shaffer | Jul 15, 2008 12:47:46 PM

Lou and Michael, very helpful comments, thanks. On the relative weight of educational quality vs. networking, it might depend on the relevant market. Your example, Lou, of Georgia v. Yale is an interesting one, but from the perspective I'm taking here, it's really two different markets -- I'm sure there are students who make such a choice, I just don't know how many, and I'd be inclined to think they're sufficiently aware of the tradeoffs that we shouldn't worry too much. Michael, if by chance you still have the rejected comments (it wasn't me!), please do email them to me ([email protected]) if you have a chance. Thanks again.

Posted by: Jason Solomon | Jul 15, 2008 11:38:39 AM

I like the idea of using student engagement metrics, especially if they correlate with outcomes. If that's the case, then you're both measuring something proportional to actual value (from the perspective of students) and encouraging schools to take actions we would actually desire in order to move up in the rankings. The LSSSE data also has the virtue of being relatively comprehensive and available today. It seems we could easily expand it to cover all schools, which may not be easy to do with certain otherwise attractive surveys and rankings out there. If we think there is overlooked value, then it's critical to cover all schools. Measures which only give the "Top X" in a category seem, if anything, even more prone to amplifying the echo chamber effect of current rankings.

I am not sure if NALP or LSAC include MBE scores in their recent and ongoing longitudinal studies. If they do, and if you can get paired LSAT and MBE scores out of their public-use datasets, then that would allow for some really interesting comparisons of inputs vs. outputs. As Jim G suggests above, and as noted in Bill Henderson's paper linked there, this would be about as close as we're going to get to a genuine measure of educational effectiveness with regard to preparing students to pass the bar and go into practice. This is certainly not the only dimension of "quality" that we should measure. But as a student, I would argue that doing a good job in this respect is at least a necessary condition for a high-quality program. This sort of metric would allow schools both to get credit for instructional efforts that go overlooked today and to attract students who value those efforts.

I posted some other comments earlier (yesterday), which were flagged as "comment spam" by TypePad (I assume because I included several links in the comment body). I guess they didn't make it through the queue.

Posted by: Michael Shaffer | Jul 15, 2008 11:25:26 AM

What "value" is added will probably depend upon whom you talk to. You rank alumni networking at 15% of the overall value added while ranking educational quality at 60%. This seems off to me if a student's goal is to capture a fancy job after law school. (This is the overt goal of most of my students.) Alumni networks and reputation matter much more.

For example, suppose Yale's academic program had little to do with the practice of law (or passing the bar exam), having much more to do with studying theories of justice as if it were a PhD program in philosophy. Thus, we could posit that the educational quality relative to preparation for practice was not the best at Yale. Also suppose that Georgia Law had the nation's best academic program in terms of preparation for practice and passing the bar. Student X gets accepted into both schools. Suppose X has the goal of obtaining a prestigious clerkship followed by a big firm job, entry into the DOJ honors program or working for an international NGO. Despite the "better" educational program at Georgia, I would advise X to go to Yale. Why? The networking opportunities are such that X is much more likely to obtain her goals by attending Yale -- educational quality be damned. X is a smart cookie; she can pick up whatever she needs to learn while in practice, right?

I previously taught at a top business school, where I observed this behavior on steroids. At the Michigan B-School, MBA students are interviewed for permanent jobs in their second week of the first year. Yes, that is correct, in the second week of the year. There, the networking potential is everything from the students' point of view (assuming the overarching goal is to gain a fancy post-MBA job). Coursework and other educational programs had no effect on employment outcomes.

Now, the b-school model is perhaps extreme. But all this is to say, if your goal is to present a meaningful guide for students (a goal I find worthy), it is probably best to do some research into what "value" the students want added. Educational quality may not matter much to them. (Or if it does matter greatly, it may matter only indirectly insofar as it affects networking opportunities.)

I am enjoying this series and thank you for starting the discussion.

Posted by: Lou Mulligan | Jul 15, 2008 10:30:37 AM

Geoff, thanks for the comment -- totally agree on predicted v. actual bar passage rate, and will incorporate that in proposal.

I tried to indicate (but appear to have failed) that the weight of bar passage ought to vary considerably depending on what market you're in. So for Columbia v. NYU, who cares -- everyone passes -- but lower down, it should matter a lot.

On "multiple factors," I'm not proposing a separate ranking. I'm just trying to help people fill out the existing U.S. News survey that says: give a school a 1, 2, 3, 4, or 5 based on quality of their program. Right now, my impression is people do it on gut/random impressions (when not being strategic): couldn't be more arbitrary.

On extracurrics, I'm not wedded to it. But I think research on "student engagement" shows that things like journals/moot court can help learning outcomes. Maybe should count less.

p.s. Just read and liked Recklessness, by the way. Nice work.

Posted by: Jason Solomon | Jul 14, 2008 1:20:12 PM

Great post, and great series of posts.

The simplest way to measure "value added" would seem to me to be to calculate, based on each school's entering student LSAT and GPA data (for FT and PT students), the "predicted" bar failure rate for each school in its lead jurisdiction. E.g., one would expect a school with a GPA/LSAT average of 3.8/170 to have a __% failure rate. Then, for every school, compare its actual failure rate to its predicted failure rate. The schools with "value added" would have actual failure rates below the predicted level; the schools that were "value destroying" would have actual failure rates above the predicted level.
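
In code, the comparison might look something like this toy sketch (the numbers are made up, and a simple least-squares fit stands in for whatever model one would actually use on real ABA entering-credential and bar-passage data):

    # Toy sketch: predict each school's bar failure rate from its entering credentials,
    # then compare actual to predicted. All data below are made up for illustration.
    import numpy as np

    # school -> (median LSAT, median GPA, actual first-time bar failure rate)
    schools = {
        "School A": (170, 3.80, 0.04),
        "School B": (165, 3.60, 0.09),
        "School C": (160, 3.40, 0.15),
        "School D": (155, 3.20, 0.30),
        "School E": (152, 3.10, 0.33),
    }

    X = np.array([[lsat, gpa, 1.0] for lsat, gpa, _ in schools.values()])
    y = np.array([fail for _, _, fail in schools.values()])

    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    predicted = X @ coefs

    for (name, (_, _, actual)), pred in zip(schools.items(), predicted):
        verdict = "value added" if actual < pred else "value destroyed"
        print(f"{name}: actual {actual:.2f} vs. predicted {pred:.2f} -> {verdict}")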

This is a simple approach, but of course flawed in that it presumes bar failure rates are a suitable proxy for educational quality. Law school is about more than just passing the bar, although for most law schools, bar passage would seem to be an outcome of quality. Even better would be for the NCBE to release actual scores on the multi-state portions of the test -- students' actual scores (rather than the "yes" or "no" of passing/failing) could be compared to predicted scores.

The most serious flaw in any ranking system that purports to employ multiple variables is that the assignment of a particular weight to any variable is arbitrary and indefensible. This is why the Leiter approach and SSRN rankings -- which rank schools separately along different dimensions -- have so much more meaning and rigor than US News (in spite of the well-documented limitations of citations, placements, and downloads as measures). Why should "reputation" among academics be of a certain weight? Or employment at graduation? Your suggestion that we rank educational "quality" using a similar combination of weighted factors perpetuates this problem. Why is practice preparation twice as important as the bar? If you can't pass the bar, your practice skills are pretty useless. Why do extracurriculars matter at all, given that most are run by other students rather than the institutions/faculty? But attempting to rank schools separately on any of these indicators would certainly provide an additional perspective on law school quality.

Posted by: Geoff | Jul 14, 2008 12:49:04 PM

Bill Henderson wrote a paper about two law schools whose output greatly exceeds their inputs:

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=954604

Posted by: John Steele | Jul 14, 2008 11:21:40 AM

I see a couple of problems with this approach. First, it looks a little circular, at least in probable application. The ultimate question is the "quality" of a school -- one possible measure of which is how well the school prepares students, or the "value added." But your approach tries to measure that value added largely by the "educational quality," which brings us back to the hard-to-answer question: just what is "quality"?

My other complaint is that your value-add calculation measures method, not results. It presumes some "right" way to prepare students, and rewards any school that follows that one path. If we're comparing how well a school adds value, the most direct approach would be to find a way to measure the difference between incoming and outgoing "values." That may or may not be possible (we have an obvious nationally-standardized measurement of incoming students; outgoing stats are harder to compare between schools), but I think it's the right way to frame the question.

Posted by: Jim G | Jul 14, 2008 11:20:36 AM
