
Monday, October 26, 2009

U.S. News Surveys Out; Info Available Here

Late last week, law professors everywhere -- four at each school -- received the annual U.S. News survey asking them (as well as lawyers) to assess the quality of every JD program in the country on a scale of 1-5. In ranking law schools, U.S. News considers "input" measures like the LSAT score and GPA of incoming students, and "output" measures like bar passage and employment rates. In between, and also part of the formula, is an attempt to assess the "value added" by a particular school relative to others. To get at this, U.S. News primarily uses this survey. The idea is "ask the experts," despite the fact that few law professors or lawyers know much of anything about more than a handful of schools. And this survey accounts for 40% of a school's total score, dwarfing any other factor. 

The conventional wisdom is that law schools pay too much attention to the U.S. News rankings. But for most law professors and lawyers, I think the conventional wisdom is exactly wrong: we haven't paid enough attention, and should pay more. That's the premise of the project that fellow Prawfs guestblogger Dave Fagundes and I started last year, Race to the Top, which aims to leverage the attention the rankings already get to focus it on the educational quality of J.D. programs. This kind of focus also makes sense in light of the ABA's new, much-welcomed emphasis on outcome measures and assessment in law schools.

In part for U.S. News survey respondents to use, we've aggregated data on our website relevant to the educational quality of each school. The data is broken down into five principal categories: student engagement, curriculum (focused on experiential learning), use of best practices, student-faculty contact, and legal writing programs. Another great resource, drawing on some of the same data, is the set of charts that TaxProf Blog's Paul Caron posted last week using the valuable student survey data from The Princeton Review. The chart comparing the Princeton Review data to U.S. News can quickly help identify "underrated" law schools, which should be given a "4" or "5" in the survey.
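To make that comparison concrete, here is a minimal sketch of the kind of gap Caron's chart highlights; the school names and ranks below are invented for illustration and are not his actual data.

```python
# A sketch, with invented numbers, of the comparison behind Caron's chart:
# flag schools whose Princeton Review student-survey standing is much better
# than their U.S. News rank as candidates for "underrated."
usnews_rank = {"School A": 20, "School B": 45, "School C": 80, "School D": 95}
princeton_review_rank = {"School A": 25, "School B": 15, "School C": 30, "School D": 90}

for school, usn in usnews_rank.items():
    gap = usn - princeton_review_rank[school]  # positive = better Princeton Review standing
    if gap >= 20:
        print(f"{school}: U.S. News #{usn}, Princeton Review #{princeton_review_rank[school]}"
              " -- possibly underrated")
```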

Prawfs readers: if you know law professors who would have received this survey (deans, associate deans for academic affairs, hiring committee chairs, the most recently tenured), or law firm hiring partners and recruiting coordinators (who may not have received it just yet), it would be great if you sent them the link to this post and encouraged them to use the data in filling out the survey. We'd welcome suggestions for the project going forward, and I look forward to talking more with many of you about these issues back here at Prawfs, and at our own website, in the months ahead.

Posted by Jason Solomon on October 26, 2009 at 10:07 AM in Life of Law Schools | Permalink


Comments

Jason (if I may), I had a quick question about the factors that you consider in your study. Do you take scholarship into account as a factor affecting a law student's experience with a law school? I did not see a separate category for it, though I did see a few points of possible connection within some of your other categories -- student engagement and curriculum primarily.

I understand that U.S. News does not assess scholarship as its own category, though presumably the rankings by professors themselves that you discuss in the post will play a role there. If you don't consider scholarship per se, I am curious as to why. This has seemed, to me at least, to be a major failing of the U.S. News approach, though I do not often hear reformers of the existing rating systems emphasize the importance of scholarship. In fact, what I've more commonly heard is skepticism that a teacher's superior, or inferior, scholarship substantially affects a student's educational experience at all. What's your sense on this question?

Posted by: Marc DeGirolami | Oct 26, 2009 1:15:52 PM

Marc, thanks for the comment and questions. On scholarship, that's right -- it's not included because of the research indicating that greater productivity has no correlation with an improved educational experience for students. See, e.g., Ben Barton's study here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=913421

I think measuring and rating schools for scholarly productivity and impact is hugely important (and kudos to Leiter and others for taking it on), but it just doesn't have anything to do with U.S. News, which is for prospective students and employers, and focuses on the quality of the JD program.

Posted by: Jason Solomon | Oct 26, 2009 1:52:00 PM

It strikes me as naive (if not worse) to take the Princeton Review student survey data as in any way 'valuable' or useful. I can't really believe that you are suggesting that people consider the Princeton Review information as part of filling out the reputational survey form for U.S. News.

I wholeheartedly agree with the general project to improve/inform the U.S. News 'reputation' score. There are many ways one might do this. One would be to provide evaluators with hard 'objective' numbers for each school regarding clerkship placement, recent grad employment in Vault 100 law firms (or whatever), bar-passage rates, number of current public defenders/AUSAs who graduated from the school in the past 10 years, number of recent faculty publications in top 50 Law Reviews (or whatever), percentage of students who complete the JD, transfer in/transfer out ratios, and so on. One might even want to try to take into account factors such as student engagement, student-faculty contact, and the quality of legal writing programs, to the extent that these could somehow be measured and evaluated.

But using the Princeton Review student survey data is NOT a good way to do this.

First, the surveys are not administered in remotely the same way at every school, nor do all schools take them equally seriously. This leads to massive and implausible discrepancies in the data--are we really to believe that students at Villanova 'study' 7.5 hours per day while students at Harvard study 3.6 hours per day and students at Yale study 1.5 hours per day, as the surveys purport to report? Or that students at Loyola-Chicago study 6.6 hours per day, while students at U. of Illinois study just 1.5 hours per day? And if the numbers are obviously off with respect to something as simple as study hours, there are real reasons to think things will be worse with other categories.

Second, Princeton Review does not release the response rate for each school; it is quite possible (if not likely) that it varies widely from school to school, which would perhaps explain the variation above. As a for-profit venture, the Princeton Review, like U.S. News, has an incentive to manipulate things so that the rankings both (a) change from year to year and (b) differ from other available rankings. This kind of incentive means they are quite likely to fiddle with the data collected, be unconcerned about response-rate issues, alter the categories surveyed, and so on, so that they reach results compatible with (a) and (b). Given that they do not publish response rates, details of survey methodology, etc., I think a great deal of skepticism is in order.

Third, the vast majority of students filling out the surveys (all students who haven't transferred) have only the experience at their one law school to go by. As a result, surveys of this sort will be little more than an indication of the extent to which (1) the students want or feel the need to boost the image of their school or (2) their expectations have been met. Regarding (1), it is quite plausibly a function of whether they perceive their school as an 'underdog' along some dimension or not; this seems particularly likely given that some of the biggest 'over-performers' compared to their U.S. News ranking were 'religious-themed' schools such as Ave Maria, Regent, Pepperdine, and BYU. Regarding (2), it should be obvious that students might be quite happy with the education they've in fact received at school A (and fill out surveys accordingly) just because they expected their education at A to be terrible. And the same thing can happen, mutatis mutandis, with students who have high expectations going in... It might be quite hard for the very best schools to live up to the expectations students have for them, without this meaning that the education at those schools is worse than schools where expectations start out very low. None of this tracks the quality of the education at the institutions.

Fourth, and perhaps more cynically (as a recent law school grad): why should we believe that, even if accurate, students' views about how 'interesting' their professors are have much to do with the educational quality of the JD program? As we've just learned, little kids find Baby Einstein 'interesting'--that doesn't mean it's educational...

And this is plainly NOT a case where 'something is better than nothing.' We have no reason to believe that the Princeton Review survey data correlates in any way with any aspect of educational quality. We don't even have reason to believe it correlates with student satisfaction with their JD institution, given the inconsistent way in which the data is collected. As the saying goes: garbage in, garbage out.

Posted by: Anon | Oct 26, 2009 3:36:02 PM

Anon says: "I can't really believe that you are suggesting that people consider the Princeton Review information as part of filling out the reputational survey form for U.S. News" and "this is plainly NOT a case where 'something is better than nothing.'"

I disagree -- but I think Anon makes some good points about the limitations of the PR data, and to the extent my post suggests that we think PR should be used as the exclusive reference point, that was misleading and wrong. I was simply saying that if you're going to spend five minutes filling out the survey, looking at Caron's charts is not a bad way to go.

His PR v. US News chart points to schools like Texas Tech, Mercer, and BYU as underrated on the survey -- that's consistent with other data I've seen and anecdotal evidence, and relying on this as a quick proxy is far better than the way most people fill out the survey.

On all the good suggestions for output measures, the problem is that they don't control for inputs -- we need a way to get at the "value added" by one school relative to another.
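To make the "value added" idea concrete, one standard way to operationalize it -- a sketch only, not our project's actual method, with invented numbers -- is to regress an output measure on an input measure and treat each school's residual as a rough indicator of over- or under-performance:

```python
# A minimal sketch (not the project's actual method) of a "value added" estimate:
# regress an output measure on an input measure; each school's residual suggests
# how it performs relative to what its inputs predict. All numbers are invented.
import numpy as np

schools = ["School A", "School B", "School C", "School D", "School E"]
median_lsat = np.array([170.0, 165.0, 160.0, 155.0, 150.0])   # input measure (hypothetical)
bar_passage = np.array([0.95, 0.88, 0.86, 0.74, 0.72])        # output measure (hypothetical)

# Fit bar_passage ~ a + b * median_lsat by ordinary least squares.
X = np.column_stack([np.ones_like(median_lsat), median_lsat])
coefs, *_ = np.linalg.lstsq(X, bar_passage, rcond=None)
predicted = X @ coefs

# A positive residual means the school does better than its inputs alone predict.
for school, resid in zip(schools, bar_passage - predicted):
    print(f"{school}: {resid:+.3f}")
```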

Posted by: Jason Solomon | Oct 26, 2009 5:07:23 PM

To echo a few points in Jason's response, the project certainly does not seek to mechanically ratify the results of _any_ ranking of law schools. The PR data have been much maligned, and I'm more skeptical of them than Jason, but I agree that they should be out there simply to enrich the discussion about how we do (and should) evaluate law schools. This thread is an object lesson--agree or disagree about PR data quality, we can all agree (I think) that it's good to have a robust discussion about their pros and cons. What we want to avoid, and what I hope the project will discourage, is treating the USNWR rankings as a mindlessly self-reinforcing process, ranking schools one year simply by reference to where they were ranked the previous year.

Posted by: Dave | Oct 27, 2009 12:25:34 AM

Jason, thanks for your response. I had a look at Professor Barton's study, and I do not agree that it stands for as strong a proposition as your claim that scholarly productivity and impact have no effect on the quality of legal education. Or, if it does stand for that proposition, then I do not agree that it has proved it. A few points.

Barton's study looks at student evaluations of teaching, focusing especially on the common evaluation question dealing with "teaching effectiveness" and its variations. Barton acknowledges, however, that there are many reasons that a student rates a professor highly. Some of these have been shown in other studies to be improper reasons: biases based on race, gender, and physical attractiveness, for example.

Even if one sets these sorts of objections to the side, it is not at all clear to me that "teaching effectiveness" in the minds of students filling out an evaluation necessarily correlates precisely with quality of instruction (which, as you rightly say, is what students and employers are concerned to measure). There are all sorts of reasons that an instructor might be deemed effective. Some of these might have to do with the substance of the education itself. Others might include energy, engagement with student concerns and questions, light reading assignments, buying the students dinner at the end of class, and so on.

Now, Barton has answers for these sorts of critiques of the possibility of measuring teaching effectiveness: among them, that student evaluations are the best we've got, and that it is wrong and patronizing to claim that students are incapable of measuring effectiveness.

But even if we put these disputes to the side (as I wish to), it is surely true that some, perhaps even many, students may not know whether and how the teacher's knowledge and expertise in a field is affecting their learning experience. For many, it will be their first introduction to a subject. And even if they do have a sense of it, that expertise may very well be overcome by other factors (how nice the professor is, or how accessible, or how clearly he or she presents the material) when it comes to rating 'effectiveness.'

Another point is that Barton's is only one study, covering 19 law schools. While the results are interesting and the study was well done (so it seems to me, at least), it seems very much too soon to leap to the conclusion that scholarship has absolutely no relationship with teaching. Many more such studies would be needed, over time, measuring many other schools and looking to different sorts of indicia, to make that claim definitively.

All of this is to say that "teaching effectiveness" may well be "the best we've got" to measure whether scholarship affects one's learning experience, but that's not good enough to make the claim that scholarship (better or worse types of it) is completely irrelevant to one's learning experience.

Finally, I wonder how many prospective law students would agree that they simply don't care at all whether their professors are cutting edge experts in their field. My (completely untutored and beginner's) sense is that many would find it relevant information in making their choices.

I'd be curious about your thoughts on any of these points.

Posted by: Marc DeGirolami | Oct 27, 2009 9:49:48 AM

Marc, thanks again for your response. Mine here is an overly quick one, but I think this is an important conversation to continue: my impression is that throughout higher education, there's a consensus among the existing studies that greater research productivity is not correlated with better learning outcomes for students. My apologies for not having cites on hand -- maybe others do.

A higher-ed proposal similar to our project was done by a think tank called Education Sector. See their proposal here, http://www.educationsector.org/usr_doc/CollegeRankingsReformed.pdf -- the thrust of the proposal is on pp. 21-23 of the pdf. The key instrument is the National Survey of Student Engagement and its law-school equivalent (LSSSE), but that approach is hindered by the data not being public. That's why we use the Princeton Review data at the moment.

I think it would be great if others joined Leiter/Yelnosky and filled in the gaps to do a more comprehensive comparison of law schools' scholarly productivity and impact.

Posted by: Jason Solomon | Oct 27, 2009 11:40:48 AM
