Thursday, May 09, 2013

Interview with Brian Dalton about Above the Law’s New Rankings

You’ve no doubt heard about the new Above the Law Top 50 Law School Rankings. But have we really had the chance to scrutinize them to death? ATL itself has done the job for us, to some extent, with self-criticism here and here. But perhaps you still have questions? We here at Prawfs did, and ATL’s rankings guru Brian Dalton was kind enough to answer them. Brian is a graduate of Middlebury College and Fordham Law. He joined ATL’s parent organization Breaking Media in October 2011 after spending seven years at Vault.com, most recently as Director of Research and Consulting. Before that, he was, among other things, an associate at a Manhattan law firm, a French teacher in Brooklyn, a Peace Corps volunteer in Mali, and a security guard at a waterslide park in Albuquerque, NM. Here is our discussion.

Why did you use only SCOTUS and federal court clerkships? Is there data out there on state court clerkships?

For the federal clerkships, solid data was available and we thought it made sense to use it to augment the "quality jobs" metric. The SCOTUS clerkship stat really serves to differentiate among the very top schools -- it's not much of a factor outside the top 10-15. State court clerkships are accounted for in the overall employment scores.

It looks like you double-count federal clerkships -- both as an individual factor and in quality jobs as well. Why?

We didn't -- federal clerkships are a component of the "quality jobs" score, and federal judgeships are a standalone metric.

My apologies -- I read the "federal judgeships" category as a "federal clerkships" category. Is there data on state judgeships or state supreme court clerkships? If so, did you consider using that data?

We did consider state court clerkships, but the data sources all lumped "state and local" clerkships together, so we stuck with federal. We did not consider state judgeships, but that doesn't rule out our looking into them as a factor in the future. We feel we've created a useful ranking, but of course we know it can be improved, and we are benefiting from all the feedback we've received so far.

How and when did you conduct the ATL alumni survey? What was your overall response rate? What was the average number of responses per school? Do you have max and min numbers (school with the highest number of responses & school with the lowest number of responses)?

We've been conducting the survey since March of last year; we typically promote it through research-based posts on the main ATL page. We set minimum thresholds based on the size of the school, and all 50 ranked schools exceeded the minimum. We've received about 11,000 responses to date. The lowest threshold is about 40 responses, though many schools have hundreds. [Ed. – If you want to take the survey, you can go here.]

In the factor on education price, did you just use the sticker price for tuition, or did you take the scholarship discounts into account? Did you also include living expenses? Where did you get this data?

No scholarships or aid were taken into account. COL was accounted for only in the case of schools where the majority of employment placement was in the local market. COL data came from the Q3 2012 Cost of Living Index published by the Council for Community and Economic Research (October 2012).

I'm a little confused about how you used the COL information. Did you use it to modify the cost of tuition, or did you incorporate a set of expenses meant to cover living expenses (as a lot of folks do in coming up with approximate debt levels)?

The "Cost" metric in our rankings is not equivalent to "tuition," it's the non-discounted, projected cost for the span of a student's debt-financed legal education. It includes indirect costs (room/board/books) as well. We used COL data to modify this projected cost for schools where most grads work in the local market.

Dan Rodriguez of Northwestern has already questioned your exclusion of "JD Advantage" jobs. What do you think about his criticisms?

Dan Rodriguez makes many fair points, and we will always be looking to refine our approach, but I have to assume parsing out the "good" non-legal jobs from the rest would require a level of engagement from the schools that I can't imagine is forthcoming. Over the past year, we've spoken to many law school deans about how to make that distinction and heard many interesting ideas, but I have doubts about whether anyone will share data with us. We will ask next time around.

Can you take us through the individual factor scores for one of the schools and show us how you came up with the overall numerical score?

The perfect overall "ATL score" is 100. Each school is awarded a maximum number of points based on the weight of each metric (a maximum of 30 points for the highest Employment Score, 15 points for the lowest cost, etc.). The points are awarded on a sliding scale from highest to lowest, and those points add up to the total ATL score seen on the rankings table.
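
[Ed. – For readers who want to see the arithmetic, here is a minimal sketch, in Python, of how the per-metric points add up to an overall ATL score. Only the 30-point Employment Score, 15-point cost, and 7.5-point judgeship/clerkship maximums come from the interview; the other metric names, weights, and point values are placeholders.]

    # Rough sketch (not ATL's actual code) of per-metric points summing to the
    # overall ATL score. Only the 30-, 15-, and 7.5-point maximums are from
    # the interview; the remaining names and values are placeholders.
    school_points = {
        "employment_score": 27.0,     # out of a 30-point maximum
        "quality_jobs": 24.0,         # maximum assumed, not stated above
        "cost": 11.0,                 # out of 15
        "alumni_rating": 8.0,         # maximum assumed, not stated above
        "scotus_clerkships": 5.0,     # out of 7.5
        "federal_judgeships": 6.75,   # out of 7.5
    }

    atl_score = sum(school_points.values())   # a perfect score would be 100
    print(f"ATL score: {atl_score:.2f} / 100")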

Did you feel the need to jigger your factors in order to get the traditional top schools (Yale, Harvard, Stanford) on top? A cynic might say the SCOTUS and federal clerk scores were ways of getting the traditionally high-ranked schools up there. I mean -- there's no way that Lat is putting out a ranking that doesn't have Yale on top, right?

A cynic might say that. But we didn't game the various weights in order to achieve a specific result. I would suggest that the traditional top schools really are the top schools and any sound rankings approach will confirm that.

How many schools did you look at in making your rankings? Why did you decide to cut it off at 50? Do you have rankings for beyond 50?

We have rankings for over 100 schools. We looked at about 150 schools. We made an editorial judgment call to cut it off at 50 -- we felt that there are only so many "national" schools for which meaningful comparisons can or should be made.

Did you use the USNWR rankings to come up with the 150 or so that you ranked, or did you use another metric?

No. Since last summer, we've had our own directory of schools in our Career Center with about 150 schools profiled, so that was the starting point.

One somewhat vague set of nuts-and-bolts questions -- how exactly did you transform the data into numerical scores? So, for example, the federal judgeships score: How does a school get a score of 7.5 -- by having the highest score? What if I came in second? What if I came in last?

If you came in first place you would get the maximum number of points awarded in that metric (so, for federal judges you would get 7.5 points). If you had no federal judges, you would get zero points for that category. Anyone in between is awarded points between zero and 7.5 in accord with their rank.

So just to pose a hypothetical, let's say you had ten schools with the following number of federal judgeships:

  • A: 30
  • B: 27
  • C: 20
  • D: 18
  • E: 15
  • F: 10
  • G: 6
  • H: 5
  • I: 0
  • J: 0

How would you score those?

Let's assume the numbers you provided above were being used for one of our metrics for federal judges or SCOTUS clerks (weighted 7.5%). The points awarded would look like this:

  • A: 7.5
  • B: 6.75
  • C: 5
  • D: 4.5
  • E: 3.75
  • F: 2.5
  • G: 1.5
  • H: 1.25
  • I: 0
  • J: 0

This is not an entirely accurate picture because we work in percentages rather than raw numbers (i.e., the % of all federal judges who came from school X), but this gives you an idea of how the scoring system works. The amount of variance between the scores directly relates to how "far apart" the raw numbers are.
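
[Ed. – Working backward from the worked example, the points scale linearly with each school's raw count relative to the top school (7.5 × count ÷ 30). Here is a minimal sketch, in Python, that reproduces the numbers above; per the answer, ATL actually uses each school's share of all federal judges rather than raw counts, but the proportional scaling works the same way.]

    # Reproduces the worked example above: points scale linearly from zero up
    # to the metric's 7.5-point maximum, in proportion to the top school's
    # raw count. (ATL works in percentages rather than raw counts, but the
    # proportional scaling is the same.)
    MAX_POINTS = 7.5
    judgeships = {"A": 30, "B": 27, "C": 20, "D": 18, "E": 15,
                  "F": 10, "G": 6, "H": 5, "I": 0, "J": 0}

    top = max(judgeships.values())
    points = {school: MAX_POINTS * count / top
              for school, count in judgeships.items()}

    for school, pts in points.items():
        print(f"{school}: {pts:g}")   # A: 7.5, B: 6.75, C: 5, D: 4.5, ...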

Is there any data you would really like to get for next year's rankings?

We would really like to know the default rates for federally backed student loans for individual schools. Right now, the default data is segmented only by something called an “Office of Postsecondary Education Identification Number,” so the stats for individual graduate schools within a university system do not exist, at least as far as the DOE is concerned. In other words, for the purposes of tracking the default rates at, for example, Harvard, the DOE lumps all the alumni of the business, medical, law, divinity, and all the other grad schools into the same hopper, with no way to untangle the data.

Elie Mystal said that "Next year will be even better!" Anything you've already decided to do differently for next year's rankings?

No decisions have been made; we are still sorting through all of the feedback.

Posted by Matt Bodie on May 9, 2013 at 01:13 PM in Life of Law Schools | Permalink

Comments

Why are we discussing their rankings as if they have significance?

Posted by: Anon | May 9, 2013 9:25:17 PM

Matt (or Brian, if you're watching for comments), were the folks at ATL aware when they put together their methodology that US News publishes "average debt load at graduation" rankings? They are here: http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-law-schools/grad-debt-rankings . One of the ATL "self-criticism" posts claimed that average-debt data is not available.

Also, I like the idea of working default rates into the methodology, but I doubt that there will be many actual defaults, other than at the lowest-performing schools. It would probably be better to get both default rate data and data on what percentage of each school's graduates are on income-based repayment plans (ideally, both one year and five years out). The latter would be a reasonable proxy for the cost-benefit of a particular school's degree.

Posted by: Scott Bauries | May 13, 2013 6:38:03 PM

Certainly ATL's outcome-based analysis offers more to the prospective student than USNWR's, but it's not without its faults. Many have been mentioned, but there's one I haven't seen that needs to be.

ROI is important, not price!!!

Put in other terms: a ranking with price legitimizes comparison between a Timex & a Rolex, whereas these two belong in different categories. The onus is on the consumer to decide what each good's value is to them before being separated from varying amounts of their money, with rankings based solely upon quality and the caveat of price (important nonetheless).

Price doesn't belong in the equation, but even so, sticker loses relevance when we know the % receiving discounts & the median rates. It's obviously not uniform per student, but an average cost is feasible to find. A simple example: students' bills at U of Illinois will average around $19K-$24K (99.4% receive a scholly, median = $19K off of the $38K-$45K sticker).

Using this calculation and integrating the NALP salary figures for graduates, you can get a rough bang-for-the-buck figure on the value of that school's diploma. Regardless of whether it's the grad, the dad, the rich uncle, or Uncle Sam footing the bill, a degree is a financial investment (despite what Lawrence Mitchell may have us believe) upon which results are judged in $. If someone goes through an educational process of 20 years simply "to enhance the non-pecuniary benefit to society" or some such garbage, they deserve a swift kick to the groin.

Sorry, back to the substance:
A couple of ((Sticker - Discount)*3 + (NALP * Years of practice)) calculations later, assuming that the graduate will be in the labor force for more than 5 years, you'll find that Yale at $100K is still a better investment than Cooley at 100%.

Posted by: Anonymoose | May 20, 2013 12:56:50 AM
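
[Ed. – For the curious, here is a rough sketch, in Python, of the kind of back-of-the-envelope comparison the commenter above describes, read here as net value (projected earnings minus net cost of attendance) rather than the literal sum in the comment. All salary and cost figures are placeholders, not NALP or school data.]

    # Rough net-value comparison in the spirit of the comment above: projected
    # early-career earnings minus the net (post-discount) cost of the degree.
    # All figures below are placeholders for illustration only.
    def net_value(sticker, discount, median_salary, years=5):
        net_cost = (sticker - discount) * 3    # three years of law school
        return median_salary * years - net_cost

    # A pricey degree with strong salaries vs. a free one with weak salaries.
    print(net_value(sticker=55_000, discount=22_000, median_salary=160_000))  # 701000
    print(net_value(sticker=45_000, discount=45_000, median_salary=50_000))   # 250000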
