Wednesday, September 21, 2011
What is the Point of Law School Grading Curves if They're Not (More) Fixed?
The way I see it, there are basically 2 points to having grading curves in law school:
(1) We want a forced distribution of grades so that relevant players can distinguish among students in the top, middle, and bottom of the class (e.g., schools can decide which students make which journals and employers can decide which students get which jobs); and
(2) We can ensure that there is fairness across sections. If students in Section A have a torts professor who has a median GPA of 2.8 every semester and students in Section B have a torts professor who has a median GPA of 3.2 every semester, it is easy to see how unfairness results.
Of course, we may (and many do) question under (1) how much of a role a student's GPA/class rank should play in journal/hiring/other decisions. And we may question under (2) whether a grading curve does produce fairness. What if students in Section A just happened to be "better" than students in Section B, whether intentionally or accidentally?
But here's the thing under (2): From what I've seen, most law school grading curves aren't like admission fees at museums, i.e., they're not fixed. Instead, they're (often extremely) variable. They're like suggested donations at museums: one patron could give the suggested donation of $5, another could give nothing, and a third could give $10. This being the case, what is their point? For example, here is the grading curve for required courses at Seton Hall University Law School (picked at random and at least somewhat similar to curves at many other law schools):
A+ and A 15-25% (see below)
A- and B+ minimum 15% (see below)
B minimum 15% (see below)
B- and C+ 10-25%
C and C- 10-25%
D+, D and F 5-15%
Grades in the first two categories of the foregoing grade distribution schemes should not exceed 50% of the overall class.
Grades in the first three categories of the foregoing grade distribution schemes should not exceed 70% of the overall class.
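For readers who want to see the rules as rules, the curve above can be encoded as a set of band constraints and checked programmatically. This is just an illustrative sketch: the band ranges and the two cumulative caps come straight from the text, but the function name and structure are my own.

```python
# Seton Hall required-course curve, encoded as (band name, min %, max %).
# None means the curve states no explicit bound for that side.
BANDS = [
    ("A+/A",   15, 25),
    ("A-/B+",  15, None),
    ("B",      15, None),
    ("B-/C+",  10, 25),
    ("C/C-",   10, 25),
    ("D+/D/F",  5, 15),
]

def check_curve(counts, total):
    """Return a list of violations for per-band grade counts.

    counts: six integers, one per band above, in order.
    Checks each band's min/max plus the two cumulative caps
    (first two bands <= 50%, first three bands <= 70%).
    """
    violations = []
    for (name, lo, hi), n in zip(BANDS, counts):
        pct = 100.0 * n / total
        if lo is not None and pct < lo:
            violations.append(f"{name}: {pct:.0f}% is below the {lo}% minimum")
        if hi is not None and pct > hi:
            violations.append(f"{name}: {pct:.0f}% is above the {hi}% maximum")
    top2 = 100.0 * sum(counts[:2]) / total
    top3 = 100.0 * sum(counts[:3]) / total
    if top2 > 50:
        violations.append(f"top two bands total {top2:.0f}% (cap 50%)")
    if top3 > 70:
        violations.append(f"top three bands total {top3:.0f}% (cap 70%)")
    return violations
```

Note how loose the constraints are: both of the hypothetical sections below, despite producing very different classes of grades, pass this check with no violations.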
So, in Section A of Torts (100 students), Professor A could give:
5 A+s and 20 As (25%)
20 A-s and 5 B+s (25%/50% total)
20 Bs (20%/70% total)
10 B-s and 5 C+s (15%/85% total)
10 Cs (10%/95% total)
5 D+s (5%/100% total)
Meanwhile, in Section B of Torts (100 students), Professor B could give:
2 A+s and 13 As (15%)
3 A-s and 12 B+s (15%/30% total)
15 Bs (15%/45% total)
5 B-s and 10 C+s (15%/60% total)
10 Cs and 15 C-s (25%/85% total)
2 D+s, 5 Ds, and 5 Fs (15%/100% total)
Now, I don't need to crunch the numbers to tell you that students in Section B are getting the short end of the stick. And maybe it is justified. Maybe, as I noted above, the students in Section A are "better" than the students in Section B. And maybe that's the point of a flexible curve. If I have a terrific set of students in a given semester, I might give grades like Professor A above. If, during another semester, I have a collection of Sweathogs, I might give grades like Professor B above. Most semesters, my grades will fall somewhere in between these 2 extremes. And while a flexible curve gives me some flexibility, it still gives me boundaries so that my grades aren't that different from the grades of a professor teaching a different section of the same course.
In an ideal world, this all makes sense. But does it make sense in the real world? From talking to colleagues, I get the general sense that the answer is "no," because a course's curve tells us more about the professor than about her students. I know some professors who consistently award grades at the bottom of the curve and would give lower grades if they could. Conversely, I know some professors who award grades at the top of the curve and would give higher grades if they could. In other words, some professors are "hard" graders and other professors are "easy" graders, and a student in a section with a 3.2 median is "lucky" while a student in a section with a 2.8 median is "unlucky."
But let's say that I'm wrong. Let's say that most professors modulate their curves based upon the performance of each collection of students in each class. It is still easy to imagine problems. Suppose that in a torts class in fall 2011, the median score on the exam is 72 in both Sections A and B. Professors A and B each create curves where the median GPA is 3.0. In the fall of 2012, the median score on the exam is 76 in both sections. Professor A concludes that this was an exceptionally bright class and awards a median GPA of 3.2. Professor B concludes that this was an exceptionally easy exam and awards a median GPA of 3.0. Both professors could be correct, but both could easily be incorrect. Maybe the students in Section B were exceptionally bright and the exam in Section A was exceptionally easy.
And there's no easy way (as I see it) for Professor A and Professor B to compare notes to ensure consistent results. First, it is pretty tough (at least for me) to determine the difficulty of a test, especially for a class I didn't teach. I'm Professor A. A good deal of Professor B's exam deals with the attractive nuisance doctrine. Is the exam easy or difficult? I don't know. How much time did Professor B spend on the doctrine? What cases did she use? How good was the class discussion when the doctrine was covered? Second, it is difficult to judge the quality of other professors (or ourselves). Is Professor B so adept at the Socratic Method that she would make Socrates himself blush? Or is Professor B teaching contributory negligence as the majority rule?
My point in all of this is to say that it is very difficult for a professor to determine whether his section is especially bright (or dim) or whether this characteristic is shared by the class of 201# as a whole. And trying to make assumptions based upon reading the exams and exam answers from other sections isn't likely to be helpful.
So, what's my overall point? I think it is that law school curves should be less flexible. I cited Seton Hall's curve above, and you can see the disparity that can result between two sections under the curve. The same goes for many other law schools. For instance, at Chicago-Kent, 20% of students in Section A might get As or A-s while in Section B that percentage might be 5%. Are these and many other curves too loose, or are there good reasons for keeping law school curves as flexible as they are at many (most?) law schools? Are you an "easy" or a "tough" grader, or do you significantly change your curve from semester to semester?
Posted by Evidence ProfBlogger on September 21, 2011 at 04:09 PM
We could move to a fixed curriculum and the Oxford system of external examiners.
That has the additional advantage that second year classes won't be filled with students with non-overlapping idiosyncratic knowledge bases.
Finally, over time, it gives an objective basis for measuring teaching quality. Certainly one year might be a group of strong students, but if Professor X's students are consistently earning higher marks than Professor Y's students, on exams written and graded by neither of them, then Professor Y is doing something wrong.
Posted by: Brad | Sep 21, 2011 4:20:45 PM
The way to avoid this concern is to provide professors with a target, minimum, and maximum mean GPA. At Michigan, the allowed mean variation is between 3.13 and 3.25, unless special permission is received from the dean to deviate from that range (which I suppose is a stop-gap to prevent especially good sections from being harmed by the mandatory curve).
Posted by: Patrick Luff | Sep 21, 2011 4:29:24 PM
I completely agree with Brad.
On a related note, I AM worried about how to fix the "snowball effect" in grading. Individual professors will balk at the suggestion, but top students develop a reputation, and bottom students make convenient scapegoats for that pesky "required C" (now required "B-").
Posted by: AndyK | Sep 21, 2011 4:47:42 PM
Response to Andy - Isn't this why law schools use blind grading? My school uses blind grading, and I had assumed this was the norm, but perhaps I am mistaken. I can honestly say that I have never known after grading the exams which student was going to receive which grade.
Posted by: Stuart Ford | Sep 21, 2011 5:42:43 PM
Good question. Reminds me of another - What's the point of law school when after three years of hard work, curves and $140,000 of tuition you can't get a $10 an hour job as a lawyer's non-legal administrative assistant?
Posted by: FrankDux | Sep 21, 2011 6:52 PM
Ok, so I've known that it's really bad out there... but, wow, now I really feel bad.
My boss put an ad out today for a part-time secretary/clerk to order medical records and obtain Medicare lien amounts. The girl who was doing it before was a joke and barely showed up, so my boss got rid of her.
Three hours later my boss calls me into his office to tell me he already had over 50 replies, and most of them were JDs or licensed attorneys. I really felt bad at that moment. He said, "I can't hire any of them because once they find an attorney position they will leave in a second."
Posted by: anon | Sep 22, 2011 7:05:46 AM
The response depends on so many factors.
For example, this semester, two of us at my law school teach Business Associations I. The other professor bases 100% of the course grade on the final exam (much of which, frankly, is available on reserve in the library based on previous examinations). My current BA I course bases 10% of the grade on participation (explicitly using the Harvard Business School model of participatory grading as articulated by HBS's Stacey Childress), 20% on [*gasp* dare I say it to other law prawfs?] team-based experiential learning [drafting documents and relevant governmental filings from start-to-finish], and 70% on the final summative assessment.
I drive my students into the ground with what I require of them. I don't care that they have any other courses to take, I want to develop outstanding business lawyers. Period. Our other BA I professor, while clearly more academically credentialed than I, does the usual 100% of a student's grade based on a written final exam that takes into account hardly any material aspects of VARK.
Perhaps I was the idiot who cleansed his [otherwise satisfactory but for the institution being a state school] J.D. with an Ed.M. from Harvard instead of an LLM, but HGSE taught me how to teach and assess students in law and economics fairly well. And I'm concerned with subject matter mastery. If all of my students [not all of them do or ever will] demonstrate subject matter mastery, then, please explain to me, why, at the law school level, I should use a forced curve to comport with what another professor teaches from another casebook and supplementary material and who uses different andragogical and pedagogical approaches? A marketplace exists, and our school's students should know, at least based on these market data points, that material differences indeed exist between (and soon to be among) our business law prawfs in terms of expectations, grading, etc. The overall mix of law school will likely get students' grades and rankings right. Given my background as a teacher-scholar, versus a scholar-teacher, I'm quite comfortable in my student assessments. However, I *teach* at a FourTTTTh tier school, rather than *profess* at a First tier school. Maybe that makes all the difference.
Posted by: David Groshoff | Sep 23, 2011 8:28:00 PM
That's an awfully big chip. Is it hurting your shoulder?
Posted by: Brad | Sep 27, 2011 5:17:42 PM