
Tuesday, May 01, 2018

Policy questions on law school exams

I am methodical when it comes to grading my exams. I grade question by question, and often subpart by subpart, to maximize consistency in awarding points and to avoid biases from previous answers. On the first question, I'll move front to back through the stack; on the second, I'll pick a random spot in the stack and move from back to front; and I'll keep varying the order to avoid biases from recent scoring. If I offer a multiple choice component, I scrutinize the biserials and the reliability coefficient, going back over weaker questions to determine whether I should throw any out.
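(For anyone curious what that item analysis looks like mechanically, here is a rough sketch in Python; the answer matrix is invented for illustration and isn't meant to describe any particular exam-scoring software.)

import numpy as np

# Each row is a student, each column a multiple choice item (1 = correct).
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
])

totals = responses.sum(axis=1)

# Point-biserial discrimination: correlate each item with the total score
# on the *other* items. Items near zero (or negative) deserve a second look.
def item_biserial(j):
    item = responses[:, j]
    rest = totals - item
    return np.corrcoef(item, rest)[0, 1]

biserials = [item_biserial(j) for j in range(responses.shape[1])]

# KR-20 reliability coefficient (Cronbach's alpha for right/wrong items).
p = responses.mean(axis=0)
k = responses.shape[1]
kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / totals.var(ddof=1))

print("item discriminations:", np.round(biserials, 2))
print("KR-20 reliability:", round(kr20, 2))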

Another thing I like to do is scrutinize the correlations between exam parts, both between the multiple choice and each essay (or subpart), and between the essays (or subparts) themselves. If I get too granular, the data can get noisy, but it's a useful tool for making sure I'm grading consistently and that my questions are fairly consistent.
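(Concretely, that cross-check is just a correlation matrix over the per-student scores on each part--again a minimal sketch in Python, with made-up numbers rather than real exam data.)

import numpy as np

# Hypothetical per-student scores on each component of one exam.
labels = ["multiple choice", "essay 1", "essay 2", "policy"]
scores = np.array([
    [38, 42, 35, 45, 40, 33],   # multiple choice
    [17, 19, 15, 20, 18, 14],   # essay 1
    [16, 18, 14, 19, 17, 13],   # essay 2
    [12, 14, 11, 13, 15, 12],   # policy
])

# Pairwise Pearson correlations between exam parts; a part whose scores
# barely track the rest of the exam shows up as a low correlation here.
corr = np.corrcoef(scores)
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]} vs {labels[j]}: {corr[i, j]:+.2f}")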

I've had mixed feelings about policy questions on exams. On the one hand, I fear they can turn into overly subjective or rambling thoughts loosely related to the course. On the other hand, they can sometimes reflect a student's passion or zeal for the subject, including a deep grappling with elements of the course, that may not be apparent from the rest of the exam. I've come up with pretty good ways to grade these parts--include some clear calls in the question (pick two cases, etc.), require students to address certain elements, and award greater points for deeper analysis.

But each time I've done a policy question, I've noticed that the grading rarely lines up with the remainder of the exam. If I have five essays, and one of them is a policy question, for instance, I'll notice fairly high correlations among the first four essays. But the correlations between any of the first four essays and the policy question will be almost nonexistent.

Back to two hands. On the one hand, this makes me extraordinarily nervous. Am I grading this element of the exam with less consistency? Are my directions unclear? (But students mostly follow the directions correctly.) Is the policy question simply too subjective? (Then again, I've gone back through answers and never found that a particular position taken earns more credit.)

On the other hand, it's usually the last essay, and some students simply run out of time, which tends to make my last essay less reliable in the first place. But more importantly, the policy question is designedly doing something different from the rest of the exam. And that's the point... no? To reflect a kind of legal acumen that may not be obvious from an issue-spotting exercise in legal analysis? So, we might see some students thrive on this component of the exam--particularly if they're passionate about some element of the course, or have truly thought through a great deal of the material in ways not reflected in the rest of the exam.

I'm sure others have thoughts... how have you approached the policy question? And are answers that are less consistent with the rest of the exam a sign the question is doing what it's designed to do, or a sign that it's a problem (and, as is often the case, rightly relegated to a slim part of the overall exam)?

Posted by Derek Muller on May 1, 2018 at 09:01 AM in Teaching Law | Permalink

Comments

This is a tough question for the reasons you identify. If one part of an exam is perfectly correlated with another, then I guess you wouldn't need both parts. So some deviation is expected and desirable. We could imagine comparing the correlation between non-policy essay questions and policy essay questions across law school professors and then see how some particular professor's correlations compare. But maybe that professor's policy questions are just different from the norm--it would still be hard to say if they were different in good ways or bad ways. If they differed from the average in inexplicable ways, that would be interesting. Perhaps it would suggest that the exams are graded in ways that warrant further attention. But either way, it seems hard to avoid difficult judgment calls when constructing and grading exams.

Whether exam scores are normalized also seems relevant to the analysis in the post. I think most professors are in favor of normalizing different parts of the exams so that multiple choice questions or policy questions or whatever don't have a greater-than-expected influence on final grades. But I really don't know what percentage of law professors normalize.

Posted by: Adam Kolber | May 1, 2018 10:04:25 AM

I don't mean to sidetrack your interesting discussion, but could you say more about how you "scrutinize the biserials and the reliability coefficient" for your multiple choice questions? I'm giving my first exam with MCQs and would love any suggestions for how to identify bad questions.

Posted by: Matthew Bruckner | May 1, 2018 11:31:46 AM

That's an interesting outcome. I always include a policy question worth 15-20%, and I've observed something a bit different: the students who score high on the essays usually do well on the policy question (unless they allocated their time poorly), whereas a subset of the students who score in the medium range on the essays do very well on the policy question. Accordingly, this latter group gets a bit of a boost over peers who aren't able to write intelligently about the big picture.

So I see the policy question as serving the right function: allowing me to reward those students who have a competent understanding of the course mechanics but have also grappled more deeply with the course material.

On the other hand, I've heard tell of students who don't really study or outline and instead count on the policy question to put them over the top. It's usually pretty obvious when a student can speak broadly about the course but struggles with issue spotting. In those cases, I see the policy question as a bit of a gift to them, in that it gives me a basis for not failing them.

Posted by: Anon | May 1, 2018 11:37:35 AM

I have a version of this problem in the moot opinions students write in some of my classes. I give them the freedom, as SCOTUS, to overrule precedent, which allows them to go off on policy-based runs. I had one student overrule all of Younger. What I look for is how well they grapple with the existing doctrine and the ramifications of changing that doctrine--that is, the full ramifications of their policy preferences.

I agree it is difficult, in part because I cannot tell whether I am grading the student based on my disagreement with the policy path she is choosing to follow.

Posted by: Howard Wasserman | May 1, 2018 6:31:03 PM

"If one part of an exam is perfectly correlated with another, then I guess you wouldn't need both parts."

I agree, so what's the point of four issue-spotting questions? Perhaps one randomly chosen issue-spotting question is a perfectly adequate test of issue-spotting/analyzing competence. And indeed, maybe just one is an even better test of issue-spotting/analyzing competence than four issue-spotting questions, as students could go into greater depth and begin to say the sorts of things about the problem that one would actually say were one writing a brief or memo about it, whereas four essay questions will tend to elicit relatively superficial answers.

Posted by: Asher Steinberg | May 1, 2018 7:07:41 PM

What is it exactly you are testing with a policy question? I suppose there are some graduates in "law degree preferred" positions who do things day to day that at least sort of look like answering a policy-based question. Working in a law-related think tank, maybe, or as a congressional aide. But the overwhelming majority of students will do nothing that looks even close during the course of their careers.

Granted, most lawyers aren't doing the kind of appellate litigation that a regular essay question is most akin to, but at least there the old saw about "thinking like a lawyer" applies. A policy question seems more akin to "thinking like a legal scholar." Maybe some professors would rather be teaching in a hypothetical PhD program designed to train future law professors, but that's not the model we are heir to, and it isn't fair to students to pretend that it is.

(To pre-respond to a counterargument: yes, on very rare occasion an appellate or even trial brief will be improved by a short appeal to policy considerations. But that's a tiny part of a small corner of the legal profession.)

Posted by: brad | May 1, 2018 7:42:12 PM

Why does the policy question have to go last?

Posted by: Michael Froomkin | May 1, 2018 11:52:50 PM

I am sympathetic to Brad's position, so I don't include policy questions on my exam. I tell my students that the essays will have multiple issues, and the resolution of some issues will be very clear and easy. For those issues, no policy is required. For other issues, however, the doctrine does not lead to a clear answer. For these more difficult issues, I tell them to analogize to the cases and look to the policy behind the rule. I highlight examples in at least one practice essay with a model answer. I think this is more how lawyers in the real world do it, but it does have the drawback that most weaker students never talk much about policy.

I am really enjoying these posts on pedagogy, Derek!

Posted by: Jeff Schmitt | May 2, 2018 1:22:56 AM

I have also noticed that my broader essay questions or policy questions do not always correlate well with the issue-spotter questions. (I typically have issue spotters for perhaps 2/3 of my exam and some sort of broader and more synthetic question for 1/3.) But I see this as probably a good thing. Different students have different strengths, and sometimes, as one of the other commenters said, there's a student with middling issue-spotter scores but a great essay who should get credit for that.

The thing I struggle with is related but different: how to think about variance. Sometimes I have one question where the standard deviation in how many points the students receive is twice that of some other question (where the questions are supposed to be "worth" the same proportion of the exam). Now perhaps that means that question just got more uniform responses from the class. But perhaps it was instead an artifact of my grading, and for idiosyncratic reasons of how I assigned the points, now I've got one question -- maybe the policy question, maybe an issue spotter -- that is basically going to determine which students get which grades, because it has so much more variance than the other questions. Do I adjust the standard deviation downward on that question, smushing the points closer together so it doesn't drown out the rest of the exam? I've sometimes done this, but I really have no idea what is fairest.
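(For what it's worth, the mechanical version of that "smushing" is just rescaling the points around their mean -- a rough sketch in Python, with invented scores and an arbitrary target spread, not anyone's actual grading policy.)

import numpy as np

# Invented raw points on one question that happens to have an outsized spread.
raw = np.array([22.0, 9.0, 15.0, 28.0, 12.0, 18.0])

target_sd = 3.0  # spread chosen to roughly match the exam's other questions

# Shrink (or stretch) the deviations from the mean so the question keeps the
# same ranking of students but no longer dominates the overall distribution.
adjusted = raw.mean() + (raw - raw.mean()) * (target_sd / raw.std(ddof=1))

print("raw sd:", round(raw.std(ddof=1), 2))
print("adjusted sd:", round(adjusted.std(ddof=1), 2))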

Posted by: Joey | May 2, 2018 11:17:54 AM

I tend to agree with Jeff Schmitt's response. Some further thoughts:
1. You must decide what you are testing in the policy question. Presumably the ability to make a cogent policy argument in support of one's conclusion.
2. Exams should test what is taught. Does the course teach how to make policy arguments?
3. A policy question may inadvertently give an advantage to a student who has some prior experience with the issue; e.g., a crim law policy question may favor a student who worked in a prosecutor's or criminal defense attorney's office over a student with no prior law experience or only civil law experience.

Posted by: jerry cruncher | May 16, 2018 5:18:20 PM
