
Thursday, July 17, 2008

Grading Student Certainty

Just a quick post based on a conversation I had with a colleague the other day. He mentioned that he might give three questions on an exam and ask students to answer all of them, but also ask them to choose the one question they think they did best on, which will then be worth twice as much. E.g., three questions nominally worth 25 points each; the question the student chooses to double counts for 50 points, while the others remain at 25 points each, for a total of 100.

Update: I neglected to mention that the student also gets the choice to diversify, such that each of the n questions counts for 1/n of the exam points, so students can avoid making a decision with respect to their confidence levels.
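The arithmetic of the two options can be sketched in a few lines. This is a minimal illustration only: the raw scores are hypothetical, and the function names are mine, not the colleague's.

```python
# Minimal sketch of the two scoring options described above.
# The raw scores are hypothetical; the post specifies three
# 25-point questions, with the chosen one counted at 50.

def doubled_total(scores, chosen):
    """Total out of 100: each question worth 25, the chosen one worth 50."""
    return sum(scores) + scores[chosen]  # doubling = counting that score twice

def diversified_total(scores):
    """Each of the n questions counts for 1/n of the 100 exam points."""
    n = len(scores)
    return sum(s * (100 / n) / 25 for s in scores)  # rescale 25-point scores

raw = [20, 15, 22]  # hypothetical raw scores out of 25 each

print(doubled_total(raw, 2))   # strongest question doubled: 57 + 22 = 79
print(doubled_total(raw, 1))   # weakest doubled by mistake: 57 + 15 = 72
print(diversified_total(raw))  # even weighting: 57 * (4/3) = 76.0
```

The spread between doubling the strongest and the weakest question (79 vs. 72 here) is the stake riding on the student's self-assessment; diversifying lands in between.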

I thought this was pretty interesting and hadn't heard of its use in the annals of law teaching. As far as I can tell, the skill underlying the choice of which question to select is gauging one's own confidence in an answer. I can imagine that this might be a helpful skill for lawyers to have in a few situations, such as when a client asks you what the law is in a particular area and you need to assess your confidence on the fly. But I'm not really sure that's typically how life in law practice works. Moreover, it's hard to see what a student might do to prepare in advance for making such a selection. How do you develop stronger confidence levels without distorting what you study simply to create a strong suit? Do you solve that "problem" simply by not telling students in advance that this selection strategy will be used on the exam?

That said, if you think of grading as simply a sorting and signaling device, then it's less of a big deal. But still: what are we trying to signal by testing the relative competence of one's selection of confidence levels? I can't tell yet if this is an assessment strategy worth considering adopting in my own courses; I'm not sure the gains outweigh the costs. Am I missing anything?

Posted by Administrators on July 17, 2008 at 09:01 AM in Teaching Law | Permalink



As a law lecturer in Australia, I have found that students are terrible at predicting which questions they will do well on. Indeed, my own experience as an undergraduate law student showed me that I was terrible at it myself (although now, after a few years of lecturing, I'd be so much better at it).

Anyway, I have written a post on this interesting proposition here. On the whole, I don't think it's a good idea, because people tend to be either vastly over-confident or vastly under-confident about their own competence.

Posted by: Legal Eagle | Jul 18, 2008 3:53:02 AM

Oops, John, good catch. I'll amend that...

Posted by: Dan Markel | Jul 17, 2008 1:27:42 PM

Totally agree with James Grimmelmann's point above: "anything that makes a large part of one's grade turn on a single choice is a bad idea."

Also, as long as professors give students only a final exam at the end of a course and make it 100% (or close to 100%) of their grade, it is awfully risky (and likely unfair to students) to use it to test too many different skill sets at once -- especially if those skills are not taught explicitly and systematically in class discussion and homework.

So I'd say perhaps the experiment is justifiable, but only if your plan is to give students multiple mandatory exam exercises over the course of a semester where they have a chance to practice their skills at correctly assessing their confidence -- and where they get very specific advice from you as to how to improve the accuracy of their confidence estimates when they get it wrong. If you don't have a plan for offering such exercises and systematic advice for improvement on this skill (over the duration of the semester), then don't test it.

Posted by: The Mad Learner | Jul 17, 2008 1:09:06 PM

"I'm not sure the gains outweigh the benefits: am I missing anything?"

I think you have revealed that you are leaning towards trying it out....

Posted by: John Smolin | Jul 17, 2008 11:26:34 AM

Testing is essentially a sampling problem. Each student has some "true" grade x which is unobservable, so we take sample draws to try to estimate x. Ideally, all of those draws (i.e., test questions) are independent to give us the best shot at getting close to x. However, professors will almost certainly (albeit inadvertently) generate dependence across those sample draws, and there's nothing the students can do about that. I view this method as one way of giving students the option of undoing professor-created dependence among sample draws.

Also, note that any professor who takes points off for "wrong" parts of answers (as opposed to simply not awarding points when the right answer is not provided) is implicitly taking confidence into account while grading.
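[The sampling framing in the comment above can be sketched as follows. This is a hypothetical illustration by the editor, not the commenter's: the "true" grade and the noise level are invented numbers.]

```python
# Hypothetical illustration of the sampling view: each exam question is a
# noisy, independent draw around a student's unobservable "true" grade x.
import random

random.seed(0)  # reproducible draws

TRUE_GRADE = 0.7  # unobservable per-question proficiency, as a fraction
NOISE = 0.15      # spread of any single question's score

def question_score(x, sd):
    """One sample draw: a noisy observation of x, clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(x, sd)))

# Averaging more independent draws gives an estimate closer to x;
# dependence across questions (the comment's worry) would undo this.
one_question = question_score(TRUE_GRADE, NOISE)
three_question_avg = sum(question_score(TRUE_GRADE, NOISE) for _ in range(3)) / 3
```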

Posted by: Jon Klick | Jul 17, 2008 10:36:24 AM

I don't like it. Imagine the poor student who bombs one question by going off down a blind alley, but does a solid, credible job on the other two. Under these circumstances, doubling the wrong question has a huge, negative effect on her grade. It's pretty hard to say that a single mistake in setting confidence levels should hurt that much.

In general, anything that makes a large part of one's grade turn on a single choice is a bad idea. A few students will always get it wrong, and that one choice ends up dominating everything else in determining their grades.

Posted by: James Grimmelmann | Jul 17, 2008 9:48:18 AM
