
Thursday, September 01, 2011

Tough Tests, Take 1: Do You Ever Bump the Grades of Students Who Run Out of Time on Your Exams?

I would like to thank Dan Markel for having me back as a guest blogger for September. Some of my posts this month will deal with "tough tests," exams received from students that I don't quite know how to grade. Here's the first example, one that I get once or twice a semester: the student who (apparently) ran out of time on the exam. If you give a time-pressured exam, I'm sure you've seen it.

Last semester, I had a three-hour exam with 4 short(er) essay questions, each worth 25% of each student's grade. I had two students who (apparently) ran out of time on the last question before writing much down. Student A's answers on the first 3 questions put her in the A range at the top of the curve, while Student B's answers on the first 3 questions put her in the B range in the middle of the curve. But because Student A barely answered the last question, her aggregate score dropped her to the middle of the curve, while Student B's failure to (really) answer the last question dropped her near the bottom of the curve.

So, here's the basic question: Do you treat Student A, who averaged a 23 over the first 3 questions, the same as Student C, who averaged an 18 on the first 3 questions, because they each scored a 72 on the exam? (Student A got 3 points on the last question and Student C got 18 points.) Or do you sometimes bump the grade of the student who (apparently) ran out of time on the exam (e.g., bumping Student A from a B to a B+)? Let's start with a poll, and then I will give some of my thoughts on the subject and welcome any comments on what readers think and do.

Do You Ever Bump the Grades of Students Who Run Out of Time on Your Exams?
Yes
No
  

 

Here are a few reasons why I have never (yet) bumped the grade of a student who ran out of time on one of my exams. First, I analogize my essay exams to multiple choice exams. If a student were doing gangbusters on a multiple choice exam but then failed to fill in the last 10 bubbles, I wouldn't think about bumping the student's grade based on the belief that she would have done at least a pretty good job on those last 10 bubbles. So, why should I bump the grade of the student who barely responded to the last question on my essay exam based upon a similar theory?

Second, how do I know that a student really ran out of time on the exam as opposed to lacking the knowledge to answer the last question? If I were to bump the grade of a student, it would be because I thought that she really knew the material and merely had time management issues (which I care less about). But how do I know it was a time management issue? Sure, the student might write "RAN OUT OF TIME" or abruptly end her exam with a few frantic sentences, but was that because the student got to the last question with only a few minutes left? Or did the student stare at the question for a good half hour before trying to write something down despite not having much to say? The question becomes even more complicated if the student does the essay questions out of order, meaning that, say, question 2 is the question that goes largely unanswered. In that situation, it seems more likely to me that the student saved that question for last because it was the toughest question for her to answer.

Again, I will analogize this situation to a multiple choice exam. Even if I wanted to bump the grade of a student who (apparently) ran out of time on a multiple choice exam, how would I be able to identify such a student? I think that most of us have had ingrained in us since an early age that if we're running out of time on a MC exam, we leave no bubble unfilled. And some of us fill in the same bubble for every remaining question in this scenario. So, a string of 10 straight Cs on a MC exam gives me a pretty good indication that the student ran out of time. But maybe the student really thought that the answer to each of those questions was C. And more importantly, what if another student ran out of time and staggered her answers to the last 10 questions, filling in a seemingly random assortment of bubbles? I would never be able to detect this, and it seems patently unfair that such a student wouldn't get a bump while another student who more obviously ran out of time would.

The same could go for essay exams. Maybe two students both got to the last question with 10, 15, or 20 minutes left. Student A froze and wrote almost nothing, while Student B frantically managed to write a good deal, but most of it was incoherent or wrong. I might interpret this as Student A running out of time and Student B writing a bad answer, but I would hate the thought of bumping Student A's grade and not Student B's.

Third, while I value depth of knowledge and analysis (much) more than time management, my exams are time-pressured for a reason. I want the students who learned the material well throughout the semester to be able to distinguish themselves from students who learned the material less well. By making my exams time-pressured, I prevent those who slacked from "catching up" on my finals (which are open book). Sure, Student A might have spotted and comprehensively addressed each issue in the first 3 questions, but if she ran out of time on the last question, that means she spent more time on those questions than Student B did. So, Student A's scores might be 23, 23, 23, and 3 while Student B's scores might be 18, 18, 18, and 18. That's a 72 for both, but Student A's 72 *feels* like it is artificially low. But is that because Student A is a "better" student than Student B, or could Student B also have gotten 23s on the first 3 questions if she had spent more time on them? It seems unfair to Student A to give her a B when the rest of her exam screams A, but it seems unfair to Student B to bump Student A's grade.

Fourth, I give my students an ungraded practice midterm and an ungraded practice final that are the same length as the actual final exam. I tell them to take both of these under timed conditions and to meet with me if, among other things, they are unable to answer all questions in (around) the time allotted. So, my students know the score. I would feel a lot more guilty about not bumping the grades of students who (apparently) ran out of time on my exams if they didn't know the score going into E(xam)-Day. 

So, those are some of my reasons for not bumping the grades of students who (apparently) ran out of time on my exams. That said, I'm not at all certain of my position, which is why I'm posing the question here. I guess this is the crux of the issue for me: Do you know an A (or a B) exam when you see it? In other words, on the one hand, do we really think that the student who performed (pretty) consistently on all 4 questions and got a B would have done *that* much better if she had spent, say, an extra 10 minutes on each question? And, on the other hand, do we really think that a student who was an A student on questions 1-3 and ran out of time on question 4 would have done *that* much worse if she had spent, say, 10 minutes less on each question? Frankly, I don't know, and I don't know that I should bump the grade of the latter student even if both of these propositions are true. That said, it just seems so harsh to me that a student who knows the material cold and knows how to analyze it well could get an "unrepresentative" grade based upon time management issues under our "one exam to rule them all" system.

But those are just the thoughts of one man. What do others think?

-Colin Miller

Posted by Evidence ProfBlogger on September 1, 2011 at 08:58 AM | Permalink


Comments

I don't get why this is even an issue. You grade what's there, not what you imagine might have been there. If they don't answer the question, they don't get the points. I am grading work product, not people.

This is not my idea of a tough question.

A hard question for me is the student who doesn't know the meaning of a somewhat, but not utterly, common non-legal word that I happen to use in a question (a dictionary word, not a legal term), guesses wrong, and thus writes an essay that misses the point. This happened once, early in my teaching career (and I am sorry to say I don't even remember now what the word was, but I do remember it was not a foreign student).

Posted by: Michael Froomkin | Sep 1, 2011 9:30:40 AM

Michael, thanks for the comment. From the poll responses so far, it seems like I indeed might be worrying about nothing. And the question you raise about a student misinterpreting something in a question and going off on a tangent that states that law correctly but doesn't answer the question asked is something I plan to raise in a future post.

Posted by: Colin Miller | Sep 1, 2011 9:41:03 AM

I agree with Michael that this is really a non-issue, mostly based on your third point.

Posted by: Patrick Luff | Sep 1, 2011 10:33:23 AM

Michael and Patrick, I agree that it seems clear what you should do ex post, although I have heard of professors who don't give a last essay question full weight in the grade if it's not complete. But like Colin I hate it when I see the situation he describes. So what I do in my classes with time-limited (and open-book) exams is stress repeatedly that I will give each section equal weight, and they MUST manage their time accordingly. I also warn them that although the exam is open-book they will NOT have time to re-read anything, only refresh their recollections of precise wording (and the fewer times they have to do that, the better). I also, when I can, make my old exams available for timed practice. I still see people who run out of time, but I feel a little better about it knowing all of the warnings I gave.

Posted by: Bruce Boyden | Sep 1, 2011 11:00:51 AM

I think Professor Miller is right to worry, but perhaps he is focusing on the wrong solution to what is a very real problem. The problem, of course, is that the traditional law school exam does a terrible job of testing for the knowledge, skills, and abilities that lawyers need in practice. In practice, lawyers do nearly everything on an open-book basis, face only a very small number of issues at a time, and rarely work under intense time pressure. Traditional law school exams, however, place great emphasis on memorization, issue spotting as opposed to the quality of argument, and, to an enormous extent, the ability to work under great time pressure. The answer, it seems to me, is not to test for the ability to work under time pressure and then change the grading matrix when you don't like the result (see Ricci v. DeStefano), but to change what you test for.

Larry Rosenthal
Chapman University School of Law

Posted by: Larry Rosenthal | Sep 1, 2011 11:20:35 AM

Larry, I definitely agree that time pressured exams are not ideal. I would love to give a take home exam to students. It might even look somewhat like the Multistate Performance Test, where I would give students a closed universe of resources and have them do tasks like draft memos, interrogatories, etc. (maybe I will have more on this in a later post). That said, I just can't shake concerns about student cheating. And it's not that I have any specific reason to think that my students or students at my school are cheating. Instead, it is based upon surveys like this:

http://lawstudentethics.blogspot.com/2009/03/is-cheating-contagious.html

where 45% of law students admitted to cheating. Here's a post I did about the subject a few years ago:

http://prawfsblawg.blogs.com/prawfsblawg/2009/04/before-teaching-my-first-class-i-knew-that-the-first-question-i-had-to-answer-before-even-choosing-the-casebook-or-deciding.html

Posted by: Colin Miller | Sep 1, 2011 11:39:26 AM

I agree with Michael and Patrick that this is a non-issue. If you don't want to see students run out of time, don't give a time-limited exam. If you give a time limited exam but then give a bump to a student who runs out of time, you penalize the student who played by the (apparent) rules until you changed them.

Posted by: TJ | Sep 1, 2011 11:58:51 AM

Colin:

I never worry about cheating when I use a nontraditional assessment exercise. For example, I give my students briefing exercises with a record that I have created, and then let them have at it. If they can figure out how to "cheat," whatever that means, more power to them. I'm not sure what cheating means in the materials you have referenced -- probably bringing banned material into closed book examinations or plagiarism in the useless papers that are so often written for upper-level courses. But, if cheating is so widespread in traditional exams, why use them? In an assessment exercise that mirrors the challenges faced in practice, it is not really possible to cheat, is it?

Larry

Posted by: Larry Rosenthal | Sep 1, 2011 12:18:00 PM

"For example, I give my students briefing exercises with a record that I have created, and then let them have at it. If they can figure out how to "cheat," whatever that means, more power to them. I'm not sure what cheating means in the materials you have referenced . . ."

I think the obvious problem is potential collaboration, or students otherwise submitting work that is not their own, in connection with unsupervised exams or similar means of evaluation.

Posted by: Ani | Sep 1, 2011 12:27:40 PM

Larry, the worry that I have about cheating in the types of nontraditional assessment exercises that (I think) you give is that students would "work" together/share their answers. For example, Student A might tell Student B, "Here are the 3 ways in my memo in which I distinguished Case A in the record from the facts of the case assigned." That said, maybe I am too concerned about cheating because, as I noted, I have no specific reason to believe that my students would cheat if given the chance. And maybe the way that you construct your assignments makes it so that cheating is (nearly) impossible.

Posted by: Colin Miller | Sep 1, 2011 12:30:15 PM

I agree that blank answers should generally get zero points, but I've faced some more ambiguous issues.

Sometimes (say) Question #1 will touch on a particular issue and the student will clearly demonstrate her understanding of that issue. Question #4 will be left blank (likely due to time pressure), even though one of the key issues it raises is identical to one of the issues discussed in Question #1.

In these circumstances, where it seems fair to infer that the student actually could answer Q#4 with more time, is it appropriate to award a zero? I have reluctantly done so (for many of the reasons discussed in the comments, esp. regarding the analogy to m/c tests), but this does seem to present a different problem.

In one circumstance, though, where a student wrote an embellished answer to the issue in Question #1 (and the issue wasn't intended to be fully addressed in Question #1), I did give him or her some credit in Question #4. The student had provided the right answer but only in the wrong place, I reasoned.

Posted by: andy | Sep 1, 2011 12:36:29 PM

On the cheating point: If we are going to certify to the bar that our graduates are fit to practice law, should we take so many steps (Examsoft etc.) to try to prevent cheating?

It seems odd to me to tell the accrediting organization to welcome our students, but nonetheless expect the worst of them in our own house. We should not certify that any student is fit to practice law if we don't trust him or her to observe a closed book requirement.

When I was a student at Georgetown, all exams were taken using Microsoft Word, with no technological restrictions to accessing the internet and so on. I thought that approach accorded the appropriate amount of respect for students.

Of course, some students do cheat, but simply trying to prevent these students from cheating while they're with us and then sending them off to the legal profession doesn't accomplish much, in my view.

Posted by: andy | Sep 1, 2011 12:48:01 PM

Andy, thanks for the comment. That question is an especially difficult one for me. A few semesters ago, I gave an Evidence exam in which two separate questions contained statements that legitimately could have been construed as excited utterances. In the earlier question, one student did a spot-on analysis of the elements required for a statement to qualify as an excited utterance and ultimately concluded, based upon the facts of the question, that the statement did not qualify (which I think was the right conclusion). The last question contained a statement that likely DID qualify as an excited utterance. Thus, I would have expected this student to spot the issue based on her answer to the previous question, in which the statement looked less like an excited utterance. And, based on her previous analysis, I imagine that her analysis of the latter question would have been spot-on as well.

Unfortunately, she was one of two students that semester who didn't get to the last question. I struggled with whether to give her any credit for the last question but decided against it. I hope that I did the right thing.

Posted by: Colin Miller | Sep 1, 2011 1:00:48 PM

Colin:

In nontraditional assessment exercises, I encourage students to work collaboratively. The actual practice of law, if it is done well, is collaborative. Lawyers who do not vet their ideas with others take an enormous risk. Unfortunately, in larger courses my school uses a curve that creates a disincentive for collaboration, but the ability to improve work product through collaboration is one that we should encourage -- even test for -- IMHO.

Larry

Posted by: Larry Rosenthal | Sep 1, 2011 1:11:46 PM

Larry, I agree that collaboration is a very valuable skill to teach to law students. If I ever teach a non-curved class, I definitely plan on assigning many collaborative exercises.

Posted by: Colin Miller | Sep 1, 2011 1:31:55 PM

I agree with the thrust of Larry's comments here. The problem is in relying on time-pressured exams. Time-pressured exams give great weight to the ability to do things quickly, which is a skill that lawyers use sometimes (Ever had to write an opposition to an emergency motion? I sure have.) but is certainly not more important than -- or even as important as -- the ability to analyze the elements of a legal problem, identify the areas of doctrine that potentially apply, construct arguments for why those areas of doctrine do or do not lead to particular results, and apply the lawyerly judgment to evaluate the strength of those arguments. (Here I'm just talking about "doctrinal" courses and questions. Obviously there are lots of other things that you want to teach and test in law school than this.) Colin's argument for using time-pressured exams -- that students may cheat by improperly collaborating -- is one I find troublesome. Unlike Larry, I'm willing to say that, in individually curved courses, it makes sense to prohibit collaboration on final exams (though I often encourage collaboration on required exercises in those courses). But I don't think it makes sense to use an invalid testing instrument -- in the sense that it doesn't test the skills we think are important, or that it weights certain skills beyond their importance -- because of a fear of cheating.

Posted by: Sam Bagenstos | Sep 1, 2011 2:21:38 PM

As a first year professor, I've been struggling with many of these questions as I think about what I want to do with my exams this year. One point that interests me, though, is this idea that collaborating is cheating. It's only cheating if the professor says that they are not to collaborate. I do understand that the curve system might create a strong disincentive to collaborate, but that shouldn't necessarily be reinforced by labeling collaboration as cheating. Am I naive to think that if a professor were to lay out certain ground rules for collaborating -- what is allowed and what isn't, identifying who each student collaborated with, etc. -- that nearly all students would be willing to abide by these restrictions as a cost of gaining the benefit of collaborating?

Posted by: Michael Teter | Sep 1, 2011 2:50:27 PM

Sam, I think time-limited exams test something else too, which may be more important than one's ability to work quickly under pressure -- they test how well you know the basics of a subject off the top of your head. For first-year subjects, and even many electives, the course is essentially full of basic material that an attorney practicing in that area should not even have to look up (or will only have to look up to refresh their recollection of something they once didn't have to look up). How much of the basic information and concepts of the course subject have you absorbed? The time-limitation, in my mind, is there to ensure that this is actually what you're testing, as opposed to a student's ability to read selectively from a limited set of materials and digest it. In practice, you don't have time to research everything; you have to start somewhere.

I do give take-homes in my upper-level classes where the doctrine we cover is less foundational, even for an attorney practicing in that area, and my thought is that a close understanding of the broader themes of the course will be necessary to get the finer nuances. But the danger of that approach is that it may fail to adequately distinguish the student who hasn't really mastered all of the course material, just the small sample of it that is actually tested on the exam and that they learned only after the exam was handed out. I don't think that's terribly reflective of practice either because in practice your universe of cases or statutes or issues is not neatly identified for you in advance; but if your class doesn't cover legal research, I don't think you should be testing on it in the final, so you are limited to the assigned materials.

Posted by: Bruce Boyden | Sep 1, 2011 3:15:31 PM

Michael, from what I have read (and I think this makes sense), there are 2 primary things that come out of students working collaboratively: (1) An increase in the quality of work produced by the class as a whole; and (2) An increase in the clustering of results. For a non-curved class, I thus think that collaborative assignments make a great deal of sense because students ostensibly learn the material better and produce a higher quality of work product. But for a curved class, it seems like it could create real difficulties in distinguishing among students.

I think that the question regarding what students would prefer is an interesting one. If we're talking about students in a curved class, I don't know what the results would be. For instance, with students at or near the top of the class, I could see arguments in favor of collaboration and arguments against it. A top student might be against collaboration because she is already doing very well in classes and fears that collaboration would level the playing field. On the other hand, she might be in favor of collaboration because, from what I've seen, people at the top of the class tend to study together and would presumably work together on exams/assignments if collaboration were allowed. And I think the other pros and cons of collaboration apply to other students as well.

Posted by: Colin Miller | Sep 1, 2011 3:33:32 PM

A non-trivial percentage of students will cheat on any take-home exam. The stakes are simply too high, and honor will bow to extreme pressure for many people.

Do you really think the boyfriend/girlfriend couple in your Torts class aren't "running their answers by each other," or that very tight-knit study groups aren't doing the same? If so, we need to speak about the deal I can get you on the Brooklyn Bridge.

Grades determine jobs. Take-home exams in doctrinal classes, especially for 1L classes, mean that cheaters are more likely to get the best grades and jobs. I don't feel the need to explain why this is wrong.

Posted by: GU | Sep 2, 2011 11:34:58 AM
