
Tuesday, February 24, 2015

Do Law School Exams Encourage Bad Legal Writing?

Do law school exams teach lousy legal writing? I am thinking of the “issue-spotting” exam in which the student is expected (or thinks that he or she is expected) to touch on as many issues as possible to demonstrate that he or she did the time in the course, taking notes, briefing cases, and soaking up information. Typically, such exam answers consist of lots of points hurriedly raised and rarely resolved or argued effectively. Such answers often adopt the indecisive “one-hand-other-hand” style of a bad bench memo, noting that there are opposing arguments on a point but making no effort to evaluate whether and how one argument is better than another.

These symptoms of a certain type of exam-answer writing also seem to be characteristics of bad legal writing by young attorneys starting out as associates, at least according to the senior partners whom I canvassed a couple of summers ago in an effort to learn how to improve NYU’s legal writing program. The most common complaint was that new hires’ emails, memos, and draft briefs did not make an argument for a particular position. Instead, the novices summarized too much at too great length without arriving at any plain bottom line. “Don’t they know that we’re paid to be advocates?” one lawyer complained. “Clients pay for answers, not encyclopedias,” said another.

Law students, however, pay to take issue-spotting exams. And sometimes I think that this genre corrupts their legal writing later, by teaching them to slight the ranking and evaluation of arguments in favor of spotting issues and quickly summarizing arguments without really weighing them.

I’ve tried to move away from the sort of exam that induces this response from students, and I am inclined to think that, at least with the right sort of exam question, the following piece of advice from Howard Bashman on writing effective appellate briefs should apply to exam-writing as well:

Experienced appellate advocates agree that raising too many issues on appeal hurts, rather than helps, the appealing party. Raising one to four issues on appeal is best; raising a few more issues than that is acceptable when absolutely necessary. In United States v. Hart, 693 F.2d 286, 287 n.1 (3d Cir. 1982), the Third Circuit endorsed Circuit Judge Ruggero J. Aldisert's statement that "when I read an appellant's brief that contains ten or twelve points, a presumption arises that there is no merit to any of them." It does not suffice merely to raise an issue; be sure also to include argument on the point in the argument section of your brief.

Posted by Rick Hills on February 24, 2015 at 05:54 AM

Comments

Thanks for the helpful comments, all.

1. Orin, you are surely correct that we need to communicate our expectations. I try to do so by distributing numerous actual and model answers, all marked up with my grading and comments in the margin. My weekly problems are actually drawn from past exams, which are drawn from real-life problems, and I go over each of these in recorded office sessions.

But communication is not enough if we do not teach to those expectations. It is not enough to say, "I award points for quality of argument, not quantity of points," if students really do not know how to evaluate the quality of an argument. As I noted in another post, I tend to think that there is tension between information transfer and practice in making and evaluating arguments. I am not quite sure how to accomplish both tasks at once. One cannot maximize two variables simultaneously, of course, but figuring out the mix and the strategy for achieving it is tricky.

2. Skeptical, I am skeptical of this "craft-of-the-lawyer" rhetoric. I appreciate that there is no substitute for the experience of practicing law. But why do you think that the experience of engaging in legal forensic writing is so ineffably unique that one cannot improve at it until one is on a law firm's payroll? That notion strikes me as magical thinking. I write plenty of briefs, alone or in collaboration with other lawyers, and I have not noticed that what counts as a persuasive argument in a brief radically differs from what any good persuasive forensic prose requires. Moreover, my collaborators (all veteran lawyers) confirm my own instinct: The basic argumentative virtues of brief-writing are not somehow magically different from good forensic virtues in any other context.

In my seminar on the Law of New York City (a course on local government law taught through the lens of NYC's legal controversies), the students brief and argue current controversies before panels of distinguished lawyers, including my co-teacher, Peter Zimroth, a veteran Arnold & Porter litigator, former NYC corporation counsel, and now federal stop-and-frisk monitor. Peter is a really tough critic in his assessment of the students' writing, but his criteria tend to be identical to my own. Conclusory assertions and repetitions of catchphrases from judicial opinions, for instance, do not count as arguments. Students at law school, as at Hogwarts, like magic words: They tend to repeat terms as if those terms' meanings were self-explanatory or uncontested. Teaching them not to do so is part of our job here at law school. The notion that they simply cannot learn this aspect of the job until they're on a law firm's payroll strikes me as odd. What's the basis for this idea?

Posted by: Rick Hills | Feb 25, 2015 9:09:49 PM

I agree with AnonProf13. Give more assessments so that students have a clear idea of your expectations. We do approximately one assessment per week in my 1L contracts class (not all of them big or evaluated by me, though we do those too), and I think it gives students a clear set of expectations for the final exam.

Posted by: Matthew Bruckner | Feb 25, 2015 12:43:21 PM

I always find it curious how often we articulate these issues as "Why don't all these students just GET IT?" It seems to me that if the problem is as pervasive as you describe, then students' (incorrect) assumptions are not irrational. If they were irrational, fewer students would make them. The problem, therefore, lies on our side of the ledger. We have to remember we're dealing with absolute novices here, and we must be clearer in articulating expectations and learning objectives. (Which does not mean coddling or hand-holding, mind you.)

If one teaches a class in a way that consistently encourages students always to look for the other side (which is not a bad thing), one should expect students to think the exam requires the same mindset. There's certainly nothing wrong with teaching this way, but we need to un-teach students' possible misinterpretations. A low-stakes midterm exam, followed by a model answer showing the right way to analyze a problem, goes a long way toward communicating expectations.

Posted by: AnonProf13 | Feb 24, 2015 2:36:26 PM

I concur with Orin that the key is to communicate your expectations clearly to students and then reward those who perform as expected. One of my 1L professors gave us a one-hour mock exam halfway through the semester and then let us grade ourselves against his model answer and scoring rubric. Another 1L professor repeatedly emphasized that his exam questions ask for advice, not just issue identification, and he graded accordingly.

Personally, my favorite exams are take-home exams with strict word limits. They require students to weigh the value of including an additional argument against the space it takes away from the treatment of each one. I think it is a useful exercise in judgment.

Posted by: Vapplicant | Feb 24, 2015 2:20:39 PM

I'd say that it depends both on how you grade and on how the students believe you grade. Faculty members have reputations among the student body with regard to grading, passed down from older to newer students. Those reputations are heeded just as much as, if not more than, what the professor says about how he or she will grade. I think a format shift, with clear grading based on the announced new format, would have to be in place for at least a year or two before widespread student adjustment took place.

Posted by: Former Editor | Feb 24, 2015 2:14:58 PM

Doesn't it all depend on how you grade? You should adopt a grading method that matches your view of what is the most effective approach for a lawyer, and then tell your students ahead of time what kinds of answers will lead to the highest grade. Students will change their styles to match your grading method.

Posted by: Orin Kerr | Feb 24, 2015 1:47:09 PM

I can’t imagine law students are confusing a memo to a law partner with a law school exam. I think you’re not giving your students any credit with that charge. You might as well allege that they are not providing a thorough enough answer because they are used to text messaging.

My strong sense is that many young lawyers aren’t great about providing an answer because they don’t feel confident that they can do so accurately. And the reality is that most of them can’t. Not because they are too stupid or undereducated. It’s because providing legal analysis takes not only smarts and education but also experience and judgment, which you can obtain only over time. Indeed, I think those latter qualities are by far the most important.

I think this gets at a lot of the misguided attempts by law professors to help law students become lawyers. The reality, from my point of view, is that you can’t really accelerate the learning curve of becoming a lawyer very much. So much of being a good lawyer is having good judgment. And having good judgment is tied, in large part, to having a large wealth of experience. I think there are things law schools can do to help move that along. But I think you’re talking about small marginal improvements, rather than anything that’s going to be particularly meaningful in most cases.

This is a long-winded way of saying: I doubt that the type of test you use will have anything but a fleeting impact on the ability of your students to be functioning lawyers.

Posted by: Skeptical | Feb 24, 2015 12:11:53 PM

I give exams that require issue-spotting. But I test for (and emphasize to my students that I'm testing for) two things in particular. The first is the one Michael Risch mentions: the judgment to distinguish the hard issues from the easy ones. Simple issues with a clear right answer might be worth a point. Subtle issues with facts going both ways and dueling precedents might call for four or five points worth of analysis. The second is organization. I ask students to group logically related issues and to explain how issues fit together. In the courses I teach, the claims and defenses like to hunt in packs: if you have a client who is looking for IP protection for a three-dimensional object, you need to be ready to discuss the copyrightability of useful articles, functionality of trade dress, and design patents. I bring out this grouping in class, and ask students to bring it out on the exam.

This is not to take away from your points, which strike me as good reasons to write different kinds of exams. My point is just that within the genre of the issue-spotting exam, it's possible to do more to emphasize "the ranking and evaluation of arguments."

Posted by: James Grimmelmann | Feb 24, 2015 8:02:47 AM

I've moved to multiple choice for some of my classes because I got so tired of these types of answers. I tell my students explicitly that the goal of the test is to see whether they can recognize times when the law provides an easy answer and times when the outcome depends on the interpretation of facts or missing facts. It's worked pretty well so far.

Posted by: Michael Risch | Feb 24, 2015 7:48:59 AM

The students just don't believe us, Howard: Perhaps they've been conditioned by undergrad and high school exams to believe that they will be evaluated by the number of points that they "hit," not by the effectiveness with which they argue for those points. I call it the Highlights magazine approach to exam-taking. Remember the back cover, where you were supposed to find all of the hidden objects in the picture? They think that I have hidden objects in the facts, and that they just need to circle them to win.

Posted by: Rick Hills | Feb 24, 2015 7:13:50 AM

Rick: I entirely agree. I have always asked precise questions ("Is there personal jurisdiction over the D here?") and often add to that by making students adopt a particular role with a specific position ("You are defense counsel; argue that your motion to dismiss should be granted."). The frustrating/disappointing part is that too many students ignore the instructions and fall back into the other style. So, when asked to be the judge writing the opinion, they produce things like "The Court might hold," etc.

Posted by: Howard Wasserman | Feb 24, 2015 6:59:36 AM
