
Wednesday, February 16, 2011

Evaluation of Teaching for Tenure

As I noted in a post last week, some colleagues and I are reevaluating our law school's tenure standards and procedures.  We've just begun to talk about teaching evaluation.  Our current practice is to rely primarily on three sources of information: (1) student evaluation scores (derived from forms students fill out in class near the end of the semester), (2) classroom visits by tenured colleagues, and (3) written evaluations from one or two dozen randomly selected former students.  There is some dissatisfaction, though, with the weight given to the student evaluation scores.  It is sometimes argued, for instance, that students unfairly penalize rigorous, intellectually challenging teachers, while rewarding teachers who demand little and have reputations as generous graders.

I don't know if these charges are true, but I do agree that our deliberations would sometimes benefit from more information than we normally have.  In particular, I sometimes feel myself at a loss when confronted with a consistent pattern of below-average scores.  Such a pattern is surely at least a troubling sign, but, on the other hand, it is logically impossible for everyone on a faculty to be above average.  My concerns are heightened in the rare instances when I see scores falling more than one standard deviation below the faculty mean, but, even then, there always seems to be some extenuating circumstance or another (e.g., first time teaching an especially difficult course).  Sometimes classroom visitation reports and narrative evaluations from former students help to illuminate what lies behind a pattern of below-average scores, but not always.

As our committee begins to bat these issues around, I'm trying to think of ways that we might helpfully enrich the available information when we evaluate teaching.  My working list appears after the jump.  As in my previous post, I'd welcome reactions either in the comments or off-line.

1.  Use a mix of announced and unannounced classroom visits.  At present, we rely almost entirely on announced classroom visits, which raises concerns that our classroom visitors may be seeing unrepresentative teaching performances.

2.  Video record one or two classes per year so that all members of the Promotion and Tenure Committee can view a sampling of a candidate's classes.  Our new building has built-in recording equipment that can be used much more unobtrusively than would have been possible in our older facility.

3.  Require more classroom visits.  At present, our untenured folks normally have one class visit per year (which is conducted by two tenured faculty members who normally prepare a joint report).

4.  Consider grading data alongside student evaluation data.  If a faculty member is a demonstrably tough grader, that might help to explain poor evaluations. 

5.  Interview some former students, rather than relying solely on written statements from them.  A more informal back-and-forth exchange may generate more information and help to clarify vague comments.

6.  Collect and disseminate narrative comments from students on the end-of-semester evaluation forms.  At present, the P&T Committee normally only evaluates the numerical scores.

7.  Report and assess scores in a more nuanced way.  At present, we normally evaluate scores in comparison to the overall faculty mean.  It is possible that this disadvantages faculty members whose teaching packages are weighted towards large required courses and benefits faculty members whose teaching packages contain more small-enrollment electives.  In order to address this concern, we might report separate faculty means for different categories of classes.

8.  Evaluate the quality of student work product (papers and/or exams).  It is hard to imagine that it would be cost-benefit-justified for the P&T Committee effectively to regrade a stack of 50 exams, but perhaps an appropriate random sample could be assessed.

I should be clear that I see drawbacks to all of the items on my working list, so I don't necessarily expect to be advocating for any of them. 

Posted by Michael O'Hear on February 16, 2011 at 02:55 PM in Life of Law Schools


Comments

Thanks to all who contributed to this thread -- there are some excellent ideas here that I will share with my colleagues.

Posted by: Michael O'Hear | Feb 21, 2011 3:02:44 PM

Here are a few additional resources to consider:

- Nancy Van Note Chism, Peer Review of Teaching: A Sourcebook (2007). [This contains a lot of practical suggestions and concrete ideas. Representative sub-headings include: "setting up a peer review system"; "characteristics of an effective peer review system"; "common pitfalls to avoid".]

- William Pallett, "Uses and Abuses of Student Ratings," chapter in Peter Seldin, Evaluating Faculty Performance: A Practical Guide to Assessing Teaching, Research & Service (2006). [This one focuses more on combining feedback from students and other faculty, as well as the faculty member's own reflection, when evaluating teaching quality. It also has some thoughts on what types of topics are best suited for student evaluations.]

Neither source is specific to legal education, but both focus on teaching at the college/university level. I think both contain a lot of information that may be responsive to the questions you describe.

Posted by: Kristi Bowman | Feb 18, 2011 4:04:38 PM

I think that #2 and #7 seem like the best ideas. Last year, I proposed the idea of outside teaching reviews like outside scholarship reviews, but I don't know how practical they would be.

http://prawfsblawg.blogs.com/prawfsblawg/2010/09/law-schools-traditionally-conduct-an-outside-scholarship-review-when-a-professor-goes-up-for-tenure-for-instance-at-my-scho.html

Posted by: Colin Miller | Feb 17, 2011 3:14:41 PM

I agree with Scott up to a point--I always worry about students' views of workload, which almost always come down on the side of "there is too much work in this class."

Posted by: Howard Wasserman | Feb 17, 2011 10:19:49 AM

Do you have other questions on the student evaluation form to consider? We have around 15 questions. The last one - "overall effectiveness" - is the gold standard, but other questions provide useful information. For example, one question asks whether the workload in the class is lighter, about the same, or heavier than that in other courses. That question's score might help distinguish a rigorous class from a less rigorous one. Another question asks whether the teacher was prepared. Another asks whether the teacher was respectful of students. I guess, in other words, you might consider what else is asked of the students.

Posted by: Scott Dodson | Feb 16, 2011 9:00:23 PM

For schools with the technical capacity to do so, tape all the classes and have the committee view multiple full classes and a random sample of the rest?

Posted by: James Grimmelmann | Feb 16, 2011 4:26:52 PM

Definitely consider #6, which occasionally (although not always) gives better information than the numerical scores. In fact, we encourage our junior faculty to maintain a single document with verbatim transcriptions of all the student comments from every class, which then can be included in the tenure file and considered.

I like the idea of taking into account the type of class and, without reviewing things, looking at the type of work product demanded. Is the teacher doing something creative and thoughtful in the classroom--whether or not it affects evaluations? Speaking personally, I tend to discount somewhat the evaluations from seminars and small niche classes, since the students tend to be a self-selecting group.

We ask every candidate to record 2-3 classes for committee review, in addition to the visits.

Posted by: Howard Wasserman | Feb 16, 2011 3:20:07 PM

Although I can imagine #6 being time-consuming, I can see some advantages, too, in that you can perhaps get a better idea of how the students themselves understand the evaluations and what they thought about the class. I know that, in my own case, while I have not found looking at the numbers especially enlightening or useful in improving my teaching, I have tried to make changes, and have hopefully improved, based on the written comments. They also might usefully supplement the solicited evaluations, in that I suspect students will in most cases feel some pressure to be positive in a solicited evaluation (and perhaps even more so in a personal interview) in a way that they might not in anonymous comments. (Of course, anonymity might also lead them to be flip or spiteful in the evaluation comments, too.)

Posted by: Matt Lister | Feb 16, 2011 3:05:36 PM
