Wednesday, August 02, 2006
Teaching and Scholarship, Part Quatre
An on-line discussion of Ben Barton's paper on the relationship (or lack thereof, if you buy Ben's conclusion) between teaching and scholarship is presently underway over at the Empirical Legal Studies blog. The commenters are Bill Henderson and Jeffrey Stake from IU. This has popped up in the blogosphere several times, hence Part Quatre or Cinq.
I know just enough statistics to be dangerous (see here and here), and probably haven't been around the academy long enough to earn the right to comment, but Dan gave me "full author" access, so I'm going to jump into the fray, and offer my usual array of well-chosen witticisms and observations below the fold.
UPDATE: More on this topic from Vic Fleischer at Conglomerate here.
1. One of the questions is whether the data on the dependent variable, teaching quality, which is derived from student evaluations, is reliable. This is not an unusual problem, and occurs whenever we seek to support a hypothesis with quantitative proxies for the non-quantitative attribute. What you end up with is a good measure of the proxies, and it's another question entirely whether the proxies have anything to do with the attribute. (Like do the indicia of independence of directors in Sarbanes-Oxley or the NYSE listing requirements have anything to do with directors acting independently? Are those indicia good proxies for intellectual or moral independence?)
Ben offers several reasons why we should rely on the data, including a variant of the "street light" defense ("Why are you looking here for my lost ring when I dropped it across the street?" "Because there is no light over there.") My own reaction to personal evaluative data is based on years of looking at my own performance evaluations, other people's performance evaluations, reams and reams of so-called developmental "360 studies" on myself and others, collating hiring interview reactions, and talking to very, very good HR people about what they all mean. I conclude that the data is qualitatively, though not quantitatively, reliable, and only after being scrubbed. What that means is that when you analyze it, at least in the corporate setting (and that's how I approached my own student-generated teaching evaluations), you have to ignore the best and worst comments, and take single points of criticism or praise as only marginally meaningful. Repeated themes (even if they are not independent) are very, very helpful for individual development, but difficult to use as bases for quantitative comparisons between people.
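The "ignore the best and worst comments" scrub is essentially what statisticians call a trimmed mean. A minimal sketch of the idea (the scores and the trim depth here are invented for illustration, not drawn from Ben's data):

```python
# Hypothetical illustration of "scrubbing" evaluation scores: drop the
# highest and lowest responses before averaging (a simple trimmed mean).

def trimmed_mean(scores, trim=1):
    """Average the scores after dropping the `trim` highest and lowest."""
    if len(scores) <= 2 * trim:
        raise ValueError("not enough scores to trim")
    s = sorted(scores)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

raw = [5, 4, 4, 4, 3, 3, 1]            # one rave, one pan, a cluster in the middle
print(round(sum(raw) / len(raw), 2))   # raw mean, pulled around by the extremes: 3.43
print(round(trimmed_mean(raw), 2))     # mean of the middle responses: 3.6
```

The point of the sketch is only that the extremes move the raw average more than they move the trimmed one, which is why the scrubbed number is the more defensible basis for comparison.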
UPDATE: Bill Henderson thinks the data is better market information than most corporations can gather. See here. That is a fair point, or at least it distinguishes the more limited data in a performance review. It's also a fact that it's not just presumably rational students who rely on it; presumably rational appointments committees seem to as well. Or if it's not fully rational, it's a useful heuristic.
2. If the conclusion is correct, so what? Would a different finding change anything?
Vic Fleischer over at Conglomerate made an acute observation about the difference between teaching (in this case, venture capital) that seems like CLE and teaching that invokes some semblance of scholarship. As a former practitioner who got into the game first as an adjunct, that was exactly the problem I faced. I wanted to teach like a legal scholar, not like a CLE presenter. And my impression was that students knew when they were getting a "talking head how-to" presentation.
I think the issue has a lot more to do with the evolution of legal scholarship from its trade or professional school roots to its present status within the research university. Jeffrey Stake is onto something in suggesting there is a complex causal if not quantifiable relationship; but I suspect we'd like to believe in any event there is one. But if there isn't, so what? Most of our students seem to care far more about being prepared for the bar exam questions on contracts than whether contract law should be based in efficiency or the affirmation of moral promise. But that won't stop me from teaching the latter, whether they like it or not!
3. Here's what I'd like to see, not that I don't respect the empirical work being done. Instead of worrying about overall whether scholarship and teaching are related (because little will change whether they are or they aren't, and both are valued), I'd like to see us thinking about how to use scholarly issues to make us better teachers, or to face down the theory/practice and doctrinal/clinical divides.
Listed below are links to weblogs that reference Teaching and Scholarship, Part Quatre:
» Is Law School Too Paternalistic? from Law and Letters
Law students are mature enough to know that whatever form the lecture/discussion takes (Powerpoint, classic lecture, or heavily Socratic/dialogic) there are also many ways to process the material (taking notes by hand, by laptop, or by making their o... [Read More]
Tracked on Aug 3, 2006 6:04:55 AM
I feel that Ben Barton has done a service to the academy. Regardless of how one interprets the data or its negative correlation, the study has generated, and should continue to generate, discussion.
To that end I offer the following. Empirical studies do not measure talent. Some people are naturally gifted teachers or naturally gifted writers. And as for having articles published in the top 20 law reviews or in peer-reviewed journals, don't topic and luck have to be factored in?
For example, it seems to me that in the period 2003-2006, an article that addressed or analyzed the constitutionality of torture, or one dealing with terrorism, probably stood a better chance of being published in those journals than an article on the legal regime of the Senegal River Basin or an issue involving Indian rights. It also appears to me that a perusal of the top law reviews shows that environmental law articles have a harder time being published in them. Hence the proliferation of environmental law journals.
Posted by: Itzchak E. Kornfeld | Sep 30, 2007 10:36:15 PM