
Wednesday, February 23, 2011

Chinese Rooms and Partner Pay

The Wall Street Journal today spanned the continuum from the deep and mysterious hard question of consciousness to the Mammon of Big Law Firm partner hourly rates.  The former came in an op-ed from John Searle, using his famous "Chinese room" image to explain why Watson computes but doesn't think.  The latter came in an article about the uncapping of top fees from the self-imposed $1,000 limit at many firms.

I was a partner in a big Midwestern law firm in the late 80s and early 90s, and I don't think my compensation ever overtook what first-year associates in New York were making (or if it did, it was close).  I leave it to my friend Bill Henderson to do the definitive empirical work on this, or to tell me how it's changed, but the big difference in those days was leverage - big Midwestern firms ran a ratio of associates to "equity" partners of just about 1:1; in New York it was 4:1 or 5:1.  I read somewhere that firms like DLA Piper now pay top partners as much as $6 million, with a 9:1 spread between the lowest-paid and the highest.  That would still leave a "low-paid" partner at the paltry sum of $666,666, which, I suspect, means the partner is still what I would call a partner.

What do I mean by that?  I recall sitting in a partners' meeting years ago, thinking that only some of us (and not I) were really partners.  Even though we were equity partners, the real question, it seemed to me, was whether we were net givers or net takers.  With low leverage ratios, it had to be the case that some percentage of the partners were still a source of profit to other partners (i.e., they were paid less than what they brought in from their hourly rates, even after accounting for overhead).  It was the worst of both worlds, because of what was, in effect, reverse leverage.  Unlike the associates, your compensation wasn't guaranteed.  Instead, if times were not so good, you helped subsidize the lost income of your genuine partners!
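The net-giver/net-taker distinction comes down to simple arithmetic: compare the profit a partner's own billings generate (billings minus overhead) with what the partner is paid. Here is a minimal sketch; the hours, rate, overhead, and compensation figures are invented for illustration, not drawn from the post.

```python
def net_position(hours_billed, hourly_rate, overhead, compensation):
    """Profit a partner generates minus what the partner is paid.

    Positive => net giver (this partner subsidizes others);
    negative => net taker (others subsidize this partner).
    """
    profit_generated = hours_billed * hourly_rate - overhead
    return profit_generated - compensation

# A partner billing 1,800 hours at $400/hr with $250,000 of allocated
# overhead generates $470,000 of profit for the firm.
print(net_position(1800, 400, 250_000, 400_000))  # 70000: gives $70k
print(net_position(1800, 400, 250_000, 550_000))  # -80000: takes $80k
```

The same partner is a giver or a taker depending only on where compensation lands relative to the $470,000 of profit generated, which is the "reverse leverage" point: an equity partner's pay floats with the firm's fortunes while the subsidy flows to the partners above.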

Posted by Jeff Lipshaw on February 23, 2011 at 08:16 AM | Permalink




Consciousness is a deep, complicated philosophical problem. Turing helped factor it into subproblems, isolating the insoluble ones from the difficult ones from the ones where we can easily say something meaningful. The Chinese Room makes a confused jumble out of everything again.

As for many-worlds, it "explains" quantum mechanics only by invoking an even stranger, harder-to-explain, and less coherent set of phenomena. It doesn't actually give a meaningful description of what a measurement is even though measurement causes the entire universe to branch.

Posted by: James Grimmelmann | Feb 26, 2011 12:14:06 PM

I think James' frustration comes from the fact that Searle--who is a great philosopher, by the way--has been trotting out this same argument for 20 years, despite the fact that the argument has been repeatedly refuted.

Posted by: anon | Feb 23, 2011 4:47:26 PM

James, I find it interesting that you are so worked up about Searle's argument. The argument has always seemed fairly innocuous to me--and hardly uncontroversial. My impressions are two decades old, but the SEP seems to confirm that "there have been many critical replies to the argument." So it may be premature to describe it as an "achievement" or credit it with much practical impact, positive or negative, given that the jury still seems to be out.

In any event, the whole point of the argument, as I understand it, is not to produce something useful but to figure out what consciousness is. Computer scientists can figure out what, if anything, you can use it for. I also don't see why the fact that it makes previous conceptions of consciousness mushy or unclear is necessarily a strike against it. Consciousness is clearly a difficult problem on anyone's view--even neurologists, if I understand correctly, are unclear about how various activities in the brain combine to give an internal perception of unitary consciousness. Surely that's not dangerous.

I myself don't completely buy the argument. I think it works at a very simple level, thinking of someone following rote instructions. But that doesn't mean there aren't other ways of programming a computer that wouldn't result in something we could call consciousness. It seems to me that at some point, well above mere rule processing, there is going to be some sort of phase transition that it's difficult to imagine based on our current understanding of both computers and brains.

And what's wrong with "many worlds"? Another argument I don't really buy, but again, I wouldn't have thought that it mattered much.

Posted by: Bruce Boyden | Feb 23, 2011 3:45:25 PM

The Chinese Room argument is one of the most significant negative achievements of philosophy in the last half-century. It represents a very real decrease in the net level of understanding of our world. Turing took the question of "can machines think?" in its raw form and showed that it is not really answerable, even in principle. Instead, he replaced it with the most meaningful question we are capable of saying something about: "Can machines communicate indistinguishably from humans?" The machines of his day could not; those of our day can only in the most rudimentary ways. But any attempt to claim that machines can never communicate effectively would apply equally to the question, "Can women communicate?" or "Can men communicate?" Purposeful, dialogic, symbolic, abstracted communication is so central to human nature, and subsumes so much of what we do and claim to value in humanity, that anything about "thinking" that this revised version of the question misses is not something we will ever be able to give convincing philosophical or moral weight to.

Searle's Chinese room takes this very useful reformulation of the relevant question and pitches it out, on the basis of an intuition that a black box that fluently answers questions in Chinese cannot "know" Chinese unless it contains a human who knows Chinese. It shoves the empirically testable question of effective communication back into the philosophically unanswerable problem of consciousness. It takes us back to the days of looking for the unobservable traces of the spiritual mind in physical brains, like Descartes with the pineal gland. And the structure of argument it gives, by driving such an extreme wedge between communication and thought, lends itself to arguments that women, or Africans, or other groups do not "think" because they are missing some ineffable essence. There are plenty of other bad philosophical zombie arguments, but Searle's is among the worst, and the most dangerous.

I would also put the many-worlds interpretation of quantum mechanics on my list of negative achievements, and will have to think about others.

Posted by: James Grimmelmann | Feb 23, 2011 10:00:46 AM
