Tuesday, February 21, 2017
Deus et Machina - A Response to the Susskinds (Mostly) and Hadfield
My friend and dean Andy Perlman beat me to the punch with the Yogi Berra-ism about the difficulty of prediction, particularly when it’s about the future. I had the chance to dig into the two books under discussion here – The Future of the Professions, by Richard and Daniel Susskind, and Rules for a Flat World, by Gillian Hadfield. But the stars also aligned to have Richard speaking a few days ago in the Harvard Law School Center on the Legal Profession’s Speaker Series, down the street from my house. And it happens that I’m speaking on my book, Beyond Legal Reasoning: A Critique of Pure Lawyering, in the same series on April 4.
There is a connection to all of this, and it has to do with a certain kind of prediction, particularly one that involves any conversation about artificial intelligence, thinking, and consciousness. It is more sophisticated, I think, than arguing about God, but just as unresolvable. Here’s what I mean. My next-door neighbor in Cambridge, David Haig, is a leading evolutionary biology theorist. From time to time, we engage, usually accompanied by an adult beverage, in conversations about the so-called “hard problem of consciousness” – i.e., whether there is a reductive scientific explanation of one’s unique sense of inner experience. It is a subject still out there at the edge of science and philosophy. Not only has it not been resolved, but it has engendered some gossip-column-worthy instances of philosophers behaving badly.
My friend David tends toward the side of the argument that there will be an explanation; I, on the other hand, have a hard time seeing how science gets around the built-in paradox. Both of our views hang on an unprovable belief about the future, and to a significant extent, it’s a trivial problem. When and if somebody comes up with the knock-down scientific (i.e. falsifiable) theory of inner experience, I will gladly tip my hat and acknowledge my prediction was wrong. Until then, it simply stays unresolved.
A few minutes before noon at Harvard, Richard was by himself waiting for the audience to show, so I introduced myself. I told him (with a fair amount of chutzpah, given that he’s Richard Susskind and I am, well, just me) that I still couldn’t decide if what he was saying was profound, on one hand, or obvious and trivial, on the other. He took that with good humor. I think it is beyond question, as Richard would agree, that technology will indeed replace everything that it is capable of replacing. As a case in point, while Richard was speaking (I confess), I was multi-tasking, using my iPhone to review a residential real estate purchase agreement for my son and daughter-in-law’s move to Cincinnati. It was prepared by the broker situated there, posted on an app called Dotloops, reviewed by me on a mobile device in Cambridge, and then signed digitally (via Dotloops) by my kids in New Haven and Bridgeport, respectively, after a series of text messages confirming I was okay with it.
I’m still inclined to the obvious end of the continuum, mainly because I think Richard and Daniel, while writing a fabulously interesting book, and delivering a well-deserved kick in the pants to all the troglodytes, have begged two hard questions.
The first one has to do with their focus on the production, distribution and sharing of expertise. No doubt technology will continue to affect that. What is less clear is how people will continue to judge expertise. There is the paradox, the conundrum. If you yourself are expert enough to judge the expert, then the distinction between expert and non-expert has disappeared. That can’t possibly happen. But we still have to make choices based on our assessment of what experts are telling us, even though we aren’t experts. If the expert says that taking the new chemotherapy can extend my life by six months, but at a horrific level of ancillary misery, technology can’t make that choice for me. And then there are competing experts. My job as a general counsel used to involve having to decide what to do when two experts in different fields (say, real estate and tax) predicted different outcomes, one positive and one negative, from the same action. I mean, in either case, technology could make the choice on the basis of some algorithm that somebody else wrote for purposes of that decision, but I have to decide if I want to abide by the algorithm. At least as long as I am a human being and not a robot.
I don’t see that the book confronts, much less resolves, that issue.
The second begged question has to do with the identification of what a client wants. Richard had a nice narrative about this in his talk (it also appears in Section 1.8 of the book). He showed a picture of a Black & Decker drill and said that new executives are shown this and asked “what do we sell?” The answer isn’t a drill. Richard then clicked to the next slide, a picture of a precisely created small hole in a wall. The point is that B&D sells the means to a completed hole, not the specific tool to get there. Again, I don’t see in the book that the Susskinds have fully confronted, much less resolved, what the analog to the “hole” is when we are talking about lawyers. If the hole is document review, yes. That is “lawyer as tool,” and it will be replaced by technology. But if the “hole” is something else – assurance? sympathy? the courage to face an uncertain future? – then I’m not sure that mere knowledge is going to do the trick.
The entire subject is fascinating, but it is Deus et Machina, God and the Machine, in the broadest sense. Trust me when I say that I am a God-skeptic, but I am a Machine-skeptic as well. The Susskinds and Professor Hadfield do us a service by invoking the historical, sociological, and philosophical contexts in which we are making these predictions about the relationship of humans and their thinking machines. I flipped to the index in both books to see if any of the authors had cited the German sociologist Ferdinand Tönnies, who described the broad social movement in modernity from Gemeinschaft (community) to Gesellschaft (organization or society). Or the historian Thomas Haskell, who reflected similar themes in his history of the development of the modern professions in the late 1800s and early 1900s.
The point is that both books (and their authors) are working in the modern or Gesellschaft paradigm. Rules, in the sense of either algorithms (Susskinds) or regulation (Hadfield), are relatively modern, rational, cold, arm’s-length, specialized, technological, professional devices. If there is a post-modern, it lies in the counter-reaction to the technological, and that returns us to the question of choice or decision. Yes, an algorithm can decide, but only because some independent agent has created it. Deus? Free will? Machina? Determinism? I really don’t think that we are going to resolve those questions now any more than we did five, fifty, or five hundred years ago.
The “hole” is deciding. The “hole” is choice. There may well be fewer “lawyer” jobs for those who help others decide or choose. But my prediction is that wisdom, as long as there is a choice to be made somewhere along the regress, won’t be replaced by an algorithm.
And thanks to Dan Rodriguez for letting me elbow my way into this discussion!
Posted by Jeff Lipshaw on February 21, 2017 at 01:00 PM | Permalink