
Wednesday, July 31, 2019

Humans Out of the Loop?

As I mentioned a couple of days ago, this summer's project has been a reflection on what humans and machines are likely to be able to bring to the lawyering party.  One of the "pro-algorithm" themes out there in the literature is the synergy between "computational law" developments and the insights of the "heuristics and biases" school of behavioral psychology, of which Daniel Kahneman's work is among the most notable (and popular).  To quote Thinking, Fast and Slow, “Whenever we can replace human judgment by a formula, we should at least consider it.” (p. 233.)

Michael Livermore (Virginia) has a nice little essay about the possibility of computationally self-executing legal rules, notwithstanding the famous jurisprudential debates about the “open texture” of language.  Can natural language processing (NLP) and artificial neural networks (ANNs) get to the point where humans trust a computational system to draw conclusions about things like what it means to be a "vehicle" that is prohibited in the park?  (The system would be given lots of pictures of things that could conceivably be "vehicles" and would be trained to use activation functions and weights to learn what a "vehicle" is within the meaning of the statute.)
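To make the "activation functions and weights" mechanism concrete, here is a deliberately tiny sketch in Python: a single-neuron perceptron trained to label things as a prohibited "vehicle" or not. The numeric features and examples are invented stand-ins for the images a real neural network would train on; nothing here is from Livermore's essay.

```python
# A toy "statute classifier": one neuron with weights and a step activation,
# trained to decide whether something is a "vehicle" within the meaning of
# the no-vehicles-in-the-park statute. Features (has_motor, weight_tons)
# are invented for illustration.
EXAMPLES = [
    ((1.0, 1.5),  1),  # car        -> a "vehicle" under the statute
    ((1.0, 0.2),  1),  # motorcycle -> a "vehicle"
    ((0.0, 0.01), 0),  # bicycle    -> arguably not
    ((0.0, 0.08), 0),  # stroller   -> not
]

def predict(weights, bias, x):
    # Weighted sum of the inputs, passed through a step activation.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(examples, epochs=50, lr=0.1):
    # Classic perceptron rule: nudge the weights toward each mistake.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            err = label - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

weights, bias = train(EXAMPLES)
```

The open-texture problem lives in the training set: whoever labels the borderline examples (is a bicycle a "vehicle"?) is doing the interpretive work the statute leaves open.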

But I digress into substance when I really want to talk about the outtake.  Professor Livermore uses the phrase "the dream of removing human beings from the loop of legal reasoning." For me, what immediately came to mind was the view of the noted cyber-technologist and DOD consultant, John McKittrick, on the same subject in connection with launching ICBMs: “You can’t screen out human response! Those men know what it means to turn the keys, and some are just not up to it! Now, it’s as simple as that! I think we oughta take the men out of the loop.”  As we know, he prevailed in that view, bringing the world almost to the brink of destruction, to be saved in the end by Ferris Bueller's doppelgänger.

Posted by Jeff Lipshaw on July 31, 2019 at 08:00 AM in Lipshaw, Odd World | Permalink

Comments

Right now the closest things we have to self-executing law are the blockchains with Turing-complete scripting like Ethereum. For simple matters that don't rely on much external information, they seem to work reasonably well. Allowing an escrow agent to complete a transaction, or reverse it, without having to trust the agent not to abscond with the funds, is old hat now. Atomic transactions, where A delivers to B and B delivers to A and it is cryptographically impossible for one to happen without the other, make it possible to trade without counterparty risk. Bitcoin, of course, is quite famously hard-coded to make arbitrary inflation impossible.
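The escrow pattern described above can be sketched as a small state machine. This is illustrative plain Python, not Solidity, and the 2-of-3 release rule shown is one common design choice, not the only one.

```python
# Illustrative sketch (not real smart-contract code) of trust-minimized
# escrow: funds are released only on agreement of 2 of the 3 parties, so
# the escrow agent alone can never abscond with them.
class Escrow:
    def __init__(self, buyer, seller, agent, amount):
        self.parties = {buyer, seller, agent}
        self.amount = amount
        self.approvals = set()
        self.state = "FUNDED"

    def approve_release(self, who):
        if who not in self.parties:
            raise ValueError("unknown party")
        self.approvals.add(who)
        # The release rule is enforced by code, not by trusting the agent.
        if len(self.approvals) >= 2 and self.state == "FUNDED":
            self.state = "RELEASED"
        return self.state
```

On an actual chain, the same rule would be enforced by a multi-signature contract, with the network guaranteeing that no single key can move the funds.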

Much harder is interfacing such systems with factual information from the outside world, the "oracle" problem in computer parlance. An escrow agent may be blocked from theft, but may still be bribed by one of the parties. And when code is law, the aggrieved party can't petition to reverse the transaction. The fact that I can mathematically prove that I paid my rent doesn't help much if the landlord is claiming behavior (other than non-payment) in violation of a lease, and so on.

A more subtle problem is that the current political economy tends to resist simple rules universally applied. Part of that is a safeguard against unintended consequences, and part of it is less commendable. The US Dollar *could* operate just like Bitcoin, but governments want central-bank money for their spending habits, large institutions want insulation from financial shocks, and law enforcement wants banks to do its investigative work for it on a mass scale via KYC/AML.

In the long term, I would expect standard "boilerplate" libraries of smart-contract code to evolve as the bugs shake out and their behavior becomes widely understood and tested, and then common practice will be to do things in a way that provides them the necessary oracle information. That will begin a process where the automated mediation of human commerce starts to build out.

Posted by: M. Rad. | Aug 1, 2019 2:25:47 PM

By the way, speaking of which:

" Calls for an AI to be credited as an inventor "

" Two patent filings seek to set a precedent by naming an AI as their inventor"

Here:

https://www.bbc.com/news/technology-49191645

Posted by: El roam | Aug 1, 2019 10:49:28 AM

Just an illustration (speaking of Miranda, silence, and the Fifth Amendment), and a relatively simple one:

In a Ninth Circuit appeal dealing with suppression of evidence in a drug-trafficking case, here is what happened:

A suspect was questioned. Notice how his position changes:

First, Miranda warnings were read, and he was willing to speak without counsel present. Later, when confronted with the accusations, he suddenly invoked his right to counsel, and the questioning stopped at once. But then, quoting the opinion (Luna Zapien is the suspect's name):

At some point after providing answers to Officer Ramirez’s questions concerning biographical information, Luna Zapien told the officers that he wanted to give a statement regarding drug trafficking. The agents immediately reminded Luna Zapien of his constitutional rights and told him they did not want to ask any questions because of his earlier request for an attorney. Luna Zapien said that he understood those rights, he wanted to waive them, and he wished “to speak to [the agents] without the presence of an attorney.” It was only after this exchange that the agents asked about his participation in drug activity and that he admitted selling drugs. Luna Zapien told the officers that he had been involved “in making phone calls and meeting with an unknown [H]ispanic male, and that he did sell narcotics.”

End of quotation.

Now look: prima facie, one can't understand it. First you talk. Then you ask for a lawyer, and the interrogation stops. Then, out of nowhere, the defendant wants to confess, without any coercion. Then he asks the court to suppress the evidence, his confession actually. In rational human terms this is crazy. But it is also an absolutely routine occurrence.

But there is also the issue of the "booking exception." It is forbidden to interrogate without counsel once the right is invoked. Correct. Yet the booking exception permits routine clerical processing (technical details, name, age, and so forth). And there are exceptions to the exception. Among others, I quote the circuit's negative finding here:

No factual findings by the district court or evidence suggest that the agents “played upon” Luna Zapien’s “weaknesses” or “knew that [he] ‘was unusually disoriented or upset at the time.’”

End of quotation.

So how would an AI assess not only the suspect's erratic changes of position, but also, as mentioned, whether the agents "played upon" his weaknesses, or whether he "was unusually disoriented or upset" rather than merely pretending to be so?

This is a very routine case, yet it illustrates how far we are: how far AI is from the capacity to evaluate human behavior that is seemingly crazy, or delicate and complex in this way.

To the ruling:

http://cdn.ca9.uscourts.gov/datastore/opinions/2017/07/03/14-10224.pdf

Thanks

Posted by: El roam | Aug 1, 2019 5:42:40 AM

Peter, there is no need to be confused here at all. This is not a legal discussion but rather a philosophical and psychological one, strictly about human nature (and in any event, that is not exactly what the Fifth Amendment provides). But let's simplify it, just for the sake of the discussion:

Suppose you are a police investigator. It is up to you whether to push the suspect harder or let him go (generally speaking). At the outset, the suspect stays silent. It is only natural that this raises more suspicion in your eyes as an investigator (notwithstanding its legal implications). If the suspect later talks, you have to re-evaluate, in human terms, the reason for that change. Is it reasonable? And of course, what is the mental basis for the change in his position or conduct?

But, that was a "primitive" illustration.You can encounter cases where, even inherent feelings of guilt, would cause a suspect, to confess, for something, he has never done. Now :

What I have claimed is that we are very far from this. AI cannot do it at this stage. It cannot have the human touch to that degree; this is undeniable. It is too complicated for AI. Maybe in the future, but it is very implausible right now. Mathematical and analytical input-output won't do.

And you underestimate judges. They are capable of far greater and more objective assessment than what you attribute to them. In fact, it is "built in" within the system: the protocols, the rules, and the law compel judges toward objective assessment (beyond their own mental capacity). Sorry to tell you, but this is a very well-worn myth. One needs to read and understand a huge number of rulings in order to reach such a conclusive evaluation. I have done it (more than a huge number, from all over the world). No AI would do it, or could do it. AI at this stage can offer raw processing of big data. But for cutting through things that are delicate, complex, and mixed with very complex issues of human nature and crazy behavior, that is impossible. Maybe later I shall put up some lengthy illustrations.

Thanks

Posted by: El roam | Aug 1, 2019 4:32:52 AM

Unfortunately, the greatest advantages of robot judges are also their greatest weaknesses.

The real problem for their adoption is that even if we drive their error rate so low that, at a particular task, they are far better than a human, we can still reliably and repeatedly identify where they are vulnerable to wrong answers.

No matter how irrational and prone to fallacious reasoning a person is, you can't prove it made a difference in any particular case. By contrast, each time you gave an AI judge a software update, you could run all the past cases through it and see whether any outcomes changed.
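The audit envisioned here is essentially regression testing. A hypothetical sketch, where a "judge" is any deterministic function from a case record to an outcome:

```python
# Hypothetical sketch: after a software update, replay all past cases and
# report the ones whose outcome flips. Any deterministic function from a
# case record to an outcome can stand in for the "judge".
def regression_check(old_judge, new_judge, past_cases):
    return [case for case in past_cases
            if old_judge(case) != new_judge(case)]
```

The flipped cases would then be exactly the docket a human reviewer (or appellate process) needs to examine after the update.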

Posted by: Peter Gerdes | Aug 1, 2019 2:27:58 AM

@El roam, First, I'm a little confused by your example. It very much sounds like you are contemplating holding the suspect's silence against him, which the Fifth Amendment forbids.

But I think your considerations actually bring up two positive points for robot judges.

First, I take your worry to be that there are all sorts of aspects of commonsense human experience that are relevant to judicial decisions, and that you don't expect AI judges to be able to manage them. I think that's exactly wrong. You might be skeptical of general AI, and that's reasonable, but if we get it, there is no reason to think it will be like the stereotypes in film. It is much more likely to learn much as a baby does: simply by taking in huge amounts of data and putting it together. The computer would be no worse off than a blind judge who has to rule on claims made by sighted witnesses (but can use all the prior talk about sight she has heard to inform that call).

In fact this is an advantage. Often people reason by asking what they would do in that situation, but the problem is that the other guy isn't them. Maybe they would never do that if they were guilty, but that doesn't actually say anything about how rare the people who would are.

More generally, just matching the degree of reliability of human judges and juries shouldn't be too hard. We are infected with all sorts of reasoning biases that make us very bad at these jobs. For one, we can't actually ignore information that shouldn't affect the outcome. Judges can't truly screen off information about how sympathetic the defendant is, or what the defendant did, when they interpret what the statute 'no vehicles in the park' means, and that leads to lots of bad law. We can literally erase formally irrelevant info from the system before asking the computer to evaluate the claim.

Also, it would make questions about harmless or non-harmless error much easier to check. You could literally run a bunch of scenarios and see what the software would have said if certain evidence had been admitted or suppressed.
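That counterfactual check could look something like this. Everything here is invented for illustration: the toy "judge" simply convicts when at least two items of evidence are marked inculpatory, standing in for whatever deterministic model an AI judge would be.

```python
# Toy harmless-error check: re-run the "judge" on the record with the
# contested evidence removed; the error is harmless only if the outcome
# is unchanged. toy_judge is an invented stand-in for the real model.
def toy_judge(record):
    inculpatory = sum(1 for item in record if item.endswith("(inculpatory)"))
    return "guilty" if inculpatory >= 2 else "not guilty"

def harmless_error(judge, record, contested):
    with_item = judge(record)
    without_item = judge([item for item in record if item != contested])
    return with_item == without_item
```

With a human factfinder, this counterfactual is unknowable; with a deterministic model, it is a two-line computation.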

Posted by: Peter Gerdes | Aug 1, 2019 2:24:03 AM

Interesting, but we are very far from the goal described here, so-called automatic or computational law. If the goal is, as described, to eliminate human error or bias, then it is simply impossible right now. For what counts in the end is to prevail in a given case, to exercise judicial discretion and prevail. But for that, a strategic question must be handled:

And it is this: can one prevail and exercise discretion without human experience (as a whole, and very intensive experience at that)? It is typically an integral feature of too many cases:

Suppose a suspect exercises his right to stay silent. Being silent may in itself implicate him, raising suspicion about his involvement in a crime. Yet if he later breaks his silence, talks, and explains why he kept silent at the outset, he may be redeemed (by providing a reasonable explanation). Yet:

A judge needs to analyze whether the explanation is reasonable, and in human terms:

Suppose that by not staying silent, the police officer investigating him would have realized that he had cheated on his wife. The judge needs to analyze, in retrospect, whether in the given circumstances:

that was a reasonable ground for staying silent. For a petty offense, not so reasonable; for a severe crime, more reasonable. All of that must be integrated from a chaotic human perspective. That is to say:

the main issue is not only a technical one (in which computation is more realistic and reasonable) but:

whether a computer can exceed human beings in human terms. And we are far from that.

We won't stay young here no more.....

Thanks

Posted by: El roam | Jul 31, 2019 9:14:06 AM

The comments to this entry are closed.