
Wednesday, December 03, 2014

Two More Reasons the Law May Concretize

In my last post, I raised the admittedly speculative possibility that advances in artificial intelligence will lead the law to concretize (by which I mean that it will become more clearly expressed and more transparently applied). I gave the example of autonomous cars, which may lead manufacturers to push for more concretized speed limits (unlike the ones we have now, under which you are unlikely to be ticketed for traveling a little above the posted limit). Superficial appearances aside, actual speed limits are neither clearly expressed nor transparently applied.

Let me offer two more reasons why the law may concretize. First, the law may become more concrete as computers play a larger role in making legally relevant decisions. For example, a group of German researchers is working to develop a computer system “to make automatic decisions on child benefit claims to the country’s Federal Employment Agency . . . probably with some human auditing of its decisions behind the scenes” and is in talks with the agency about how to deploy it. One researcher “hopes that one day, new laws will be drafted with machines in mind from the start, so that each is built as a structured database containing all of the law’s concepts, and information on how the concepts relate to one another.” In other words, when legally relevant tasks are performed by computers, legislation may itself be crafted more algorithmically to facilitate processing. That is a kind of concretization, although whether such laws are clearer than current laws may be a matter of taste (and of whether you’re a human or a computer).
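To make the idea tangible, here is a minimal sketch (in Python) of what legislation "built as a structured database" might look like: named statutory conditions stored as data, with an evaluator that leaves an auditable trace. Everything here is invented for illustration; the eligibility criteria, income cap, and field names are hypothetical stand-ins, not the actual German child-benefit rules.

```python
# A sketch of "machine-readable" legislation: a statute as structured
# data plus an evaluator. All criteria below are invented for
# illustration, not the actual German child-benefit rules.
from dataclasses import dataclass

@dataclass
class Claimant:
    child_age: int        # age of the child in years
    resident: bool        # claimant resides in the country
    annual_income: int    # claimant's annual income in euros

# Each statutory condition is a named, testable predicate, so the
# "statute" is a small database of concepts rather than prose.
RULES = [
    ("child_under_18", lambda c: c.child_age < 18),
    ("claimant_is_resident", lambda c: c.resident),
    ("income_below_cap", lambda c: c.annual_income < 60_000),
]

def decide(claimant):
    """Return (eligible, failed_conditions) -- an auditable trace."""
    failed = [name for name, test in RULES if not test(claimant)]
    return (len(failed) == 0, failed)

print(decide(Claimant(child_age=7, resident=True, annual_income=42_000)))
# (True, [])
```

Notice that the human auditor mentioned above gets something prose statutes rarely provide: an explicit list of exactly which conditions a denial rested on.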

Second, the law might concretize as automation creates greater pressure to clarify the law’s theoretical underpinnings. For example, many copyright holders already use automated software systems to scan the Internet looking for copyright violations. Some users make constitutionally protected “fair use” of others’ copyrighted material, but it is difficult to know precisely what constitutes fair use. Before the Internet age, copying audio, visual, and written materials was more difficult, so there was less need to police violations, and it was more expensive to police each violation when you could not simply search for violations online. Thus, fair use determinations were made less frequently. In the Internet age, such determinations are made much more frequently, and there is more political pressure to understand the principles underlying fair-use doctrine in order to make the law more concrete.

In the future, such pressures may apply to some of the most central questions facing moral and legal philosophers. Consider the tricky theoretical issues that underlie the famous trolley thought experiments: A runaway trolley is heading toward five entirely innocent people who are, for some reason, strapped to the trolley tracks. If the trolley continues along its current path, all five will die. You can flip a switch to divert the trolley to an alternate track, but it will still kill one innocent person strapped to the alternate track. 

This trolley problem and its numerous variations raise interesting questions about when it is mandatory or permissible to take an action that will save several lives when that action will also knowingly cause the deaths of a smaller number of people. There is no consensus solution to all trolley problems. Nevertheless, autonomous agents, especially unmanned military drones, will likely be confronted with real-life trolley problems. We will want these entities to follow rules of some sort, but we cannot program those rules unless we agree on what they should be. We may well adopt different rules for humans and nonhumans, but we will at least have to codify some rules for autonomous machines, and doing so will require more theoretical clarity and agreement than we have today.
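A toy sketch makes the codification point vivid: whatever an autonomous system does in a trolley-like case, some rule must be written down, and writing it down is a theoretical commitment. Both "policies" below are invented stand-ins for contested moral theories, not anyone's endorsed answer to trolley problems.

```python
# A toy illustration of why autonomous machines force codification:
# the program must commit to *some* answer. Both policies are crude
# stand-ins for contested moral theories.

def divert(deaths_if_stay, deaths_if_divert, policy="minimize_deaths"):
    """Return True if the autonomous agent should divert the trolley."""
    if policy == "minimize_deaths":
        # Crudely utilitarian: divert whenever doing so kills fewer people.
        return deaths_if_divert < deaths_if_stay
    if policy == "never_redirect_harm":
        # Crudely deontological: never actively redirect harm onto anyone.
        return False
    raise ValueError(f"no codified rule for policy {policy!r}")

print(divert(5, 1))                                 # True
print(divert(5, 1, policy="never_redirect_harm"))   # False
```

The philosophical disagreement does not disappear; it reappears as the choice of default parameter, which someone must make before the machine ships.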

Of course, humans already face trolley-like situations from time to time, and we still do not have clear rules to follow. The difference, however, is that after humans have confronted an emergency situation, there is usually quite a bit of uncertainty about what they knew and when they knew it. With autonomous entities, we will know more precisely what information the entity had available and how that information was processed. Indeed, we will typically have video footage of the pertinent events, along with all of the other data available to the entity. Being clear about the rules is more important when we can no longer hide behind ambiguous facts.

(This post is adapted from "Will There Be a Neurolaw Revolution?".)

Posted by Adam Kolber on December 3, 2014 at 09:57 AM | Permalink

Comments

Thanks, Matthew! In the post immediately before this one, I do mention the bit about Google allowing speeds up to 10 mph above the speed limit in its test cars. It certainly pushes against the concretization idea. But at this point, Google is just testing these cars and needn't lobby for a nationwide change in the regulation of driver speed. Google is not (yet) releasing an armada of cars that travel above the speed limit. Before doing such a thing, though, I suggest that Google will at least try to get more concretized speed limits to govern self-driving cars.

As for cars exercising the "judgment" to go above the speed limit, I do think that's potentially within the ken of autonomous cars. Sometimes cars will be better at it than us (e.g., they see and react to something before we can, or they have statistical data the average driver does not). They are also more patient than the average driver. Other times they may be worse at it than us. I'm speculating, but I suspect autonomous cars are not so far off from surpassing the average driver at maintaining safe driving speeds.

Posted by: Adam Kolber | Dec 4, 2014 3:21:27 PM

Thanks for your response, Adam. I think I understand: you're saying that eventually the world will look very different, and you're speculating that in that world our laws will be much more concrete.

In any case, it seems like the world you're talking about is further off than the world in which some cars are self-driving, which is coming very soon. To your point about speed regulation applying to car/software manufacturers, it's been widely reported that Google's self-driving cars are being programmed to allow them to speed. See, e.g., http://www.bbc.com/news/technology-28851996

Perhaps Google thinks that it will be able to argue successfully that speeding is safer in some instances. But a very concrete 65 mph speed limit that applies at all times would seem to foreclose that argument. Once you start getting into questions of whether it was acceptable to speed in a particular instance because of specific factors about that instance, we're talking about judgment. I suppose very advanced AI might eventually be trusted to make those judgment calls for us. But that seems much further off than self-driving cars hitting the roads in large numbers.

Posted by: Matthew Bruckner | Dec 4, 2014 10:22:58 AM

Thanks, Ken, for the link! I could imagine we're a long way off from computers that resolve legal issues empathically! But there's lots of concretizing that can happen before then. In an example first mentioned to me by Jennifer Mnookin, as technology emerged to better measure blood-alcohol levels, statutes changed to specifically prohibit driving with certain blood-alcohol levels rather than just, say, driving recklessly. I imagine more such opportunities to concretize are arising all the time.

Posted by: Adam Kolber | Dec 3, 2014 5:04:06 PM

At the risk of shameless self-promotion, I just published an article with this premise as a thought experiment. My conclusion was that no computer can ever deliver what the law needs: credibility rooted in empathetic reasoning.

Here is a link to the SSRN page where you can download the article:

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2338746

Posted by: Ken Chestek | Dec 3, 2014 4:19:12 PM

Thanks, Matthew! I have a pretty solid feeling that advances in AI will lead to systematic changes in the ways that many laws are crafted and applied. (I suggest they will concretize, but I admit it's pretty speculative.) To say that there will be systematic changes, however, does not deny that many other forces will continue to be at play. So I absolutely agree that political will, social norms, and much more help explain why the law is not currently as concrete as it could be, and those forces are not disappearing. (Indeed, I'm told the force awakens: http://en.wikipedia.org/wiki/Star_Wars:_The_Force_Awakens.)

More to your example, it makes sense that, AI or not, we would want a little leeway in the rule or in the enforcement of the rule to account for equipment calibration. We may also add some slack simply because people are used to the traditional norm of discretionary enforcement of speeding laws. But I could certainly imagine the norm softening as calibration improves and as the police or the public have greater access to the speed data presented to the driver (if that happens). And when cars are fully autonomous, it's not clear why people would receive penalties at all for the speeds of their vehicles unless they can and do override the vehicle's default speed. Indeed, part of what motivates my comments is the view that speed regulation will start applying more to car/software manufacturers than to individual car occupants.

And let me repeat my thanks for your thoughtful comment.

Posted by: Adam Kolber | Dec 3, 2014 12:34:23 PM

Hi Adam,

Thanks for the interesting posts. I'm curious why you think laws are not currently more "concrete." In your opinion, is it simply a lack of artificial intelligence? For myself, I believe it has to do, in part, with political will and social norms, although I will admit that technology may play a greater role in the future. For example, speed cameras are widely decried by some members of the public and politicians as a "money grab." Norms allow a moderate variance from posted speed limits, and speed cameras set to enforce the posted limit exactly are contrary to those norms. In addition, some variance seems acceptable to account for calibration issues with people's cars or with the speed cameras.

To be more concrete (pun intended), let's assume that the law concretizes around a strictly enforced speed limit of 65 mph (instead of 55 mph with some undefined, generally accepted variance). In that case, I need to trust that my car's speedometer and cruise control are accurately calibrated in order to drive exactly at the speed limit. In addition, I need to trust that the speed cameras that might catch me going 66 mph are accurately calibrated and won't send me a ticket when I'm actually driving 65 mph but the camera thinks I'm going 66 mph. In both cases, public trust in the relevant technologies seems imperative. In fact, we need to trust these technologies (and the AI driving them) so wholly that it changes social norms. Is that right?
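To put the calibration worry in code: a minimal sketch of an enforcement rule that makes the measurement margin explicit rather than leaving it to unwritten norms. The 65 mph limit comes from the example above; the ±1.5 mph margin is an invented figure.

```python
# A concretized speed rule can state its measurement tolerance
# explicitly instead of relying on unwritten enforcement norms.
SPEED_LIMIT_MPH = 65.0
CAMERA_MARGIN_MPH = 1.5   # invented worst-case camera calibration error

def ticketable(measured_speed_mph):
    """Ticket only when the reading shows a violation even under the
    worst-case calibration error, so a true 65.0 mph driver is never
    cited because of a stray 66.0 mph reading."""
    return measured_speed_mph - CAMERA_MARGIN_MPH > SPEED_LIMIT_MPH

print(ticketable(66.0))   # False: within the stated margin
print(ticketable(67.0))   # True: violation even after the margin
```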

Posted by: Matthew Bruckner | Dec 3, 2014 11:15:04 AM

The comments to this entry are closed.