
Monday, April 11, 2016

Reality Checks

Over the last few years, I've taken to writing about emerging technology and criminal law.  As a childhood fan of science fiction, it's fun to think about technologies similar to those I read about as a kid.  In particular, I have a blast thinking about how the law will, or should, handle what I predict will be very-near-future technologies.  So, for instance, I've written about algorithms, taught through machine learning techniques, that identify individuals who are likely to be presently or very recently engaged in criminal activity (e.g., an algorithm that says that that guy on that street corner is probably dealing drugs, or that this online sex ad (and whoever posted it) is probably related to human trafficking).

At the time I wrote the piece, no algorithms exactly fit what I describe.  There were computer systems that identified individuals in real time as they engaged in activities that human operators had already decided correlated with criminal activity, and there was ongoing research using machine learning to identify activities that correlate with criminal activity, but no one had put the two together.  As I saw it (and perhaps this is the sci-fi fan in me), it was just a matter of time before the two converged to create the kinds of algorithms I discuss.
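To make that pairing concrete, here is a deliberately toy sketch. Everything in it is invented for illustration: a real system would learn its behavior labels from video or text and its weights from data, and nothing here reflects any actual deployed tool.

```python
# Toy sketch of the two pieces described above, combined.
# All behavior labels and weights are hypothetical.

# Piece 1: a stand-in for a real-time recognizer (e.g., a computer-vision
# system) that labels observed behaviors in a scene.
def observed_behaviors(scene):
    """Return the behavior labels the recognizer produced for a scene."""
    return scene["behaviors"]

# Piece 2: weights a machine-learning model might associate with each
# behavior's correlation to drug-dealing activity (invented here by hand).
LEARNED_WEIGHTS = {
    "brief_exchanges": 0.5,
    "repeated_short_visits": 0.3,
    "lookout_posted": 0.4,
    "waiting_for_bus": -0.6,
}

def suspicion_score(scene):
    """Combine the two pieces: score a scene using the learned weights."""
    return sum(LEARNED_WEIGHTS.get(b, 0.0) for b in observed_behaviors(scene))

corner = {"behaviors": ["brief_exchanges", "repeated_short_visits"]}
bus_stop = {"behaviors": ["waiting_for_bus"]}
```

The point of the sketch is only that each piece existed separately at the time; the novelty lies in wiring the recognizer's output directly into the learned scoring model.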

A source of frustration when I presented on the topic, then, was that inevitably one of the first questions I'd get was whether the technologies I discussed really exist.  I'd explain what I just said in the prior paragraph, but I'd nonetheless feel defeated in some sense, as if my legitimacy had been undermined.  And I can see many reasons for the questions: curiosity, a desire to understand the technology better through an example, and skepticism about the validity of discussing something that doesn't exist, to name a few.

But the questions still bothered me.  And they got me thinking:  To what extent should we talk about the legal implications of things that we believe are about to happen, but which haven't happened yet and therefore may never happen?  What is our obligation as scholars to prove that our predictions are correct before engaging in legal analysis?  Is this obligation higher in some areas of law, like criminal procedure, that traditionally have not been consistently forced to adapt to technological developments, and lower in areas of law, like intellectual property, that have?

Posted by Michael Rich on April 11, 2016 at 12:16 PM in Criminal Law, Science


"Some in government are testing use of the tool but it hasn't been used in any cases."

Orin, you state that this is not of concern. I would say that when the government is using it, even as a test, it is now reality. And it can transition to being used in a case at any moment.

Posted by: Barry | Apr 15, 2016 7:43:32 AM

Thanks for the great and provocative comments. One thing I take away from them is that "new technology" is not a homogeneous concept, and the lack of homogeneity runs along a couple of axes: there's the timeline for a specific tool, as Orin articulates, but also the question of whether the tool is simply an application of existing technology in a new area or an application of novel technology. When the tool merely applies existing technology, it's easier both to make the case that it will be applied in the new situation and to describe what that application is likely to look like. Regardless, this has been excellent food for thought as I weigh how much time to spend answering initial factual skepticism and uncertainty, both in presentations and in writing.

Posted by: Michael Rich | Apr 12, 2016 12:34:48 PM

I believe your question depends on who your primary audience is. You received questions that made you uneasy when presenting your paper. I assume you presented the paper in an academic setting. Since many law and technology articles spend a fair amount of time explaining a current problem (usually a new technology somewhere between 4 and 6 on Orin's scale), an academic audience would expect you to explain a current problem in detail before moving on to any legal implications or policy suggestions.

On the other hand, I agree with Marcelo and Michael that oftentimes the most useful time to critique new technologies is earlier on, when the relevant political and business decisions are being made. If you are writing primarily to try to affect policymakers, then I think you should stick to writing about technologies that are still in development, and not worry about receiving slightly skeptical questions from academics.

Posted by: Rebecca Lipman | Apr 12, 2016 10:05:33 AM

I tend to be on the Marcelo/Michael side here, particularly in cases where the future path of the technology is fairly perceptible. Your example of predictive policing, for instance, has a lot of low-hanging fruit: it's just an application of well-known techniques to problems with plenty of data, so it would be surprising if police departments couldn't make better-than-chance predictions about some categories of criminal perpetrators. I can't see how the complaint "too speculative" would fairly apply to someone who has gotten out ahead of the actual implementation.

Also, when we're talking about dangerous uses of technology, I think there's more reason to be willing to risk speculation, simply because, sociologically, it's harder to tell people to stop using something they've started to implement than to tell them not to implement it in the first place. If investigating some near-future technology leads to the recommendation that it be stopped, then it could be important to get out ahead of the tech.

(My current example here would be widespread adoption of centralized kinds of technological violence, especially domestically, like domestic drone policing. That's probably pretty far off and a bit speculative, but if the consequences for the rule of law might be horrible (and there are reasons to think they might be), then the precautionary principle indicates that we should be looking at it very carefully.)

Posted by: Paul Gowder | Apr 11, 2016 8:38:32 PM

An addendum: I think the answer to the question may also hinge on whether you're writing an "illuminating thought experiment" article or an "important real problem" article.

In an "illuminating thought experiment" article, you're talking about how a hypothetical set of facts raises important legal problems that show something interesting about the law outside that particular thought experiment. It doesn't really matter if the hypothetical becomes real, because you're interested in what the hypo teaches us about law more broadly. On the other hand, in an "important real problem" article, you're saying that the law needs to deal with a particular issue and proposing a way to do that.

As I see it, it's much easier to write an "illuminating thought experiment" article about a technology that is science fiction (step 1 in my list below) than it is to write an "important real problem" article. If the idea is presented as a thought experiment, step 1 is fine; if it's presented as a real problem, the objection that it isn't a real problem and may never be one is substantial.

Posted by: Orin Kerr | Apr 11, 2016 3:07:39 PM

I have a somewhat different view than Marcelo and Michael. I think the concerns some have raised about writing on technologies that don't exist are fair objections.

Consider six possible steps of technological development for government use of a new tool:

1) The tool is science fiction.
2) The tool exists in the lab, or as a proof of concept.
3) Some in government are testing use of the tool but it hasn't been used in any cases.
4) Some in government are actually using the tool, but it is still rare and not in use in important cases.
5) The tool is being regularly used in important cases, but not in routine cases.
6) The tool is being regularly used in many cases.

My own view is that it's usually too speculative to write about the legal implications of the tool in steps (1) to (3). If we don't know the technology exists or will work, it's too easy to get the likely facts wrong. And if we can't understand the likely facts, it's hard to know how the law should apply. This doesn't mean you have to wait until step (6). If you can see at steps (4) or (5) how the technology works and where the trends in technology and its usage are going, that can be a good time to write. But (1) to (3) strikes me as too speculative a stage. YMMV.

Posted by: Orin Kerr | Apr 11, 2016 2:52:31 PM

I read this as, "to what extent should we anticipate problems, or should we resign ourselves to always playing catch-up with technology?" Which to me is a question that answers itself.

When you consider that legal considerations can alter design, e.g. choosing alternatives that avoid legal problems when available, it seems even more obvious that thinking ahead is often valuable. That has certainly been one of the driving forces behind the We Robot conference, http://robots.law.miami.edu/2016.

Posted by: Michael Froomkin | Apr 11, 2016 1:41:47 PM

I'm a data scientist, not a scholar (and much less a legal scholar), but from that angle, my observation is that in many cases (1) this kind of algorithm is written once it has been imagined, not before, and (2) it's applied before it works (as advertised), not after. E.g., a lot of the algorithms applied by law enforcement and the military are, when looked at in the cold light of leaks, rather underwhelming compared with their stated power, which in some sense is what gives them their legitimacy.

So I would say that, at least in this area, studying the legal implications of not-yet-existing algorithms is crucial, because in many cases that's when political and business decisions about them are made. Waiting until we see whether they work or not means waiting until they are effectively deployed (that's another aspect of the timing issue), and it's certainly better to have some discussion about them before that.

Posted by: Marcelo | Apr 11, 2016 12:55:53 PM
