Thursday, April 26, 2018

Predicting Legal Puzzles

New technologies offer puzzles for law professors trying to sort through established doctrine and traditional legal principles.  In the criminal justice space, new surveillance technologies offer endless challenges to ideas about expectations of privacy, police power, and associational freedoms.

If you write in the space, you take note of those scholars who have an almost prophetic (predictive) talent to see the future before anyone else does.  David Harris (Pitt) and the late Andy Taslitz always seem to write about problems in policing literally a decade before the issue hits the news and the rest of the legal academy.  In my early days, I did individual preemption checks to see whether Professors Harris and Taslitz had already written about my next new idea.

Others – too numerous to name – have written about future problems only to see them become present ones.  As one part of this post, I would invite you (in the comments) to suggest legal academics whom you think have this prophetic talent.

After the break, I will talk about my own stumble into an accurate prediction.

Until now, the “eureka, I’ve accurately predicted the future” moment hadn’t happened to me.  But, for the first time in my academic career, the futuristic world I puzzled about has finally come into being.

In 2015 I wrote an article, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327 (2015), whose core theme was that a Fourth Amendment doctrine built on a small data foundation couldn’t survive a big data world.  The article posited how a world of big data information about suspects, combined with facial recognition, would erode the Fourth Amendment’s reasonable suspicion standard on the streets.

This Article traces the consequences of a shift from “small data” reasonable suspicion, focused on specific, observable actions of unknown suspects, to a “big data” reality of an interconnected, information rich world of known suspects. With more specific information, police officers on the streets may have a stronger predictive sense about the likelihood that they are observing criminal activity. This evolution, however, only hints at the promise of big data policing. The next phase will use existing predictive analytics to target suspects without any firsthand observation of criminal activity, relying instead on the accumulation of various data points.  Unknown suspects will become known to police because of the data left behind.  Software will use pattern-matching techniques to identify individuals by sorting through information about millions of people contained in networked databases. This new reality simultaneously undermines the protection that reasonable suspicion provides against police stops and potentially transforms reasonable suspicion into a means of justifying those same stops.  …

The wrinkle of big data is that now officers are no longer dealing with “strangers.” Even people unknown to officers can be identified and, with a few quick searches, revealed as a person with recognizable characteristics or about whom certain predictions can be made.  If officers view those individualized and particularized identifying characteristics— such as prior convictions, gang associations, and GPS coordinates near the scene of the crime—as suspicious, then otherwise innocent actions might create a predictive composite that satisfies the reasonable suspicion standard. In essence, reasonable suspicion will focus more on an individual’s predictive likelihood of involvement in criminal activity than on an individual’s actions.

It was a good piece that covered both the Fourth Amendment doctrine and explored new predictive technologies that had not been discussed in law review articles.  There was only one problem: police did not actually have facial recognition technology that would allow them to match up suspects with big data information.  The technologies existed (in theory) and were coming (I argued), but were not yet being considered by police departments. The premise of the article (one that I slightly elided at the time) was that you needed real time facial recognition to turn strangers into known suspects with big data.

Fast forward to this month, and the Wall Street Journal wrote about how police are partnering with companies selling facial recognition with artificial intelligence capabilities.  According to the WSJ, these technologies are coming to a street corner near you pretty soon.

Yesterday, the Washington Post wrote an article about the future of facial recognition in body cameras, quoting the CEO of Axon, the leading maker of police-worn body cameras, saying, “It would be both naive and counterproductive to say law enforcement shouldn’t have these new technologies. They’re going to, and I think they’re going to need them. We can’t have police in the 2020s policing with technologies from the 1990s.”

And, today a coalition of civil rights groups wrote a public letter to Axon pushing back on the harmful impacts of real-time facial recognition.  The letter argues that certain products, like real-time facial recognition, would be unethical to deploy. It also challenges the company to involve community groups in all of its ethical decisions.  The letter was written in response to the AI Ethics Board that Axon created to inform the company of the risks of these new technologies.

Three quick thoughts on these developments.

First, the puzzle of how facial recognition technologies will distort suspicion is now teed up and ready for testing, and I hope others engage in thinking through the solutions.  The Georgetown Law Center on Privacy and Technology (Alvaro Bedoya, Clare Garvie, and Jonathan Frankle) recently published a groundbreaking report -- The Perpetual Line Up -- on the unregulated state of facial recognition technology.  There is much more to be said about the subject.

Second, I hope others see the value of creating ethics boards.  All too often lawyers are not in the rooms when the technology is designed.  Perhaps more companies should develop ethics boards (which would include both legal and moral ethics experts).  Putting lawyers in the room will avoid some of the obvious pitfalls that have increased tension between police and communities.  There are no easy answers here, but having critics in the room will only add value.

Third, it is fun being right about a prediction (I only hope the Fourth Amendment can survive). 

Posted by Andrew Guthrie Ferguson on April 26, 2018 at 10:49 PM | Permalink

Comments

In this regard, you may also find this of interest:

"Facial Recognition Software: Coming Soon to Your Local Retailer?"

https://thecrimereport.org/2018/04/23/facial-recognition-software-coming-soon-to-your-local-retailer/

Thanks

Posted by: El roam | Apr 29, 2018 7:22:32 AM

Interesting post, and a successful prediction indeed. I would only suggest that the Fourth Amendment is here to stay, even with the new technology and big data. New concepts of admissibility are being formed and built up alongside them. If, in the old-fashioned world, even a high probability that a certain suspect committed an offense could ultimately lead to suppression of evidence on admissibility grounds, then surely the same will hold with the new technology, which (as you emphasize) prioritizes the prediction of a crime over its actual commission. And indeed, courts are increasingly sensitive to that difference and game changer. Greater capability to predict crime would only heighten awareness of admissibility issues (privacy, good faith, reasonableness, and so forth). I wouldn't be so sure the Fourth Amendment is in danger.

Meanwhile, you may also find interest in this report of a real use of facial recognition to trace a suspect among 60,000 people at a concert in China:

https://www.bbc.com/news/world-asia-china-43751276

Thanks

Posted by: El roam | Apr 27, 2018 11:07:45 AM
