Tuesday, June 06, 2017

Master of Science in Law

On the Faculty Lounge is a report of a new Master of Science in Law initiative at the University of Maryland.  Pleased to see this.  At Northwestern Pritzker School of Law, we are beginning the fourth year of our MSL program for STEM professionals.  There have been various news items on this unique program during its short life span. Check out this podcast for a good overview.  Here is the MSL 360 blog.  And here is a Chronicle of Higher Education article which puts this and related initiatives into a broader context.

At fall enrollment, we will have had over 200 students in this program, on both full-time and part-time platforms.  The students come from a variety of professional and educational backgrounds -- bench scientists, technology managers, post-docs in various fields, including biotech, engineering, nanotechnology, etc., and pre-med students.  Many are international.  They are racially and ethnically diverse, more so than our JD class. Graduates of this program have gone into terrifically interesting careers, in law firms, high-tech companies, big corporations (including interesting jobs in the sharing economy), health care organizations, consulting firms, etc.  A handful have pursued additional education, in Medical School, Business School, and Law School.

Paul Horwitz in his comment to the Maryland post rightly inquires into the purpose of these programs, adding a bit of skepticism, which is fair, given the emerging multiple missions of law schools in this difficult environment.  I will say this on behalf of our program:

We view our MSL as grounded in a vision of professional work in which the traditional silos among law, business, and technology are eroding, and in which T-shaped professionals can and do work constructively with multidisciplinary skills.  Our MSL courses (and there are nearly 50) are open only to students in this program; so we are not using excess capacity in law courses for these students.  The faculty for this program includes full-time law faculty, teachers from other departments at Northwestern, including Kellogg, our school of engineering, and elsewhere, and expert adjunct faculty.  There is ample student services and career services support.

What is remarkable about this program for the Law School generally is that these MSL students are well integrated into the life and community of the student body.  JD students benefit from the presence of these STEM trained students; and the MSL students benefit from working with and around JD students.  They participate in journals, student organizations, and myriad intra- and extracurricular activities.  We have experimented with a few courses, including an Innovation Lab, which brings MSL students together with JD and LLM students.  This facilitates the kind of collaboration which they will find in their working lives.

The future of legal education? I won't hazard such a bold prediction.  But I am confident in predicting that you will see more programs like ours -- the first of its kind, but far from the last. Other programs will fashion initiatives that are unique and appropriate to their mission and strategies.  This new model of multidisciplinary professional education is built on sound educational and professional strategies.  It is feasible, financially viable, and responsive to the marketplace.  Isn't that what we want and expect out of legal education in this new world?  Whether and to what extent one or another law school looks to an MSL simply to raise revenue -- as Paul hints in his post -- is a fair question to investigate.  But I can say about our program that its principal purpose is to deliver education to a cohort of STEM trained students who are entering a world in which law, business, and technology intersect and interface. I suspect Maryland's program, and others in the planning stages, have a quite similar orientation and mission.

 

Posted by Dan Rodriguez on June 6, 2017 at 03:31 PM in Daniel Rodriguez, Life of Law Schools, Science | Permalink | Comments (61)

Monday, November 07, 2016

Best writing practices

Hi all, it’s good to be back at Prawfs for another guest stint. I’ve written for this site more times than I can count, but this is my first time guesting as a Texan, having just joined the faculty of the University of Houston Law Center, where I’m also serving as research dean.

In that latter capacity, I’ve been thinking a lot about how to encourage productivity both for others and for myself, and this has led to some reflection on best practices for optimal writing. I’ve found that working on scholarship is the easiest part of the job to put off. Teaching and service typically happen on regular, no-exceptions schedules—classes and meetings require your presence and start and end at specific times—while writing can almost always be delayed until some theoretical future time of idealized productivity.

So in this initial post, I’ll share three of the leading suggestions I’ve read about how to maximize writing productivity based on my admittedly casual perusals of the surprisingly vast literature on this topic (the existence of which leads me to believe I’m not alone in often finding it challenging to stay on-task with respect to writing). The question I’m most interested in is whether these general best practices for writing translate into good practices for legal scholars, and/or whether there are other techniques folks have found helpful.

All this follows after the break.

First: write early. Whether there is an ideal time during the day to write is to some extent idiosyncratic. Charles Dickens and Ernest Hemingway were morning people who cranked out the words when they got up and finished by afternoon. Robert Frost and Hunter S. Thompson were night owls who got their best work done later in the day. But there is some evidence that most people are best served by writing earlier on, particularly soon after waking up. For one thing, to the extent that writing requires mental focus and will power, those qualities are at their peak earlier in the day, especially the morning before other tasks and distractions have the chance to sap our energy and attention. Neuropsychologists have also found that the part of the brain associated with creative activity—the prefrontal cortex—tends to be the most active earlier in the day, so that if you’re thinking through issues or working out a particularly difficult conceptual problem, you’re more likely to succeed after your morning coffee than your evening dinner.

Second: write regularly. Whether you get your best work done in the dark of the earliest morning or of deepest midnight, one universal nearly all productive writers agree on is: find a pattern you like and stick to it. Part of this is about efficiency. Making writing a regular part of your life makes it increasingly likely that you’ll actually write, turning it into an expected and standard part of your day rather than something you have to spend time and effort making time for. But there’s also the related point that writing regularly makes what can be a challenging task easier. Haruki Murakami unsurprisingly put this much more eloquently than I could in describing his own routine: “The repetition itself becomes the important thing; it’s an act of mesmerism. I mesmerize myself to reach a deeper state of mind.”

Third: write often. One of my favorite quotes about writing comes from the late, great Roger Ebert, who said something along the lines of “I’ve developed a reputation as the fastest writer in town. But I don’t write faster than others. I just spend less time not writing.” This is certainly closely related to having a regular schedule (if you commit to writing every day, you’ll likely be writing more often just by virtue of committing to doing so on a daily basis). And this one rings true to me for intuitive reasons. The analogy seems apt: writing is like a muscle. Exercise it frequently and it gets stronger. Fail to do so and it atrophies.

The question for this audience is: Do these notions, most of which come from looking at novelists or essayists, hold true for legal and/or academic writers as well (I’m not sure that Murakami’s self-mesmerism is something that would be helpful in writing scholarship)? There are a number of potential distinctions: scholarship requires research and entails a different sort of creativity (persuasive argument as opposed to something more akin to pure creativity). And since writing is only part of the professor’s job, is it reasonable to expect to have a regular writing schedule given the need to prioritize students and the competing demands of service? Or does that mean that picking and insisting on a schedule is all the more important?

Finally, consider one alternative approach I’ve observed in some colleagues, which I’ll call the binge-writing model. The notion here is that given the inherent disorder of the academic schedule, it’s not really possible to write regularly, and perhaps not even that effective. I have colleagues who sincerely believe that writing is best in concentrated marathon chunks when blocks of time open up (or if they don’t, in a mad series of sleepless nights). The idea, I suppose, is that this kind of fugue-based approach produces more interesting and coherent work than plodding along gradually, adding a bit at a time.

Again, it’s good to be back at Prawfs and I look forward to thoughts on these or any other best writing practices.

DF

Posted by Dave_Fagundes on November 7, 2016 at 11:29 AM in Life of Law Schools, Science, Teaching Law | Permalink | Comments (7)

Wednesday, August 24, 2016

Sound Symbolism, Trademarks, and Consumer Experience

A recent tweet from Ed Timberlake brought a new study to my attention. According to the authors of the study, beer tastes better when paired with the right music. (It also works with chocolate, among other foods). Possible applications include pairing a six-pack of beer with an mp3 for a curated listening experience.

This connection between hearing and taste reminded me of another line of research I recently mined for my article, Are Trademarks Ever Fanciful? (105 Georgetown L.J., forthcoming 2017). Trademark law presumes that when a word is coined for use as a trademark (like XEROX for photocopiers or SWIFFER for dust mops) the word can't carry any product-signifying meaning, so it must be inherently source-signifying. That presumption about coined words is not entirely true. In fact, there is a significant body of research into sound symbolism that indicates many sounds carry meaning independent of the words to which they belong. This is true for consonants and vowels, and true even if the word at issue is a nonsense word (like XEROX or SWIFFER).

Courts haven't realized that sounds convey meaning in this way. This is unsurprising because most consumers don't realize it either. But marketers know, and they spend a significant amount of time trying to craft marks that take advantage of sound symbols. In light of this research, the presumption that a fanciful (coined) mark is entitled to instant and broad protection may require some rethinking.

I'm excited to hear your observations about sound symbols and trademarks, or your favorite food/beverage and music pairings, in the comments below.

Posted by Jake Linford on August 24, 2016 at 11:35 AM in Food and Drink, Intellectual Property, Music, Science | Permalink | Comments (5)

Friday, August 12, 2016

Patent Doctrine (& Copyrightable) Subject Matter - IPSC 2016


Guest Post by Andres Sawicki, U. Miami

Are Engineered Genetic Sequences Copyrightable?: The U.S. Copyright Office Addresses a Matter of First Impression – Chris Holman, Claes Gustafsson & Andrew Torrance

Data-Generating Patents, Eligibility, & Information Flow – Brenda Simon

Inventive Application, Legal Transplants, Pre-Funk, and Judicial Policymaking – Josh Sarnoff

The Impact on Investment in Research and Development of the Supreme Court’s Eligibility Decisions – David Taylor

The Fallacy of Mayo’s Double Invention Requirement for Patenting of Scientific Discoveries – Peter Menell & Jeffrey Lefstin

 

Holman, Gustafsson, & Torrance—Are Engineered Sequences Copyrightable?: The U.S. Copyright Office Addresses a Matter of First Impression

Holman: Couple years ago, talked about why engineered DNA should be copyrightable. Big conceptual leap was extending copyright to software, so going to DNA no big deal. A lot of people have made this argument. Irving Kayton, around the time software was deemed copyrightable, was shocked and perplexed when asked whether DNA should be too. And then wrote that it should be copyrightable. Drew Endy—synthetic biologist at Stanford—anyone who is involved in synthetic biology can’t understand why you can’t copyright genetic code. Holman worked with founders of DNA 2.0. It’s like Microsoft saying they sell plastic because they sell DVDs with software on them—DNA 2.0 doesn’t sell DNA, they sell the content encoded in it.

Holman and Torrance got the Prancer sequence from DNA 2.0 and submitted registration request to Copyright Office in July 2012. Received generic form rejection in August 2012. In November 2012, submitted appeal requesting reconsideration. Took 14 months until they heard from Director of Copyright Policy, apologizing for delay saying it’s a matter of first impression and an important issue. Director ultimately rejected registration with six reasons. Holman going to refute each.

Main argument from Copyright Office is that engineered sequences don’t fall within a statutorily enumerated category. But statute uses “include” and legislative history shows the list is illustrative and not limitative. A lot of this came up in Bikram yoga case too. Nimmer similarly says if something is sufficiently analogous to existing subject matter, it should fit. Goldstein makes similar points, citing House Report.

Copyright Office says Congress amended statute to include software. Holman & Torrance draw analogy between software and genetic code. Amendment introduced limitations on software copyright, and defined computer programs. The CONTU report from 1979 is the basis for why Congress believed that software is copyrightable. Conclusion of report was that no legislative change required to make software copyrightable because it already was. Compare to architectural works amendment, which did add category. CONTU report was policy-based: large investment to create, cheap to copy, needs IP. Same applies to engineered DNA. History: in the 1960s, the Copyright Office took the position that software was not copyrightable, which led to the rule of doubt.

Office also said we can’t do prior art search. But Office doesn’t do searches for anything. Plus, it had no ability back in the 1960s with software. And it should be incumbent on Office to develop capacity.

Office also said no overlapping copyright and patent protection. But Oracle v Google and JEM v Pioneer to the contrary.

Office also said 102(b). But engineered DNA can be protected without violating 102(b).

Functionality argument. But so are copyrightable software programs, which don’t need artistic expression.

Bias against copyrightable expression in a biological system.

Q: Agree with first premise that software can be covered by both copyright and patent. But disagree with ultimate arg, and Oracle is mistaken. You can’t protect functional things under copyright. Software was done for anti-piracy benefits. CONTU retained idea-expression dichotomy. You can protect code, but not underlying functionality. Can you do that for DNA?

A: Yes, lots of redundancy in DNA.

Q: Oh so you can use junk code?

A: It’s not junk. There are an astronomical number of ways of coding for a protein.

Q: So if someone were to take our code for fluorescent protein and reverse engineer, you can prevent piracy of your version, but not their reverse engineered effort.

A: That’s exactly right. There is merger analysis and interoperability and a bunch of things you can import from software. The advantage over patent—imagine these in PAE hands. A troll asserted a fluorescent protein patent. DNA 2.0 doesn’t want to prevent others from using the function. They just want to prevent piracy.

Church wrote a book in English letters and in DNA. So you can use DNA to encode the English language.

Q: Why is life plus 70 the right regime?

A: I think it should be shorter, but I still think it’s not so bad because scope of protection isn’t terrible.

Q: Analogy to software a great start, but need to talk more about merger discussions. The premise was that if you can’t implement functionality another way, no protection. For the moment, we don’t know how to code DNA functionality multiple ways. What are the implications of locking up DNA code?

A: Think about Monsanto—a farmer saving seeds. That’s piracy. They aren’t changing the code. But, if you have another company with a lab that does the same function with different code, that should be permitted. For a small protein, because there are multiple ways to code, if you tried to make the identical protein, there are so many redundancies that there isn’t enough matter in the universe to make all possible versions of that function.

Q: In software, I started my career talking about tailoring for software. I think software is a different animal because of functionality and interoperability. I think you’d be better off following a hybrid or sui generis proposal like semiconductor. If industry wants to support you, they might be able to get it. It’s just DNA is not close enough to other copyrightable stuff aside from software.

A: Yeah, when I got the rejection from the Office, DNA 2.0 didn’t want to appeal to federal court. But we are getting a lot of interest from lawyers who are pushing for these kinds of cases. And yeah, I agree that semiconductors and other countries’ fashion design protections might be a better model.

 

Simon—Data-Generating Patents, Eligibility, and Information Flow

Simon: I’ll discuss intersection of patent eligibility and info flow. Sup Ct has expressed concern about patent exclusivity impeding flow of info. Sichelman and I introduced idea of patents that generate data. They are patents that by design generate valuable data. Will talk about Sup Ct language reflecting info-flow concerns. Will discuss whether those decisions express concern not only about downstream tech, but also downstream data. Will also talk about unintended consequences of restricting eligibility.

Examples of data-generating patents. Genetic testing, sleep trackers, heart-rate monitors. True, ordinary use of patented inventions might generate data about ways to improve the design of that invention—e.g., garden hose. Data-generating patents different because data about users or world itself distinct from information about the patented invention. When these patents are issued, might create market power over not only invention, but data as well. Unlike trade secret law that has safeguards like reverse engineering or independent discovery. Tech advances in big data era have made this protection even more valuable.

Sup Ct decisions have expressed concern about some patents impeding flow of info. Karshtedt has a nice paper on Breyer’s parallels between copyright and patent. In Mayo, Breyer pointed to possibility that exclusivity might impede flow of info, and provided some examples. Raising price of using ideas. Cost of searching. Costly licensing agreements. Court later describes ways that patents on laws of nature can impede technology on later discovered features of metabolites, and individual patient characteristics. This latter stuff is something that the patent holder may have exclusive access to because of its data-generating patent.

AMP v. Myriad denied patentability to naturally-occurring genes and the info they encode. Thomas referred back to Mayo, balance between incentives and impeding info flow. Emphasizing info flow characteristics may show court’s concern with patentee leveraging patent exclusivity to exercise power over data gathered as a result of its patent. Myriad refuses to contribute patient data to public databases. That means competitors have limited data to use for their own research. Myriad reaped a bunch of lead-time advantages.

Final decision is Breyer’s LabCorp dissent from cert dismissal. Even way back in 2006, three Justices were commenting that patents can restrict flow of information. Also note that more recent case—Bilski—used similar language. Methods of hedging risk unpatentable. Stevens concurrence cited LabCorp and expressed concern with “Information Age” issues. Patent on foundational technology—sequencing or internet search—that provides for preclusive effect can prevent competitors from generating or accessing data.

Typically, concerns about info flow in patent seem puzzling. Disclosure is the quid pro quo, although we can argue about how effective it is. Data-generating patents raise a distinct issue—the patent allows for the generation of data that is then protected via trade secret.

Primary factor for determining whether they restrict info flow is whether they have a preemptive effect on marketplace competition for data. Contact lens with info about diabetes wouldn’t have preemptive effect because there are other ways to get the info about diabetes. May be problematic from ethical or privacy perspective, but not from info flow perspective.

Unintended consequences. Even if we can narrowly tailor restrictions, we have a problem that we will reduce disclosure overall because inventors will keep these inventions as trade secrets. Maybe we can counterbalance by imposing some data disclosure requirements. Might need some data exclusivity period to ensure that incentives are maintained. Prizes and rewards and sui generis protection possible too.

Q: Will this be industry-specific? Could imagine in software or electronics patents, the data re sales, usage, etc would be valuable in the same way that patient data might be valuable, so unless you want all of the data, you will have to limit by industry.

A: Most concerning patents at first blush are medical for public health reasons. But some of the data you are talking about, there’s a little bit of a blur. Imagine a deep brain stimulator measuring information from consumers. Is that commercial data? For some of these implantable devices or similar things like Fitbit—how many hours of sleep you get—how much of that is health data? I would hesitate to implement strong industry-specific limitations. I would focus on preemptive character.

Q: To the extent you identify a different class of invention, maybe there is an enablement angle. Maybe some of the data has to be accessible? That would need a significant rethinking of enablement, but it suggests that the standard might be different for data-generating patents.

A: But we’d have to dramatically change enablement requirement because it’s set at time of filing. Some have proposed something like this—maybe at time of maintenance fees. I can only imagine can of worms. It’s an interesting suggestion though.

Q: Can make the case that a lot of data wouldn’t be generated at all. Myriad a great example. Patients’ complaint was that medical community said patients shouldn’t be able to look at their own data.

A: Ted and I wrote a bit about prospect theory and coordination considerations. I think the conclusion to draw from this is that information flow concerns might be undercut by overly limiting patentable subject matter.

 

Inventive Application, Legal Transplants, Pre-Funk, and Judicial Policymaking –Josh Sarnoff

Sarnoff: Lefstin has a very good and thorough discussion of Neilson. Established line between principles in the abstract, which weren’t patentable, and applications, which were patentable. I think Lefstin does a great job of explaining that this is a practical application. From brief, legislative record says no inventive application hurdle. Not sure I agree. Might be overstated, especially with respect to composition of matter. LeRoy and O’Reilly. O’Reilly in particular brings Neilson to US. Two years before these cases was Hotchkiss, which had human creativity and ingenuity requirement. So question is what do we do with the discovery itself—is that part of the creativity or not? My view is that when the dicta comes to US, we have the inventive application requirement.

I don’t think the true origin of inventive application is Funk Bros. I think Jeff does great job with 1952 Act legislative history, which decided not to overturn Funk. So if that case was inventive application, it would still be the law post-1952. Cases shortly after the Act made clear they understood inventive application is part of the law. Question whether it’s a good thing. And Congress didn’t eliminate ability of courts to modify law, subject to one constraint.

Flook relied on Funk too. Congress hasn’t repudiated Funk. Should courts repudiate inventive application? I argued before, Ansonia Brass has the non-analogous use requirement, Riesenfeld said it too; non-analogous use is there for a good reason. We don’t want to lock up what the previous world of human inventions has given us. We want creative advances. There are earlier strands of non-analogous use. Robinson and Leck say discovery of what nature can do is good, but lock up with patent rights only things that nature doesn’t itself do. Non-analogous things of what nature does. If new use is inventive in the other sense, it is non-obvious.

Why should we care? Mostly non-utilitarian moral views from 18th century. McCleod says this is God’s gift and locking up is a moral sin. Lockean view that we owe equal concern for each other. I think this is a better way to put it. Science itself shouldn’t be protected by patent rights. We can allow patents for inventive applications that aren’t just the world itself. If there are applications that allow you to effectively lock up the natural thing, then just do away with restriction on locking up natural thing.

Comparative perspective—scientific results are the common property of all mankind.

Prior art treatment dicta are followed, inventive application is required. Courts remain free to establish eligibility rules regarding applications of nature according to best policy.

Q: Under your view, you give Morse a claim—why not recognize that a lot of this area is overclaiming, and that was something potentially at issue in Neilson. Having a practical application is good. When you have a scientific law or principle, implicit in that is some fairly substantial scientific advance. What we don’t have is when is a discovery eligible?

A: If we’re gonna allow non-practical application, we might as well allow patenting discovery. If that stuff should be free for all to use, then granting a limited application just because you were the first makes no sense. If you can write the seventeen applications, that makes no sense too.

Q: Does statute matter at all? Two points. At time of Funk, for many authors, ingenuity was tied to [didn’t catch this]? How do we deal with Federico, finding a new property of a substance is ok—why?

A: You still had to have a non-analogous use. In terms of statute, it matters as a presumption. Once you recognize Funk was the law, Congress didn’t reverse, courts after recognize it, then the answer is the law. Since then, other courts have played around and changed it. Now the Supreme Court is changing too. If you care about fidelity to 1952, fine. But I think Congress left it open in 1952. Didn’t change it but also didn’t prevent changes. Only question if there is a constitutional constraint on treatment of scientific discoveries. I don’t think Court will go there, esp post Eldred, but there’s a non-frivolous argument.

Q: Court has tended to lump all the excluded categories together. And Mayo suggests it doesn’t matter how narrow or specific your principle is.

A: Science and nature is pre-existing. Abstract ideas—we have no idea what that means. I view it as some abstract ideas are fundamental to how the world works. I am more sympathetic to smaller things that should be treated as human creativity. But we have no theory of abstract ideas so it’s a total mess. Still doesn’t tell you most important question—what level of incremental creativity, which requires some theory of difference. If we went to invention, we would have some theory. That’s something courts can build over time. Preemption makes no sense. I think Breyer has made a hash of the law. I would rather see us fund scientific discovery through taxes, but who knows.

 

The Impact on Investment in Research and Development of the Supreme Court’s Eligibility Decisions – David Taylor

Taylor: Haven’t done the empirical work on this yet, but just getting started. Interested in impact of 101 on R&D. We all know the two-part test. Caveat that I am reporter for AIPLA 101 task force, and this doesn’t represent their views. But lots of people view the current test as bad and are thinking about going to Congress. Would be nice to have data about extent to which Court’s decisions have impacted investment. I want to study that by surveying VCs to test hypotheses. First hypothesis: the four recent decisions have impacted investment decisions. Second: the impact is significant. Third: the impact is negative.

Want to look at diverse firms at various stages of VC funding and tech areas. Two types of questions. Have the Court’s decisions had an impact, including knowledge of the decisions. And then indirectly, without mentioning decisions, how has decision-making changed over time. I want a database of VC firms from 2016 and from 2009. Plan to ask about activities from 2006-15. Survey only about US.

Some ideas. “In your opinion” questions about effect of patents on investment in invention and marketing. Meat of survey—on the left are tech areas and then effect on each area. Next question is on effect on each industry area. Are they aware of the decisions by name. Effects of particular cases on financing. How much, which decisions, and what has impact been—direction, tech, and industry. Between which tech areas—out of which and into which. Some broad questions about type of financing in time period. Then, if willing, amount of money invested by year. Want to go to 2006 to get behind Fed Circ Bilski decision. Can also look at size of investor.

Where it gets difficult and ugly is to ask particular questions about type of tech area by percentage by year. And then more complicated by industry. That’s it.

Q: Useful to get more granular: ask about changes to the ability to get a patent, changes to the likelihood of getting sued, and then changes to uncertainty?

A: I like that. Especially the question about uncertainty.

Q: A question to get a bead on—decision of VC to fund a company is hugely complex. Questions to get directly at that? How do patent eligibility cases affect your decisions?

A: Maybe ask for some sorting or ranking of factors by relative importance?

Q: I’d change order of questions. You raise salience of decisions at the outset. I’d ask questions about investments first. And make them coarser so your response rate goes up. And then you will want to bury salience of eligibility. Also, you should do a five-point scale rather than three-point.

A: Great. Thanks.

Q: Narrow point—is patent protection important at all? And why? Often hear that it is part of an exit strategy. Broader question—why should we care? Real question is substitution effects. Ask legislators if they are going to change funding decisions in response.

A: Also why limit to patents?

Q: That’s my question too. Ask about non-patent appropriability mechanisms. For the people who answer no, this isn’t important to me. Question if that’s because they have substitutes available.

Q: You can still ask patent-specific questions, just do it at the end, and don’t let them go back.

A: Thanks.

 

The Fallacy of Mayo’s Double Invention Requirement for Patenting of Scientific Discoveries – Peter Menell & Jeffrey Lefstin

Menell: Want to fully credit Josh for his influence on Mayo. This is unusual for me because I usually do normative work, but this is historical, and Jeff is great with that. So we are doing a historical, interpretive paper. We agree that biosciences is an area where patents have been relatively successful. Four points. First, not explicitly in brief, when you peel back layers of Mayo, you realize Josh wrote it. Second, we don’t have any briefing or cases on a key word in the statute: “discovers.” Then we’ll go to a couple of historical points.

One question I have come to appreciate: it would be amazing if the Justices could figure out everything on their own. This idea of omniscience is under-addressed; they are human. The opening brief and petitioner-side briefs were mostly short. The most influential was Josh’s. Then a bunch on either side, including the government, saying this patent is just garbage under 102 or 103, and leave it at that. When we get to the outcome, the Court got the language from Josh. He talks about the requirement of prior-art treatment, from O’Reilly. I think it was a mistake, though.

I’ll talk about the statute. There is this word “discovers.” I was taught to start with text, then structure, and then go higher up if you like. It is in the Constitution, and the patent acts throughout the years carry the language. The order has changed, but we’ve had “discoveries” throughout. Then there is a finding in the Senate Report of 1836, which talks about how much has been discovered but much remains unrevealed, and the mysteries of nature unfolded. He was talking about science. So there is no question that at the beginning of the industrial revolution we are talking about science. And then it’s not discussed until the Plant Patent Act of 1930.

Lefstin: Assuming we care about statutory text, if you go to legislative history, a lot of concern about whether discovery of plant was constitutionally patentable. The meaning of invent at time of Constitution included discovery. For Plant Patent Act, we get some examples of what it means to invent or discover. Includes finding a plant in your backyard. Why do we care? Because Congress uses utility patent statute as home for Plant Patent Act. It reads “invented or discovered.” Maybe text doesn’t matter. Or maybe conventional propagation was in first and second. Or maybe they mean different things in the two places it comes up in statute, which I don’t think is tenable. 1952 reports indicate no change. I am less convinced than Josh about Funk Bros implications—don’t think Congress knew about it.

Menell: Another great discovery. Credit Jeff with discovering the true meaning of the Morse case. Morse claimed all uses of electromagnetism. In doing so, the court cites the hot blast cases. The language is that we must treat the case as if the principle was well known. Why? Out of context, Stevens and Breyer are making a plausible interpretation, but it’s completely incorrect. The court was using this to postulate that if this were the same as the Minter case, it would still fit within the category of a machine. Morse got it right: unlike Neilson, where all methods of pre-heating improved the blast furnace, not all methods of electromagnetism communicate at a distance. Stevens found that same language out of context in Flook and said to treat the algorithm as unknown. We forgot about this because of Diehr, which we thought overruled it.

Then we get to Mayo. Quotes the same passage. The question was: did Neilson inventively apply pre-heating? Yes, says Breyer, here are the unconventional steps. Jeff points out that Neilson case rejects inventive application argument.

Jeff: Main challenge of Neilson was enablement problem.

Menell: Treating the principle as well known was about whether this qualifies as a machine. Under the Minter case, which had to do with a self-adjusting chair, they decided it was a machine. We wanted the Court in Sequenom to realize it had misread the case. If you look at district courts now, it’s a disaster. And the Patent Office is worse. So now algorithms are no longer relevant to invention. Go ask the Forsyth Hall people: you can’t do AI without algorithms. I would take the view that we should take software out. But I think we’re going to spend another decade chasing our tails with a dysfunctional system.

Q: Quick point. I totally agree with Jeff on the discussion of the English cases. Clearly, they didn’t understand the language they were importing. They didn’t have a substantial novelty requirement; we had just moved to non-obviousness. Then what do we make of the import of the language into our system, which was different from the other system? Who knows? But we did build on it in a variety of ways. The point is that Congress hasn’t adequately addressed this. Courts have gone back and forth, and it’s really an open question. Courts are free to enforce inventive application, and Congress is free to get rid of it entirely.

A: If the Court wants to adopt the rule, that would be fine. But they have to deal with the text. Yours was the only brief with any substance on this issue. There is no case dealing with “discovers.” There are people who think—I don’t think religion solves the problem. A lot of my students say E=mc², but the annealing patent is just understanding the properties of copper.

Q: How much has to do with the loss of a real utility requirement in patent law? In terms of practical application, seems that we’re just talking about utility requirement. Why aren’t people pushing the reinvigoration of that doctrine? One of the old cases talked about practical utility. That gets rid of a lot of the abstract idea problems.

A: But that creates the problems of State Street. A useful-arts test with a utility requirement might work. Even in Alice, there was a concurring opinion that said Stevens was right. The Court should’ve taken State Street. Unfortunately, the Court isn’t selecting cases well. I’m really upset they took Apple v. Samsung on the wrong issue. And then they take the cheerleader case, which is much worse if you want to define functionality. They aren’t getting the right issues. Maybe there were other things about Sequenom that troubled them. We need to error-correct. But I don’t think the Court will.

Industrial utility highlights for me that these are legislative determinations.

Q: Alex Kasner piece in Stanford is great. What did they mean by “discovery”? And he does a long historical analysis.

A: When I think about Sequenom, I am astounded that the Supreme Court didn’t want to take this on. It is a breakthrough. It is a woman’s health issue. I’ve been told by Ariosa representatives that the patent was poorly drafted. But why wouldn’t it make sense to at least say this is the kind of breakthrough that is eligible. I don’t see that the Court has a good grasp of how people will respond. Seems odd to say if you do Nobel quality research, you are ineligible.

Posted by Jake Linford on August 12, 2016 at 11:20 PM in Blogging, Information and Technology, Intellectual Property, Science | Permalink | Comments (0)

Thursday, August 11, 2016

IP, The Constitution, and the Courts - IPSC 2016

IPSC 2016 - Breakout Session III - IP, The Constitution, and the Courts

Lexmark and the Holding Dicta Distinction – Andrew Michaels

A Problem of Subject Matter: Patent Demand Letters and the Federal Circuit’s Jurisdiction – Charles Duan & Kerry Sheehan

Established Rights, the Takings Clause, and Patent Law – Jason Rantanen

A Free Speech Right to Trademark Protection? – Lisa Ramsey 

Lexmark and the Holding Dicta Distinction – Andrew Michaels

How do we distinguish dicta from holding? This project uses the Federal Circuit's dispute in Lexmark (on remand) over the breadth of the holding in Quanta. As Paul Gugliuzza summarized it for me (I was a late arriver), Michaels's argument is that, rather than treating holding/dicta as a binary distinction, we should envision a spectrum of the types of things that courts say in their opinions. 

A spectrum approach to holding v. dicta might helpfully restrict courts. If a holding says "No red convertibles in the park," we might worry about a subsequent court that says the opinion requires a holding of no vehicles in the park. The two statements are not unrelated, but the broader one is perhaps still dicta. Broader statements should have less capacity to bind than narrower holdings.

Jason Rantanen: This is interesting. We often see doctrinal pronouncements in the Federal Circuit's cases that are much broader than necessary to decide the case. We also see language from earlier court opinions that is clearly dicta; panels in the Federal Circuit nevertheless use it later. I wonder, however, whether we should take into account how the court is using the language. For example, do we bind the court to holding language only, or might it be appealing to the persuasiveness of earlier reasoning? Your spectrum focuses on the text as it appears in the earlier opinion, but is that too narrow? Can dicta apply? 

Andrew - Sometimes dicta is well considered. But if the court pretends it's a holding, and acts as if it is bound, then they are failing to adjudicate the dispute, and that's a problem.

Paul Gugliuzza - I think the Federal Circuit may engage in some over-use of dicta. Is there a prescriptive payoff to this spectrum? How does the court determine whether to follow the statement or not?

Andrew - The payoff is to require courts to deal more directly with the question of dicta.

Pam Samuelson - I think it's interesting when dicta becomes a holding, over time, and solves a problem. For example, the 3rd Circuit (Whelan) case had a lot of broad dicta that led to a lot of litigation. But the 2d Circuit also included a lot of dicta in Computer Assocs. v. Altai, and the dicta from that case seems to have knocked out Whelan and been followed, correctly from Pam's view, in many other circuits.

A subsequent observation from Paul: I think the spectrum provides an interesting descriptive contribution, but I wonder whether, instead of arguing whether a statement is holding or dicta, we'd just end up arguing about (1) where on the spectrum a particular statement falls and (2) whether, given its location on the spectrum, it's binding law or not.

 

A Problem of Subject Matter: Patent Demand Letters and the Federal Circuit’s Jurisdiction – Charles Duan & Kerry Sheehan

States are passing laws designed to cabin patent demand letters. We might presume that the Federal Circuit has primacy, but this paper argues the question isn't so cut and dried. The Supreme Court, in a case about attorney malpractice, held that there should be a balance struck between the interests of the federal courts and the state's consumer protection laws.

In a demand letter case, we could ask whether 1) it raises a sufficient issue of federal patent law, and 2) the law is unconstitutional or improper. To understand the second question, look to the Federal Circuit's Globetrotter case. The patent holder threatened to send letters to the defendant's clients. The defendants sued for tortious interference, and the Fed. Cir. held that the Patent Act preempts state-law claims that would prevent sending demand letters.

We argue there is an odd disconnect in the Federal Circuit's analysis. It's a mistake that makes the Federal Circuit's jurisdiction appear larger than it is.

What is the right policy outcome? Should the Federal Circuit have primacy here? The uniformity concerns that inspired the creation of the Federal Circuit don't necessarily reach every case that touches on patent law, and perhaps these demand letter cases fall outside the need for uniformity.

Jake Linford: I'm unclear on where the line is between the stuff the Federal Circuit controls and the stuff it doesn't. It sounds circular to me. Help me understand.

Charles: The Supreme Court doesn't take the view that the Federal Circuit is the final arbiter of all patent issues. The Christianson and Gunn cases are examples where the Supreme Court put the responsibility with the Seventh Circuit and the Texas courts, respectively. Questions of the validity of the patent may go to the Federal Circuit, but not claims about a clearly invalid patent.

Lisa Ramsey: One of the reasons this is so important is because people will get different results before a state court than the Federal Circuit. Is that right?

Charles: It's unclear. If we sort some cases for the Federal Circuit and others for the states, we might get divergent outcomes.

Pam Samuelson: How does the issue of validity of the patent get to the Federal Circuit if the case starts in state courts? 

Charles: Removal is the mechanism. 

Pam: If so, then how do we take the ability of the Federal Circuit away? If the Federal Circuit decides whether it has jurisdiction...

Charles: Perhaps the Supreme Court takes cert?

Paul Gugliuzza: What triggers the arising under jurisdiction of the patent clause? Isn't this a matter of patent jurisdiction?

Charles: I'm not sure this meets the Constitutional language...

Paul: The Federal Circuit may rely on Globetrotter, even if I disagree with them. 

 

Paul Gugliuzza sent me the following summary of the Duan - Sheehan paper, which I find much better than my own:

The paper focuses on state law tort/unfair competition claims against patent holders, such as those brought under the new anti-troll statutes adopted in over half the states.  As a substantive matter, Duan and Sheehan criticize the Federal Circuit for giving patent holders nearly absolute immunity from civil claims based on their enforcement behavior, an issue I’ve written about here:  http://ssrn.com/abstract=2539280.  As a matter of institutional policy, they argue that the Federal Circuit is poorly suited to assess the constitutionality of laws regulating patent assertions because the court has embodied various problems theorized to be associated with specialized courts, such as rule-orientedness, a detachment from broad policy concerns, and, perhaps most importantly, capture.  The Federal Circuit’s orientation toward patent holders, they seem to be arguing, would make the court too suspicious of government efforts to regulate patent holders.  Accordingly, they make a doctrinal argument that a challenge to the constitutionality of an anti-troll statute does not “arise under” patent law, as is required for the Federal Circuit to have appellate jurisdiction.  
 
I’m not sure about this.  I agree that, after the Supreme Court’s 2013 decision in Gunn v. Minton, a civil case challenging patent enforcement behavior does not “arise under” patent law.  The embedded patent law issues would be about the validity or infringement of a particular patent—the sort of case-specific issues that are not sufficient to create “arising under” jurisdiction.  But, in my mind, there’s a distinction between those case-specific issues and those that would be raised by a counterclaim seeking a declaratory judgment that a state anti-troll law is unconstitutional.  I suspect the Federal Circuit would say that THAT claim DOES “arise under” patent law, as it raises the issue of whether federal patent law “preempts” state law.  After the AIA’s so-called Holmes Group fix, that counterclaim would be sufficient to confer jurisdiction on the Federal Circuit.  Perhaps a better argument against Federal Circuit jurisdiction is that the federal issue is not preemption by the Patent Act, but the constitutionality of the statute under the First Amendment.  In that circumstance, the case would arise under federal law, but perhaps not federal PATENT LAW, meaning that the Federal Circuit would NOT have jurisdiction.  (In the article linked above, I argue that the Federal Circuit has erroneously stated that immunity for patent holders is about “preemption” of state law when, in fact, the court is actually drawing on the First Amendment right to petition to the government.)  In any event, this is an interesting and provocative project.  And if you’re still reading at this point, cheers to you for your commendable enthusiasm about patents and procedure!

 

Established Rights, the Takings Clause, and Patent Law – Jason Rantanen

Recent arguments have suggested that when patent laws change, the takings clause may be implicated. I wanted to understand the analytical reasoning behind the takings claim. Takings case law is a deep, Alice-in-Wonderland rabbit hole.  How does it actually apply to patent law?

1) Jason agrees that patents are property subject to the takings clause. (The Federal Circuit said no in Zoltek, at least when the government infringes the patent. The Supreme Court, however, suggested in dicta in the raisin takings case that patents are the type of property subject to the takings clause.)

2) But it's inappropriate to cut and paste takings case law into patent cases. Patents aren't like rights in real property. We know what a taking of a coal mine looks like; patents aren't the same. In addition, one key right "taken" is the right to use, and the patent holder doesn't lose the right to use, only the right to exclude or alienate. So application of standard takings cases is difficult.

3) The question is instead whether the new law changes or destroys an "established property right" in the patent. That's the taking, if there is one. What's an established property right? The type associated with property, established with a high degree of legal certainty. See, for example, the Penn Central case, where the Supreme Court is looking for certain rights. If we are looking for a high degree of legal certainty, many aspects of patent law have changed significantly and frequently over time. Congress has replaced the entire statutory framework at least four times, with only very minor exceptions. For example, when Congress passed the 1836 Patent Act, it replaced the prior act and also applied the new act to pending litigation. There are many similarities, but it was a new draft. Same with the 1952 Act: "It shall apply to unexpired patents." Damages changed dramatically, as summarized in Halo v. Pulse. Patent owners used to get treble damages automatically, and they don't anymore. Patent holders in 1836 lost that right while claims were pending.

Lisa Ramsey: One argument against cancellation in the Redskins case is takings. 

Jason Rantanen: The Redskins case considers whether the right was valid in the first place, which falls outside of standard takings analysis.

Camilla Hrdy: You may want to consider why the Supreme Court has held a trade secret can be taken. If so, why not a patent?

 

A Free Speech Right to Trademark Protection? – Lisa Ramsey 

The Federal Circuit recently held that the 2(a) bar against registering disparaging trademarks is unconstitutional. Lisa's paper aims to make two unique contributions to literature on disparaging trademarks and the First Amendment:

  1. Is there a right under international treaties to be able to register a disparaging or scandalous trademark? The answer is no.
  2. A framework of six elements that should be applied in deciding whether laws against offensive trademarks run afoul of free speech rights.

The U.S. is not the only country that bans registration of scandalous marks. Canada even bans use. 

We are members of the Paris Convention, which gives signatories discretion to decide whether to deny a registration on the grounds that a mark is contrary to morality or public order.

Lisa's framework (and 2(a) seems to meet most of these conditions):

  1. Is there government action? Who regulates the expression?
  2. Suppression, punishment, or harm: How does the regulation harm expression? Are there unconstitutional conditions imposed on speakers by denying the benefit? Lisa says no, because the benefit being denied is the right to restrict the speech of others.
  3. Expression. What is being regulated?
  4. Is this individual or government speech? Whose expression is regulated?
  5. No categorical exclusion for the expression: Is the regulation justified because of a categorical exclusion, like obscenity or misleading commercial expression?
  6. Does the regulation fail constitutional scrutiny? Is it content-neutral or content-based? That triggers different levels of scrutiny in the U.S.

What could the Court do if it wants to uphold 2(a)? 1) Say it's not suppression or punishment, and the unconstitutional conditions doctrine does not apply, under factor 2. 2) Say it satisfies the scrutiny under factor 6. 3) Make a "traditional contours" argument like in Eldred and Golan. 

Saurabh Vishnubhakat: Pushing on Lisa's state action analysis, if we apply Shelley v. Kraemer broadly (where the Supreme Court refused to allow the enforcement of racially restrictive covenants in court, and which may be limited to its facts), doesn't that suggest everything is potentially state action?

Rebecca Tushnet: If the Court is taking a "hands off" approach to conflicts between trademarks and the First Amendment, then doesn't hands off mean no registration? Isn't that state action?

Lisa: It is state action.

Rebecca: Then isn't everything state action?

Lisa: There are real benefits to registration that impact the First Amendment. Demand letters work better when backed by a registration. And when you have a registration, it's easier to push claims that some see as questionable, like dilution and merchandising claims.

Charles Duan: When it comes to disparaging marks, those have particularly strong expression value - used to express feelings, and therefore even worse to restrict than other registrations.

Lisa: Exactly!

Pam: Is there an international standard?

Lisa: No, as I read the law, each country has discretion to set up the system it prefers.

Posted by Jake Linford on August 11, 2016 at 08:45 PM in Blogging, Civil Procedure, Constitutional thoughts, First Amendment, Information and Technology, Intellectual Property, International Law, Judicial Process, Property, Science | Permalink | Comments (0)

Monday, August 01, 2016

Update on PrEP Access

As a follow-up to my initial post on barriers to accessing pre-exposure prophylaxis (PrEP) as a means of preventing HIV, I wanted to highlight new numbers provided by Gilead, the maker of the only FDA-approved PrEP pill—Truvada. According to Gilead, more than 79,000 people started using Truvada as PrEP in the U.S. during the period of 2012-2015, based on a survey of retail pharmacies (this number may be an underestimate because it does not include certain prescription programs). Recall that the CDC has suggested that over 1.2 million people have indications for PrEP. While the number of people starting PrEP has grown each year, Gilead indicated that those using PrEP are disproportionately white. As discussed, HIV is disproportionately spreading among black people (in 2014, 44% of new diagnoses were among black people, notwithstanding that black people accounted for 12% of the population). This seems to confirm that access to PrEP as a means of preventing HIV, like access to health care more broadly, has been uneven and that efforts to expand access through Medicaid expansion and awareness campaigns need to be strengthened.  

Been great visiting this month!  Thanks to Howard for the opportunity!

Posted by Scott Skinner-Thompson on August 1, 2016 at 11:09 AM in Current Affairs, Gender, Science | Permalink | Comments (0)

Friday, July 01, 2016

Expanding Access to the HIV Prevention Pill, Truvada

Thrilled to be guest blogging with Prawfs this month!

To kick things off, I thought I’d highlight some of the barriers that are preventing widespread access to Truvada, a once-daily pill that can help prevent infection with HIV even if exposed to the virus. Although approved by the FDA for use as pre-exposure prophylaxis (or “PrEP”) in 2012, awareness of Truvada as a tool for preventing the spread of HIV is not universal, and several barriers to uptake exist.

According to the CDC, daily use of PrEP can reduce the risk of getting HIV from sex by over 90%. Importantly, Truvada is not a replacement for condoms and should be used with them (particularly since Truvada doesn’t prevent other STDs). The CDC recommends that those at “substantial risk” of HIV consider taking Truvada. In America, about 1.2 million straight and queer people engage in behavior that puts them at “substantial risk” of HIV, yet the number of people taking Truvada as PrEP is only in the tens of thousands. If taken more widely, PrEP could meaningfully reduce the number of new HIV infections, which has held steady over the past few years at about 50,000 per year. (More than 1.2 million people in the United States are currently living with HIV.)

As outlined in a wonderful new report by Duke Law’s Carolyn McAllaster and the Southern HIV/AIDS Strategy Initiative (SASI), the key barriers to PrEP uptake include lack of awareness, stigma, and cost/access. Of these, I want to draw attention to two key points.

First, as recognized by the White House’s National HIV/AIDS Strategy, HIV stigma remains one of the principal roadblocks to preventing, detecting, and treating HIV. In addition to discouraging PrEP, HIV stigma contributes to drop-off along what is known as the care continuum, where, according to estimates, roughly 86% of those with HIV are diagnosed, only 40% are engaged in care, and only 30% are virally suppressed through use of anti-retrovirals. Unfortunately, certain government policies, such as the FDA’s blood donation deferral policy toward gay and bisexual men and laws that criminalize HIV transmission, stigmatize HIV and push it further into the shadows. But there is also PrEP-specific stigma, with some suggesting that those who use Truvada are promiscuous and irresponsible, when, in reality, taking PrEP is sexually responsible.

Second, as the SASI report notes, while HIV is disproportionately spreading in the South and, there, disproportionately among black women and black men who have sex with men, most Southern states have not adopted Medicaid expansion. Why is this significant? Medicaid and most private insurers will actually help pay for Truvada, which costs about $1,300 a month. But nearly 3 million adults fall in the so-called “coverage gap” between traditional Medicaid and the Affordable Care Act’s insurance subsidies (a gap that Medicaid expansion would cover). And 89% of people in the coverage gap are in the South, the region most in need of HIV prevention tools. As such, without Medicaid expansion, millions of people lack health insurance, including many who may have indications for PrEP.

That’s enough for now, but for those interested in additional steps that can be taken to expand access to PrEP and prevent the spread of HIV, I once again recommend the SASI report!

Posted by Scott Skinner-Thompson on July 1, 2016 at 02:42 PM in Current Affairs, Gender, Science | Permalink | Comments (4)

Monday, April 11, 2016

Reality Checks

Over the last few years, I've taken to writing about emerging tech and criminal law.  As a childhood fan of science fiction, it's fun to get to think about technologies that are similar to those I read about as a kid.  In particular, I have a blast thinking about how the law will or should handle what I predict will be very-near-future technologies.  So, for instance, I've written about algorithms taught through machine learning techniques to identify individuals who are likely to be presently or very recently engaged in criminal activity (e.g., an algorithm that says that that guy on that street corner is probably dealing drugs, or that this on-line sex ad (and whoever posted it) is probably related to human trafficking).

At the time I wrote the piece, there were no algorithms that exactly fit what I describe.  There were computer systems that identified individuals in real-time as they engaged in activities that human operators had already decided correlated to criminal activity, and there was research ongoing using machine learning to identify activities that correlate to criminal activity, but no one had put the two together.  As I saw it (and perhaps it is the sci-fi fan in me), it was just a matter of time before the two came together to create the kinds of algorithms I discuss.
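To make concrete the kind of pairing I have in mind, here is a toy sketch of the second half (weights learned from labeled historical observations) in plain Python. To be clear, this is a minimal illustration under invented assumptions: the feature names, the training data, and the labels are all made up, and nothing like this resolves the hard questions about what such scores should mean.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(examples, labels, lr=0.5, epochs=200):
    """Tiny logistic regression trained by stochastic gradient descent."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Invented features: [loiters_at_corner, brief_hand_contacts_per_hour, repeat_visitors]
train_x = [[1, 6, 4], [1, 5, 3], [0, 0, 1], [0, 1, 0]]
train_y = [1, 1, 0, 0]  # 1 = human reviewers flagged the episode (hypothetical labels)
w, b = train_logistic(train_x, train_y)

# Score a new real-time observation: a probability-like number, not proof of anything.
score = sigmoid(sum(wi * xi for wi, xi in zip(w, [1, 4, 2])) + b)
print(round(score, 2))
```

The point of the sketch is only that the two existing pieces (real-time feature extraction and learned correlations) compose trivially once someone decides to compose them.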

A source of frustration for me when I presented on the topic, then, was that inevitably one of the first questions I'd get would be whether the technologies I discussed really exist.  I'd explain what I just said in the prior paragraph, but nonetheless I'd feel defeated in some sense, like my legitimacy had been undermined.  And I can see many reasons for the questions: curiosity, to understand the technology better through an example, and skepticism about the validity of discussing something that doesn't exist, to name a few. 

But the questions still bothered me.  And they got me thinking:  To what extent should we talk about the legal implications of things that we believe are about to happen, but which haven't happened yet and therefore may never happen?  What is our obligation as scholars to prove that our predictions are correct before engaging in legal analysis?  Is this obligation higher in some areas of law, like criminal procedure, that traditionally have not been consistently forced to adapt to technological developments, and lower in areas of law, like intellectual property, that have?

Posted by Michael Rich on April 11, 2016 at 12:16 PM in Criminal Law, Science | Permalink | Comments (8)

Wednesday, February 24, 2016

I, for one, welcome our new robot Law Lords.

Friends, I've been a terrible guest-blogger so far this month. My apologies. Life (and teaching... mostly teaching) intervened.

But one of the things I'm teaching is an experimental yearlong project-based seminar called the Policy Lab (link is somewhat obsolete), where students spend the first semester learning about an area of legal policy, and the second designing innovations to work on it. And for this first run-through, students have been thinking about legal technology and access to justice. They've learned about things like predictive coding, multijurisdictional tech-driven delivery of legal services, and artificial intelligence, and they've had virtual as well as physical visits from experts and people making waves in the area, including Dan Katz, Jake Heller, Stephen Poor, Tim Hwang, and Craig Cook, as well as more local folks---and now they're working on designing (though not fully implementing) technological tools to provide legal knowledge to nonprofits, as well as policy analyses of, e.g., the ethical implications of such tools. I'm really proud of them.

I'm also a confirmed parking and traffic scofflaw, who once beat a parking ticket with a procedural due process claim, and also once beat a speeding ticket by getting testimony about the laser evidence chucked on the good-old Frye standard (back in grad school, when that standard applied in California). So imagine my delight when I saw this story: "A 19-year-old made a free robot lawyer that has appealed $3 million in parking tickets". A Stanford kid, Joshua Browder, has written a webapp that (as far as I can discern without trying it out or seeing the code) quizzes people about their parking tickets (U.K. only, alas) in natural language, invokes what is sometimes called an expert system to discern a defense for them, then provides an appeal for them to file. Obviously, I have lots of questions and thoughts about this after the fold.
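Without seeing Browder's code, one can only guess at the mechanics, but the "expert system" pattern is easy to sketch: map a user's answers to a candidate defense, then fill an appeal template. Everything below (the questions, the defenses, the wording) is invented for illustration and bears no claimed relation to the actual app.

```python
RULES = [
    # (predicate over answers, defense name, appeal language) -- all hypothetical
    (lambda a: a.get("signage_visible") == "no",
     "inadequate signage",
     "The restriction was not clearly signed at the location."),
    (lambda a: a.get("paid_and_displayed") == "yes",
     "valid payment",
     "A valid ticket was purchased and displayed at the time of the alleged contravention."),
    (lambda a: a.get("loading") == "yes",
     "loading exemption",
     "The vehicle was actively loading, which the restriction permits."),
]

def discern_defense(answers):
    """Return the first defense whose predicate matches the user's answers."""
    for predicate, name, text in RULES:
        if predicate(answers):
            return name, text
    return None, None

def draft_appeal(answers):
    name, text = discern_defense(answers)
    if name is None:
        return "No likely defense identified; consider paying promptly for any discount."
    return f"I wish to appeal this penalty on the ground of {name}. {text}"

print(draft_appeal({"signage_visible": "no"}))
```

The interesting engineering in the real thing is presumably the natural-language front end that fills in the answer dictionary; the rule-matching back end is the old, well-understood part.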

First, is this legal in the U.K.? How do folks feel about the unauthorized practice of law on the other side of the pond? And what about California? On some aggressive interpretations of UPL rules, we might think that the awesome kid is practicing British law in California. As this kind of service, and the services provided by companies like RocketLawyer, LegalZoom, and the like become more customized, and interact with people more like lawyers interact with clients, the UPL questions are going to get harder and harder. The natural language aspect of the parking ticket thing feels to me more like legal practice: you can easily imagine a client trusting an interactive, English-speaking app more than they might trust a more web 1.0 or 2.0 system of drop-down menus and such. Are the regulators going to quash this (especially now that he's looking to expand to New York), or are they going to get out of the way?

Second, to me, this level of legal tech innovation seems like an unmitigated good. Is there anyone scrutinizing the behavior of parking enforcement authorities right now (given that it's far too small-fry in most cases for lawyers), or is the parking ticket system in many cities nothing but taxation by another name, buttressed by the total lack of any real opportunity to challenge them? Browder might look closer to his temporary home, given that San Francisco is kind of notorious for its abusive parking tickets and they've been resisting the use of other automated systems to squeeze out a droplet of due process from the machine. As I've argued previously on this blog, nickel-and-diming people to death with penny-ante law enforcement directed at ordinary day-to-day behaviors is a threat to the kind of ideas underlying the rule of law, and maybe software can fix it where lawyers can't.

Third, to fellow prawfs: as folks like Dan Katz and Oliver Goodenough keep reminding us, this is coming to the rest of the law. Right now, the advances seem mostly to be looming over the discovery process, with stuff like predictive coding threatening to be the second level of the inexorable process of stripping the legal profession of the rents generated by document review (where outsourcing and offshoring were the first), as well as to relatively small-scale stuff like parking tickets, leases, etc. for small players. But as the technology gets more sophisticated, it has the potential to supplement or replace lawyers in more areas of law. (Right now, the most hubristic claims are being made by an early-stage startup called Ross... but what happens if those claims turn out to be even sort-of true?) What can we as law professors do about it?

One option is to get a lot better about teaching our students to be more comfortable with technology, as users as well as creators, even to the point of trying to teach them programming and machine learning. That's a strategy I'm interested in exploring further, but I also have some skepticism about it. It doesn't obviously follow from the danger of technology supplanting lawyers that the lawyers who will be best positioned to survive are those who are capable of operating in both domains. Whether that's true depends on the shape of the ultimate market: will it actually demand people with both legal skill and technological skill (perhaps to translate from one to the other), or will it favor people with pure technological skill plus a handful of really good lawyers to handle the most high-level work? My crystal ball isn't sharp enough to tell me, though I'm encouraging my students to tech up to the extent possible in order to hedge their bets. But what else can we do?

Posted by Paul Gowder on February 24, 2016 at 06:55 PM in Life of Law Schools, Science, Teaching Law, Web/Tech | Permalink | Comments (0)

Tuesday, December 01, 2015

World AIDS Day: Non-disclosure, Criminal Law, and Contracts

Many thanks to Prawfsblawg for hosting me this month!  I look forward to discussing my scholarship and sharing some of my favorite cat videos in the coming weeks.  I thought I'd start, however, on a more sober note:

Today is World AIDS Day, and I wanted to share two recent items about how the law handles--and mishandles--issues of HIV disclosure.  The first is this excellent, yet disturbing, write-up of the trial of Michael Johnson, a black, gay, HIV-positive college wrestler given a 30-year sentence for not disclosing his HIV status to his sexual partners.  Although Johnson maintains that he in fact disclosed his status, the article does a good job connecting his conviction to issues of racism, homophobia, and a widely held (and mistaken) belief that no one would have consensual sex with someone HIV-positive.  Johnson's case highlights an increasingly wide schism between highly punitive non-disclosure laws and today's reality of HIV treatment and prevention.  Current treatments allow HIV-positive people to have a life expectancy roughly comparable to that of the general US population and can reduce viral loads to undetectable, nontransmittable levels.  The best way to prevent the spread of HIV is through testing and treatment, yet criminalizing non-disclosure can deter people from getting tested and taking on the legal obligations that might come with their results.

The other item concerns, perhaps unsurprisingly, Charlie Sheen.  Much has been written about Sheen's potential legal issues in the wake of his HIV disclosure (see, e.g., here, here, and here), but I wanted to focus on one interesting detail.  Sheen reportedly required his sexual partners to sign a non-disclosure agreement, with liquidated damages of $100,000, covering any personal or business information obtained during time spent with him.  The NDA was exclusively leaked to that esteemed repository of legal research, InTouch Weekly.  My initial reaction to the NDA was in line with most others': forcing young women to sign a contract before sex seems sleazy and censorial, designed to insulate potentially humiliating, abusive, or exploitative behavior.  After thinking some more about Sheen's circumstances, however, I came to see things as a bit more complex and perhaps more sympathetic.  As highlighted in the previous paragraph, Sheen's HIV status put him in a rather difficult bind.  If he complied with his legal obligation to disclose his status, he faced the high likelihood that his status would either be sold to the press or used as blackmail (which reportedly it was).  And even though Sheen had an undetectable viral load--and thus posed minimal risk of infection to his partners--he was at the very least arguably under a moral obligation to disclose that risk.  An NDA in these circumstances might thus be a way for Sheen to disclose his status while navigating the unique circumstance of being an HIV-positive celebrity.  This is certainly not meant to beatify Sheen, but it highlights an effort to use contract law to organize intimate affairs in the face of continued fear, stigma, and misinformation about sex and HIV. 

(By the way, aside from the bigger policy issues, Sheen's NDA is chock full of geekery: sexual consideration (see my student note!); arbitration clauses; copyright assignments (more here); and contracting for irreparable harm.) 

In the spirit of World AIDS Day, I hope this post will encourage a few more people to learn about the current state of HIV and AIDS, both in the US and abroad.  Here are a few useful links I've come across in the past few weeks:

The HIV/AIDS pandemic, explained in 9 maps and charts

Things You Should Know Before Discussing Charlie Sheen's HIV Status

Pill to prevent HIV faces critics, stigma

 

Posted by Andrew Gilden on December 1, 2015 at 03:41 PM in Criminal Law, Culture, Intellectual Property, Science | Permalink | Comments (3)

Wednesday, October 07, 2015

EPA Required to Muscle Out Invasive Zebra Mussels - Can it Be Done?

This Monday, as I was preparing to teach my Tuesday Biodiversity seminar, in which we were to discuss invasive species, the Second Circuit issued an important Clean Water Act opinion. For years the EPA had been avoiding the significant challenge of dealing with invasive species routinely dumped into our nation's waters by cargo ships. When the ships load and unload their cargo, it is necessary to balance the weight of the ship by filling or emptying massive tanks of water within the vessel. This water (called ballast water) is typically drawn into the tanks in one location and expelled in another, carrying along numerous stowaway species ready to invade new territory. This practice has introduced many microscopic pathogens, but the poster child is undoubtedly the zebra mussel, which has taken over the Great Lakes ecosystem. In addition to causing ecological harm, zebra mussels have cost hundreds of millions of dollars to the companies whose industrial water pipes they have clogged.

The Clean Water Act makes it unlawful to discharge a pollutant into the nation's waters without a permit. The EPA has no discretion to exempt categories of discharges from this permitting requirement, as the DC Circuit held way back in NRDC v. Costle, 568 F.2d 1369 (D.C. Cir. 1977). More recently, in 2008, the Ninth Circuit struck down the EPA's attempt to exempt ballast water from the CWA requirements, in Northwest Environmental Advocates v. EPA, 537 F.3d 1006 (9th Cir. 2008), a case I had just happened to assign for this week's class. So, I was pleased in more ways than one to see the Second Circuit issue its opinion in NRDC v. EPA just 24 hours before our class met to discuss this very issue. Having failed in its attempt to exempt ballast water entirely from permitting requirements, EPA had generated a lenient Vessel General Permit, which the court this week struck down as a violation of the CWA. The permit failed to be strict enough both as to technological requirements for treating ballast water and as to limits on the invasive species discharged.

While exciting for environmentalists, this ruling will be quite challenging for the shipping industry. Many of the most cutting-edge technologies for killing everything in ballast water tanks are easier to design into new ships than to add by retrofitting older ones. Of course, we have a very serious invasive species problem, so to address it, step one is obviously to stop introducing them. There is no question that this red light is incredibly valuable to the environment. What is less clear, though, is whether we can ever actually accomplish the underlying goal of such regulation, which would be to restore the ecosystem and stop the economic harm. In forcing the EPA to regulate ballast water, the Northwest Environmental Advocates court noted that "[o]nce established, invasive species become almost impossible to remove," in part because they can become so successful absent their natural predators.

So this decision raises the important question of what's next. Assuming we can cut down on the continued delivery of invasive species into our waterways, will we maximize the value of that effort and sacrifice by also working to eradicate the massive population already present? Can we do this?

Posted by Kalyani Robbins on October 7, 2015 at 10:26 PM in Current Affairs, Science | Permalink | Comments (0)

Wednesday, July 15, 2015

"We Begin with the Assumption that Contracts Matter...."

One of my reads this summer, because it's relevant to my piece on "lexical opportunism," has been a fascinating little book by Mitu Gulati (Duke) and Robert Scott (Columbia), The 3 1/2 Minute Transaction: Boilerplate and the Limits of Contract Design (Chicago, 2012). The subject matter is a puzzler: why did sophisticated law firms keep including a particular contract provision (the "pari passu" clause) in sovereign debt agreements when (a) almost nobody could present a credible explanation of its purpose, and (b) a highly publicized case affirmed an interpretation of the clause that threatened to undermine all attempts to restructure sovereign debt?

Let me start with words of praise. This is a good read and good work. Anybody seriously looking at issues in contract theory ought to be reading it. And it's refreshing to read the results of an academic, empirical piece whose authors are so frank about their bemusement and their inability to come up with a satisfying explanatory theory. Professors Gulati and Scott come at the problem from a neoclassical economic perspective, and find that "these hard-nosed Wall Street lawyers told us stories about rituals, talismans, alchemy, the search for the Holy Grail, and Zeus." (5)  It's pretty clear that, 173 pages later, they'd agree the conclusion - sticky boilerplate and herd behavior - is a whimper rather than a bang.

I confess that Ayn Rand's The Fountainhead and Atlas Shrugged were staples of my intellectual youth. I've since come to terms with some of the hokum and inherent contradictions in the philosophy (she hated Kant, and I kind of know why - her response to the limits of reason was to opt for an orthodoxy of logic, including the foundational posits that logic requires), but many of her bon mots come back to me at opportune times.  The apropos quote here is from Francisco d'Anconia to Dagny Taggart: "Contradictions do not exist. Whenever you think that you are facing a contradiction, check your premises. You will find that one of them is wrong."

So.... One of the fundamental puzzles for Gulati and Scott is why sovereigns incur costs on contract design in an effort to lower their cost of capital when economists seem to think that contract design is irrelevant. The bridge from that to their assessment begins as follows: "In any case, as contracts scholars, we begin with the assumption that contracts matter." (23)

That bothers me.  Let's try these variants.  "As philosophers, we begin with the assumption that metaphysics matters." "As human anatomy scholars, we begin with the assumption that appendixes matter." "As physicists, we begin with the assumption that phlogiston matters." What's going on is a demonstration of the subtle ways in which descriptive theory has a normative component, even if the normative element is as basic as something like "this activity should be amenable to explanation by way of theory." If you start with neoclassical welfare-maximizing as the default in human decision-making - i.e., ceteris paribus, that's how the world ought to operate - no wonder it's a puzzle when it doesn't turn out to work that way. (I'm not sure if old Ayn ever got to the part of the Critique of Pure Reason that works through this - it's buried in an Appendix to the Transcendental Dialectic, beginning at pages A643/B671.)

If we check our premises, maybe contracts don't matter.

Posted by Jeff Lipshaw on July 15, 2015 at 07:44 AM in Article Spotlight, Books, Lipshaw, Science | Permalink | Comments (1)

Tuesday, May 05, 2015

Do Students Perform Better When Your Test Is a "Little Bit Harder" to Read?

It's an unusual exam-time question.  But according to this newly released study, the answer is "no."

The question was prompted by a fascinating, well-publicized experiment in 2007 that found that people score higher on tests when the questions are very hard to read. When students took a particular test with a normal font, 90% made at least one mistake on the test. But that proportion dropped to 35% when the font was barely legible.  The experiment received a lot of attention.  It has been cited in over 130 articles, and Nobel Laureate Daniel Kahneman highlighted the findings in his 2011 book, Thinking, Fast and Slow. Malcolm Gladwell similarly emphasized the benefits of tests that are "just a little bit harder to read."

The idea that difficult tasks can "kick our brains into higher gear" is consistent with many ideas in cognitive psychology.  Cognitive psychologists have identified two kinds of decisionmaking processes: intuitive and deliberative.  Intuitive processes, called System I processes, are automatic and quick, encompassing the types of instantaneous judgments that permit a person to immediately size up a situation. Deliberative processes, or System II processes, describe reflective, logical, and self-conscious thinking. See Adam S. Zimmerman, Funding Irrationality, 59 Duke L.J. 1105 (2010).  Kahneman and others have long suggested that deliberative processing can “override” System I processes under certain circumstances--which is why people are less susceptible to cognitive errors or biases when there are opportunities to learn from experience or when they have access to third-party expertise, like lawyers, doctors, or other specialists.  For that reason, regulatory efforts to "de-bias through law" often rely on rules that encourage people to reflect or deliberate more about their choices to improve welfare. 

After surveying results from over 7,000 people, however, Terry Burnham, Shane Frederick, Andrew Meyers, and eight other co-authors appear to have refuted this particular study.  The paper appears in the April 2015 Journal of Experimental Psychology.

Posted by Adam Zimmerman on May 5, 2015 at 02:35 PM in Science | Permalink | Comments (2)

Monday, February 02, 2015

Measles!

First, I am delighted to be back on PrawfsBlawg and want to thank Howard and the team very much for coordinating this.  It’s wonderful to see how what Dan started continues to grow and thrive.

Second, in thinking about how to make best use of my time I’ve decided to focus on public health law--to shed some light on the ever-present conflict between an individual's right to manage her own health and the government (state and federal) ability to interfere.

As everyone knows, we in the United States are in the middle of an outbreak of measles that started when two unvaccinated children who had been exposed to measles visited Disneyland.  My focus will be on legal issues, but let's start with an overview.  As of today, there are 102 cases reported in 14 states; anyone interested in tracking the outbreak can do so here.  Measles is that “worst case scenario” virus that Ebola wasn’t—it is highly contagious, spreads through the air, can live a long time on surfaces, and is infectious well before people feel sick enough to stay at home.  This is a very helpful graphic.  In 2000 measles was “declared eliminated in the United States” because, for an entire calendar year, there had not been a case of one person catching measles from another in the United States.  But measles is nowhere near eliminated globally, and we haven't had a year like 1999 in a long time.  Globally, 400 (mostly) children die of measles every day, 16 every hour.  Unfortunately, “globally” does not, in measles’s case, mean only remote areas of the planet: Europe, India, the Philippines, and Vietnam are all seeing increases in measles cases.

Also, over the past 5 years, an increasing number of people (mostly college students) have caught measles or mumps (or both) without the infector or the infectee leaving their U.S. college campus.

The good news about measles is that there is a highly effective, widely available vaccine that fully protects 97 out of every 100 people vaccinated.  It’s a “threefer” in that the vaccine provides immunity not just from measles but from two other very serious viruses, rubella (German measles) and mumps.

Like most vaccines, however, it can’t be given to infants younger than six months old and, in the absence of an immediate threat, usually isn’t given until a child is twelve months old.  There are also contraindications (more about them later) determining who shouldn’t get the vaccine.  Finally, people on chemotherapy or who have had bone marrow transplants lose whatever immunity they had before.  Without doing the math, that means that at any one time, even if every person in the United States eligible to be vaccinated had been, many people would still be susceptible to infection.  And of course the point of this post on a law site is that far from everyone eligible to be vaccinated has taken advantage of the opportunity.
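For readers who do want to do the math: here is a rough back-of-the-envelope version. Every input below is an assumed round number chosen only to illustrate the structure of the argument (the 97% figure comes from the post; the population and ineligibility shares are my placeholders, not CDC data):

```python
# Illustrative arithmetic only: even with 100% uptake among the eligible,
# a large absolute number of people remain susceptible to measles.
# All inputs are assumed round numbers, not official statistics.

population = 320_000_000        # approximate U.S. population (assumed)
too_young_share = 0.01          # infants too young to vaccinate (assumed)
ineligible_share = 0.01         # contraindications, chemo, transplants (assumed)
vaccine_effectiveness = 0.97    # "97 out of every 100" per the post

eligible = population * (1 - too_young_share - ineligible_share)
vaccine_failures = eligible * (1 - vaccine_effectiveness)  # the unlucky 3%
never_protected = population - eligible                     # can't be vaccinated

susceptible = vaccine_failures + never_protected
print(f"{susceptible / 1e6:.1f} million susceptible even with full uptake")
```

Under these assumptions the susceptible pool is on the order of fifteen million people, which is the point: herd immunity has to carry a lot of weight even before anyone declines vaccination by choice.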

 

The current controversy is a great teachable moment for any law school class considering the balance between the rights of the individual and those of the state.  Over the next month, I will be diving deeper into this area of the law to examine the parameters of state authority under the Tenth Amendment, and then the different aspects of federal power that shape the government's authority to prevent and control outbreaks through public health measures like mandatory vaccination, treatment, quarantine, and isolation.  Spoiler alert: neither sincerely held religious belief nor autonomy to raise one’s children has prevailed against a state’s interest in requiring vaccination for attending public school.

To be continued.

Posted by Jennifer Bard on February 2, 2015 at 03:10 PM in Constitutional thoughts, Current Affairs, First Amendment, International Law, Law and Politics, Religion, Science, Teaching Law | Permalink | Comments (1)

Thursday, January 15, 2015

Chasing the Dragon in the Shadow of the OX

The numbers are in and it is official: deaths from heroin overdoses in much of the United States have doubled in the past two years. Whether the heroin was injected or smoked ("chasing the dragon"), there is some evidence that, in many places, heroin has increased in both availability and purity in the same time period.

How to explain this?

One school of thought -- I'll call it the opiate demand substitutability school of analysis -- tracks the increase in heroin's popularity to the increased difficulty addicts are reported to be having in accessing oxycodone ("OX") in light of state and federal efforts to reduce prescription drug abuse. The street value of OX has increased (at least for the original formulation; the street value of the crush- and snort-resistant formulation has actually gone down), and there is anecdotal evidence from treatment centers for injectable drug users that the migration from OX to heroin is well underway.

Another school of thought -- I'll call it the progression of addiction through the population school of analysis -- is that prescription drug abuse, particularly in the 18-25 age group, is still rampant but the increase in heroin overdose fatalities demonstrates a cohort of aging opiate addicts moving through the progression of addiction, seeking an ever cheaper and more powerful high.  This might explain the high demand for heroin of a purity previously not well known in the United States.

Whichever theory you subscribe to -- and some thoughtful addiction specialists subscribe to both -- the increased death rate from opiate overdose is the data playing out as the back story to our ongoing debate over the wisdom and utility of providing naloxone (the antidote for heroin overdose) for emergency use. Some states have now approved training for, and distribution of, naloxone to first responders and lay people for just this use.

But we are conflicted. Is naloxone a step toward condoning use? If the overdose death rate is lower where heroin is both safe and accessible, is naloxone's arrival just a further expression of our own ambivalence about treatment for addiction?

 

 

Posted by Ann Marie Marciarille on January 15, 2015 at 12:30 AM in Culture, Current Affairs, Science | Permalink | Comments (0)

Monday, January 12, 2015

The Art of Saving a Life

Perhaps you saw the recent New York Times Arts Section review of the vaccination promotion campaign sponsored by the Bill and Melinda Gates Foundation.  The campaign, as part of an international effort to raise funds to inoculate millions, has commissioned artists to interpret the "Vaccines Work"  tag line.

The article was accompanied by reproductions of three of the remarkable commissioned pieces, but it was Alexia Sinclair's tableau of an 18th-century vaccination that caught my eye. A young boy is receiving the inoculation from a bewigged doctor while the mother -- detached and yet attached -- sits apart, looking away from the tableau while also reaching out to reinforce the doctor's acts with an almost yearning reach of her hand. All of them sit in a fine 18th-century sitting room, yet the carpet of grass and blossoms -- we are told of the artist's vision -- was meant to symbolize the virulence of smallpox.  "It brings a fashion-y aesthetic to a virulent disease," the New York Times notes.

Smallpox is not pretty.  But the aesthetic of the Sinclair tableau is not exactly beautiful either; it is more profoundly eerie. I wonder if it doesn't also tap into our modern anxieties about vaccination.  It is, after all, an act of faith to vaccinate, then as now.

If you visit "The Art of Saving a Life" website, you find Alexia Sinclair's tableau titled "Edward Jenner's Smallpox Discovery."   Edward Jenner, sometimes known as the father of immunization, did not discover smallpox vaccination, however.  He was, rather, the first person to confer scientific status on the procedure and to pursue its scientific validation. Vaccinated against smallpox himself as a young boy, he spent some of his prodigious talents attempting to validate the milkmaids' truism that exposure to cowpox meant immunity to smallpox.

Seen from this perspective, eight-year-old James Phipps (Edward Jenner's first human subject) and Sarah Nelms (the milkmaid donor of cowpox for transfer to James Phipps) ought to be in Alexia Sinclair's interpretation of Edward Jenner's smallpox discovery.

 

 

Posted by Ann Marie Marciarille on January 12, 2015 at 01:19 AM in Blogging, Current Affairs, Science | Permalink | Comments (1)

Monday, October 27, 2014

Ebola: A Problem of Poverty rather than Health

Undoubtedly, the death toll in West Africa would be much lower if Guinea, Liberia, and Sierra Leone had better health care systems or if an Ebola vaccine had been developed already. But as Fran Quigley has observed, Ebola is much more a problem of poverty than of health. Ebola has caused so much devastation because it emerged in countries ravaged by civil wars that disrupted economies and ecosystems.

Ultimately, this Ebola epidemic will be contained, and a vaccine will be developed to limit future outbreaks. But there are other lethal viruses in Africa, and more will emerge in the coming years. If we want to protect ourselves against the threat of deadly disease, we need to ensure that the international community builds functioning economies in the countries that lack them.

Our humanitarian impulses in the past have not been strong enough to provide for the needs of the impoverished across the globe. Perhaps now that our self-interest is at stake, we will do more to meet the challenge.

[cross-posted at Bill of Health and Health Law Profs]

Posted by David Orentlicher on October 27, 2014 at 10:09 AM in Current Affairs, Science | Permalink | Comments (0)

Saturday, October 25, 2014

The Ebola "Czar"

In the wake of Craig Spencer’s decision to go bowling in Brooklyn, governors of three major states—Illinois, New Jersey, and New York—have imposed new Ebola quarantine rules that are inconsistent with national public health policy, are not likely to protect Americans from Ebola, and may compromise the response to Ebola in Africa, as health care providers may find it too burdensome to volunteer where they are needed overseas. Don’t we have an Ebola czar who is supposed to ensure that our country has a coherent and coordinated response to the threat from Ebola?

Of course, the term “czar” was poorly chosen precisely because Ron Klain does not have the powers of a czar. He will oversee the federal response to Ebola, but he cannot control the Ebola policies of each state. Unfortunately, on an issue that demands a clear national policy that reflects medical understanding, public anxieties will give us something much less desirable.

[cross posted at Bill of Health and Health Law Profs]

Posted by David Orentlicher on October 25, 2014 at 05:33 PM in Current Affairs, Law and Politics, Science, Travel | Permalink | Comments (1)

Friday, October 17, 2014

Egg Freezing and Women's Decision Making

The announcement by Apple and Facebook that they will cover the costs of egg freezing predictably provoked some controversy—predictably because it involves reproduction and also because too many people do not trust women to make reproductive decisions.

Interestingly, the challenge to women’s autonomy can come from both sides of the political spectrum, as has happened with several assisted reproductive technologies. Scholars on the left criticized surrogate motherhood on the ground that surrogates were exploited by the couple intending to raise the child, and other new reproductive technologies are criticized on the grounds that women will feel obligated to use them rather than free to use them. Indeed, this concern about coercion drives some of the objections to egg freezing.

Some women freeze their eggs because they face infertility from cancer chemotherapy; other women may not have found a life partner and want to suspend their biological clock until that time comes.

But some observers worry that with the option of egg freezing, some women will succumb to the pressures of the workplace and choose egg freezing not because they really want to but because they feel that they have to. After all, if a woman can delay procreation and put in long hours at the office, why shouldn’t she do so? Employers might think that women who forgo egg freezing are not really committed to their jobs.

These concerns are legitimate, but are people too willing to invoke them? Egg freezing is not a simple procedure, nor is its success a certainty. Even if covered by insurance, women are not likely to choose egg freezing lightly. We should worry that egg freezing critics may be too ready to question the decision making capacity of women contemplating their reproductive choices.

[cross-posted at Bill of Health and Health Law Profs]

Posted by David Orentlicher on October 17, 2014 at 02:51 PM in Culture, Current Affairs, Science | Permalink | Comments (4)

Thursday, October 09, 2014

Uterus Transplants?

While controversial among some ethics experts, uterus transplantation has been performed several times, most commonly in Sweden. A few weeks ago, a mother for the first time gave birth to a baby gestated in a transplanted uterus.

Should we worry about uterus transplants? Transplanting life-extending organs, like hearts, livers, lungs and kidneys, has become well-accepted, but observers have raised additional questions about transplantation for life-enhancing body parts like faces and hands. As long as transplant recipients have their new organs, they must take drugs to prevent their immune systems from rejecting the transplanted organs. The risks can be substantial. For example, the immunosuppressive drugs put people at an increased risk of cancer. It is one thing to assume health risks for the possibility of a longer life, but are the risks of being a transplant recipient justified by improvements in the quality of life?

We always should worry about risks from novel treatments, but the risks seem quite tolerable for uterus transplantation. Over time, scientific advances have reduced the side effects from immunosuppression. The risks are not as serious as they used to be. In addition, a transplanted uterus can be removed after childbirth, avoiding the need for long-term immunosuppression that exists with other kinds of transplants. Finally, we generally allow patients to weigh the benefits and risks of medical treatment for themselves. Absent a disproportionate balance between risks and benefits, it is not appropriate for society to usurp health care decision making from patients. Hence, face and hand transplants are becoming more common even though they do not prolong life.

Of course, with uterus transplants, we also have to worry about the risks to the child from the drugs that the mother must take to prevent her body's immune system from rejecting the transplanted uterus. On that score, we have reassuring data. Recipients of kidneys, livers, and other organs take the same immunosuppressive drugs as do recipients of a uterus transplant, and more than 15,000 children have been born to transplant recipients since the 1950s.

Though not definitive, the data are generally reassuring. While children exposed to immunosuppressive drugs during pregnancy are more likely to have a premature birth and low birth weight, they do not appear to be at elevated risk of physical malformations or other serious side effects. Moreover, it is generally difficult to argue that people should not reproduce because of the health risks to their offspring. Procreation is a right of fundamental importance and should be recognized for all persons, even if they may pass a serious disease to their children. Thus, for example, it is acceptable for women to reproduce when they are infected with HIV or carry the gene for a severe inherited disorder.

Can't women rely on gestational surrogacy instead of a uterus transplant? This may work for many women, but not in locales where gestational surrogacy is prohibited. Moreover, the legal battles that can follow gestational surrogacy illustrate the risks of that alternative, as well as the significant role that gestation plays in forming motherhood.

There are many important reasons why women want to bear their own children. Women may want to have children with their chosen partner and without the involvement of third parties (an interest considered in this article). They may want to benefit from the ties with their children that develop during pregnancy. For these and other reasons, we should be careful not to be overly skeptical of uterus transplants. 

[cross-posted at Health Law Profs and orentlicher.tumblr.com]

Posted by David Orentlicher on October 9, 2014 at 11:43 AM in Current Affairs, Gender, Science | Permalink | Comments (2)

Wednesday, October 08, 2014

Too Much Information? GM Food Labeling Mandates

As NPR reported yesterday, voters in Colorado and Oregon will decide next month whether foods with genetically-modified (GM) ingredients should be identified as such with labeling. And why not? More information usually is better, and many people care very much whether they are purchasing GM foods. Moreover, it is common for the government to protect consumers by requiring disclosures of information. Thus, sellers of securities must tell us relevant information about their companies, and sellers of food must tell us relevant information about the nutritional content of their products.

Nevertheless, there often are good reasons to reject state-mandated disclosures of information to consumers. Sometimes, the government requires the provision of inaccurate information, as when states require doctors to tell pregnant women that abortions result in a higher risk of breast cancer or suicide. At other times, the government mandates ideological speech, compelling individuals to promote the state’s viewpoint. Accordingly, the First Amendment should prevent government from requiring the disclosure of false or misleading information or of ideological messages. (For discussion of abortion and compelled speech, see this forthcoming article.)

What about GM labeling?

Is this similar to requiring country-of-origin labeling for meat and produce, a policy upheld by the D.C. Circuit earlier this year? Not quite: GM labeling likely will mislead more than inform. Many people harbor concerns about genetic modification that are not justified by reality. In particular, as the NPR report indicated, researchers have not found any risks to health from eating GM foods. Indeed, genetic modification can promote better health, as when crops are fortified with essential vitamins or other nutrients. For very good reasons, GM foods pervade the food supply, whether produced through traditional forms of breeding or modern laboratory techniques. Thus, the American Association for the Advancement of Science has concluded that GM labeling “can only serve to mislead and falsely alarm consumers.”

[cross-posted at Health Law Profs and orentlicher.tumblr.com]

Posted by David Orentlicher on October 8, 2014 at 12:47 PM in Culture, Current Affairs, First Amendment, Food and Drink, Science | Permalink | Comments (29)

Monday, June 16, 2014

Looks like President O got an early start on that coconut

After the next inauguration, President Obama quipped in a hipster Tumblr interview today, he'll "be on the beach somewhere, drinking out of a coconut . . ."  Maybe sooner than that, as the president proclaims at the beginning of the interview:  "We have enough lawyers, although it's a fine profession.  I can say that because I'm a lawyer."

So "don't go to law school" is the message he wants to get across.  Larger debate, of course.  But let's see what he says right afterward.  Study STEM fields, he insists, in order to get a job after graduation.  STEM study, yes indeed.  But STEM-trained grads often look beyond an early career as a bench scientist, an IT staffer, or a mechanical engineer; that is, STEM-trained young people look to leverage these skills to pursue significant positions in corporate or entrepreneurial settings.  Hence, they look for additional training in business school, in non-science master's programs, and, yes, even in law schools.

Tumblr promises #realtalk, so here is some real talk:  Significant progress in developing innovative projects and bringing inventions to market requires a complement of STEM, business, and legal skills.  These skills are necessary to negotiate and navigate an increasingly complex regulatory environment and to interact with lawyers and C-suite executives as they develop and implement business strategy.  Perhaps there are too many lawyers, but not too many lawyers who are adept at the law-business-technology interface.  "Technology is going to continue to drive innovation," wisely insists President Obama.  But it is not only technology that is this driver; it is also work done by folks with a complement of interdisciplinary skills and ambition.

Posted by Dan Rodriguez on June 16, 2014 at 07:29 PM in Information and Technology, Science, Web/Tech | Permalink | Comments (11)

Monday, June 09, 2014

Decline of Lawyers? Law schools quo vadis?

My Northwestern colleague, John McGinnis, has written a fascinating essay in City Journal on "Machines v. Lawyers."  An essential claim in the article is that the decline of traditional lawyers will impact the business model of law schools -- and, indeed, will put largely out of business those schools that aspire to become junior-varsity Yales, that is, those that don't prepare their students for a marketplace in which machine learning and big data push traditional legal services to the curb and, with them, thousands of newly-minted lawyers.

Bracketing the enormously complex predictions about the restructuring of the legal market in the shadow of Moore's Law and the rise of computational power, let's focus on the connection between these developments and the modern law school.

The matter of what law schools will do raises equally complex -- and intriguing -- questions.  Here is just one:  What sorts of students will be attracted to these new and improved law schools?  Under John's description of our techno-centered future, the answer is this: students who possess an eager appreciation for the prevalence and impact of technology and big data on modern legal practice.  This would presumably include, but not be limited to, students whose pre-law experience gives them solid grounding in quantitative skills.  In addition, these students will have an entrepreneurial cast of mind and, with it, some real-world experience -- ideally, experience in sectors of the economy which are already being impacted by this computational revolution.  Finally, these will be students who have the capacity and resolve to use their legal curriculum (whether in two or three years, depending upon what the future brings) to define the right questions, to make an informed assessment of risk and reward in a world of complex regulatory and structural systems, and, in short, to add value to folks who are looking principally at the business or engineering components of the problem.

Law remains ubiquitous even in a world in which traditional lawyering may be on the wane.  That is, to me, the central paradox of the "machines v. lawyers" dichotomy that John draws.  He makes an interesting, subtle point that one consequence of the impact of machine learning may be a downward pressure on the overall scope of the legal system and a greater commitment to limited government.  However, the relentless movement by entrepreneurs and inventors that has ushered in this brave new big data world has taken place with and in the shadow of government regulation and wide, deep clusters of law.  The patent system is just one example; the limited liability corporation is a second; non-compete clauses in Silicon Valley employment contracts are a third.  And, more broadly, there is the architecture of state and local government and the ways in which it has incentivized local cohorts to develop fruitful networks of innovation, as the literature on agglomeration economics shows (see, e.g., Edward Glaeser and David Schleicher for terrific analyses of this phenomenon).  This is not a paean to big government, to be sure.  It is just to note that the decline of (traditional) lawyers need not bring with it the decline of law, which, ceteris paribus, makes the careful training of new lawyers an essential project.

And this brings me to a small point in John's essay, but one that ought not escape our attention.  He notes the possibilities that may emerge from the shift in focus from training lawyers to training non-lawyers (especially scientists and engineers) in law.  I agree completely and take judicial notice of the developments in American law schools, including my own, to focus on modalities of such training.  John says, almost as an aside, that business schools may prove more adept at such training, given their traditional emphasis on quantitative skills.  I believe that this is overstated both as to business schools (whose curriculum has not, in any profound way, concentrated on computational impacts on the new legal economy) and as to law schools.  Law schools, when rightly configured, will have a comparative advantage at educating students in substantive and procedural law, on the one hand, and in the deployment of legal skills and legal reasoning to identify and solve problems, on the other.  So long as law and legal structures remain ubiquitous and complex, law schools will have an edge in this regard.

Posted by Dan Rodriguez on June 9, 2014 at 10:19 AM in Information and Technology, Life of Law Schools, Science | Permalink | Comments (2)

Thursday, May 01, 2014

Introduction

Hello—and thank you to Dan and PrawfsBlawg for inviting me to guest this month!

My name is Jennifer Bard and I am a Professor at Texas Tech University School of Law where, among other things, I direct our Health Law Program.  I’ve been blogging in the “Profs” family at HealthLawProfs and more recently also at the Harvard Bill of Health.  My research interests include legal & ethical issues in conducting research, the effect of increasing knowledge about the brain on the legal response to criminal conduct, and the intersection between Constitutional Law and the regulation of health care delivery and finance.  Here’s where you can find some things I’ve published.

Over the next month, I look forward to blogging about issues I’ve been thinking about a lot, including the future of legal education (both curricular reform and the substantial challenges posed by the cost of law school and the rapidly changing job market), current issues in higher education, and of course ongoing developments in health law.

My thinking has been shaped a lot by two degrees I got after law school.  The first was a master’s in public health, which gave me the “prevention” model of problem solving.  The big idea in public health is that it’s always easier to prevent a problem than to solve one—but first you need to understand its causes.  The second is a Ph.D. in Higher Education that introduced me to the much larger theoretical and regulatory context in which legal education occurs.

This is a time of significant change in higher education as it faces close scrutiny from consumers and the state and federal governments representing them.   For example, on Monday President Obama issued a report calling for substantial changes to the way universities both prevent and respond to sexual harassment and sexual assault.  Here is the first PSA to come from the White House on this topic.  Although law schools often see themselves as autonomous islands within the larger university, we are all going to see the effects of this and other related campaigns.

Posted by Jennifer Bard on May 1, 2014 at 12:25 PM in Blogging, Constitutional thoughts, Current Affairs, Life of Law Schools, Science, Teaching Law | Permalink | Comments (1)

Tuesday, April 15, 2014

A (Limited) Defense of Saving Players for "Crunch Time"

If you love sports and you’re interested in empirical methodology, the last ten-plus years (call it the Moneyball Era) have been very good indeed. Attention to statistical studies of sports has grown a ton (though of course the movement has much longer roots, dating at least back to Bill James and early sabermetrics in the late 70s).

One of the most interesting parts of this movement has been to do what good research so often does: Take a longstanding belief and show that it’s nothing more than smoke and mirrors. For instance, does icing the kicker work? According to this study, the answer is simple: Nope (not that it’s stopped NFL coaches from doing it, of course).

Consider as well the practice in basketball games of sitting players early on so that they will be available (and not in foul trouble) when it’s late in the game and “crunch time” arrives. As many people, including Richard Thaler, have argued, this strategy is probably counterproductive because you get just as many points for baskets scored early in a game as you do during late-game moments, so that sitting players to save them for late-game heroics probably just means you’re shortening their total on-court minutes to the team’s detriment.

The point of this post is not to propound a full defense of the crunch time strategy. This is because I think it’s basically right that basketball coaches are too cautious with saving players for late-game situations, and would probably do better to just max out their points earlier on even if that meant more players would foul out.

The point of this post, rather, is to point out one reason why the story of the crunch time strategy may be more complicated, and somewhat more compelling, than its critics have let on. I elaborate this point below the fold.

To start with an orthogonal observation, the strategy of sitting basketball (and, for what it's worth, hockey) players periodically throughout a game is not only to save them for crucial late-game moments. It also is a necessity (or at least is very advisable) given the highly intense pace of basketball. If you didn’t give cagers regular breaks, by the end of the fourth quarter (or earlier) even the fittest players would be totally gassed, regardless of whether they were close to disqualification via foul accumulation. 

That point aside, though, consider a reason that the crunch time strategy might not be a total loss. First, the critique of the strategy assumes that players are equally likely to score baskets throughout a game. If they are, then it makes the most sense to just maximize their on-court time, regardless of whether that court time occurs early or late in a game. 

But if players’ likelihood of scoring is not constant, and in particular if players are more likely to score later in games, then saving them for the times when they tend to be more productive may be a good strategy. This sort of discontinuity in scoring aptitude is plausible—indeed, one of the hallmarks of what makes a player great may be their tendency to perform well in late-game high-pressure situations.

A related point is that great players have the dynamic effect of making others around them better, either through abstract qualities like inspiring leadership or more concrete ones like making good passes, setting effective screens, etc. These dynamic effects of a star player on their team could also vary throughout a game, and if they were greater at the end of a game, then reserving a star player’s minutes to allocate them later in a game could make more sense than crunch-time critique acknowledges. 

This is, of course, only a limited defense of the crunch time strategy. This post has sought to add one underappreciated possible reason that sitting players early in a contest in order to save them for later-game moments might make more sense than the prevailing critique of the crunch time strategy lets on. And since, as I observed above, sitting players to some extent at intervals throughout a game is inevitable, it’s not possible to just play your best players until they successively foul out, even if this were the optimal strategy.

So given that it is necessary in basketball to sit players periodically throughout a game, one factor that might help craft the optimal strategy for when to sit players would be their likelihood of performing well later in a game (which is, of course, different than the prevailing wisdom that stars should always be saved for “crunch time”). And it bears noting that even if a given star performed marginally better later in games, that slight advantage might well not be great enough to justify reducing his overall on-court minutes by sitting him out earlier in the game. 

This is, though, as the man says, an empirical question with an empirical answer. Do stars actually perform better later in games? Perhaps it’s true that some stars do while others tend to wilt under the perceived pressure. And why limit the inquiry to star players? It could be that all players' performance varies throughout a game, which could help a coach figure out when it's optimal to put anyone on the court. The broader point is that because most basketball stats are computed per game (points per game, assists per game, etc.), we tend to treat players' performance as constant; that framing may mask discontinuities in when during a game players are at their best, and uncovering patterns in those discontinuities may be a strategically helpful insight.
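The expected-value reasoning above can be sketched numerically. The toy Python script below (every rate is invented for illustration, not drawn from any study or from the post) compares a star's total expected points when a fixed allotment of minutes is played early versus late, first under a constant scoring rate and then under a hypothetical fourth-quarter surge:

```python
# Toy expected-points comparison (illustrative only; all rates invented).
# The post's question: if a star's scoring rate is constant, when the
# minutes occur is irrelevant; if he scores better late, saving him may pay.

GAME_MINUTES = 48
STAR_MINUTES = 36  # the star's fixed allotment of playing time

def constant_rate(minute):
    """Hypothetical star who scores at the same rate all game."""
    return 0.6  # expected points per minute, regardless of `minute`

def late_surge_rate(minute):
    """Hypothetical star who scores better in the fourth quarter."""
    return 0.7 if minute >= 36 else 0.55

def expected_points(rate, minutes_played):
    """Sum per-minute expected scoring over the minutes actually played."""
    return sum(rate(m) for m in minutes_played)

early = range(STAR_MINUTES)                              # first 36 minutes
late = range(GAME_MINUTES - STAR_MINUTES, GAME_MINUTES)  # last 36 minutes

# Constant rate: only total minutes matter, so the two allocations tie.
assert expected_points(constant_rate, early) == expected_points(constant_rate, late)

# Late surge: shifting the same allotment later yields more expected points,
# which is the (empirically testable) premise behind saving stars for crunch time.
print(expected_points(late_surge_rate, early))  # every minute at the lower rate
print(expected_points(late_surge_rate, late))   # 12 minutes at the higher rate
```

The interesting empirical work, of course, is in estimating the rate function itself from play-by-play data; the whole argument turns on whether it is flat or rises late, and by how much relative to the minutes sacrificed.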

Posted by Dave_Fagundes on April 15, 2014 at 11:45 AM in Science, Sports | Permalink | Comments (7)

Tuesday, May 21, 2013

Sperm Donation, Anonymity, and Compensation: An Empirical Legal Study

In the United States, most sperm donations* are anonymous. By contrast, many developed nations require sperm donors to be identified, typically requiring new sperm (and egg) donors to put identifying information into a registry that is made available to a donor-conceived child once they reach the age of 18. Recently, advocates have pressed U.S. states to adopt these registries as well, and state legislatures have indicated openness to the idea.

In a series of prior papers I have explained why I believe the arguments offered by advocates of these registries fail. Nevertheless, I like to think of myself as somewhat open-minded, so in another set of projects I have undertaken to empirically test what might happen if the U.S. adopted such a system. In particular, I wanted to look at the intersection of anonymity and compensation, something that cannot be done in many of these other countries where compensation for sperm and egg donors is prohibited.

Today I posted online (downloadable here) the first published paper from this project, Can You Buy Sperm Donor Identification? An Experiment, co-authored with Travis Coan, and forthcoming in December 2013 in Vol. 10, Issue 4, of the Journal of Empirical Legal Studies.

This study relies on a self-selected convenience sample to experimentally examine the economic implications of adopting a mandatory sperm donor identification regime in the U.S. Our results support the hypothesis that subjects in the treatment (non-anonymity) condition need to be paid significantly more, on average, to donate their sperm. When restricting our attention to only those subjects who would ever actually consider donating sperm, we find that individuals in the control condition are willing to accept an average of $43 to donate, while individuals in the treatment group are willing to accept an average of $74. These estimates suggest that it would cost roughly $31 more per sperm donation, at least in our sample, to require donors to be identified. This price differential roughly corresponds to the difference in donor pay at a major U.S. sperm bank that operates both anonymous and identity-release programs.

We are currently running a companion study on actual U.S. sperm donors and hope soon to expand our research to egg donors, so comments and ideas are very welcome online or offline.

* I will follow the common parlance of using the term "donation" here, while recognizing that the fact that compensation is offered in most cases gives a good reason to think the term is a misnomer.

- I. Glenn Cohen

 

Posted by Ivan Cohen on May 21, 2013 at 01:53 PM in Article Spotlight, Culture, Current Affairs, Peer-Reviewed Journals, Science | Permalink | Comments (5) | TrackBack

Wednesday, May 01, 2013

Sleep No More: Sleep Deprivation, Doctors, and Error or Is Sleep the Next Frontier for Public Health?

How often do you hear your students or friends or colleagues talk about operating on very little sleep for work or family reasons? In my case it is often, and depending on the setting it is sometimes stated as a complaint and sometimes as a brag (the latter especially among my friends who work for large law firms or consulting firms). To sleep 7-8 hours is becoming a “luxury” or perhaps in some eyes a waste – here the adage “I will sleep when I am dead” expresses the view that those who need sleep are “missing out” or are “wusses.” My impression, anecdotal to be sure, is that our sleep patterns are getting worse, not better, and that many of these bad habits (among lawyers) are learned during law school.

One profession that has dealt with these issues at the regulatory level is medicine. In July 2011, the Accreditation Council for Graduate Medical Education (ACGME) – the entity responsible for the accreditation of post-MD medical training programs within the United States – implemented new rules that limit interns to 16 hours of work in a row, but continue to allow 2nd-year and higher resident physicians to work for up to 28 consecutive hours. In a new article with sleep medicine experts Dr. Charles A. Czeisler and Dr. Christopher P. Landrigan that just came out in the Journal of Law, Medicine, and Ethics, we examine how to make these work hour rules actually work.

As we discuss in the introduction to the article 

Over the past decade, a series of studies have found that physicians-in-training who work extended shifts (>16 hours) are at increased risk of experiencing motor vehicle crashes, needlestick injuries, and medical errors. In response to public concerns and a request from Congress, the Institute of Medicine (IOM) conducted an inquiry into the issue and concluded in 2009 that resident physicians should not work for more than 16 consecutive hours without sleep. They further recommended that the Centers for Medicare & Medicaid Services (CMS) and the Joint Commission work with the Accreditation Council for Graduate Medical Education (ACGME) to ensure effective enforcement of new work hour standards. The IOM’s concerns with enforcement stem from well-documented non-compliance with the ACGME’s 2003 work hour rules, and the ACGME’s history of non-enforcement. In a nationwide cohort study, 84% of interns were found to violate the ACGME’s 2003 standards in the year following their introduction.

Whether the ACGME's 2011 work hour limits went too far or did not go far enough has been hotly debated. In this article, we do not seek to re-open the debate about whether these standards get matters exactly right. Instead, we wish to address the issue of effective enforcement. That is, now that new work hour limits have been established, and given that the ACGME has been unable to enforce work hour limits effectively on its own, what is the best way to make sure the new limits are followed in order to reduce harm to residents, patients, and others due to sleep-deprived residents? We focus on three possible national approaches to the problem, one rooted in funding, one rooted in disclosure, and one rooted in tort law. I would love reactions to our proposals in the paper, but wanted to float the more general idea in this space.

Obesity is a good example of something that through concerted action moved from the periphery to safely within the confines of public health thinking and even public health law. Is it time to do the same for sleep? Should we stop valorizing sleeping very little in our society? Should we be thinking about corporate and public policies directed at improving sleep patterns? What might those look like? One thought I have is encouraging telecommuting to reduce commuting time; another is sleep rooms in offices. Of course, on the parenting sleeplessness side, many of the solutions involve social support.  What about what we tell and model for our students? I try to impart to my students that extra hours spent studying well into the night will have diminishing marginal returns, but who knows if that message is imparted. I also worry that with the number of journals, moot courts, clubs, etc., that we encourage our students to join at law school, we are enablers of sleeping too little and perpetuators of the “superman” myth (and I do wonder about the gendered component here)... Real men don’t sleep. And then they perform badly at their jobs and get into car crashes….

- I. Glenn Cohen

Posted by Ivan Cohen on May 1, 2013 at 12:30 PM in Article Spotlight, Corporate, Science, Teaching Law | Permalink | Comments (5) | TrackBack

Wednesday, April 24, 2013

Transplant Tourism: Hard Questions Posed by the International and Illicit Market for Kidneys

The Journal of Law, Medicine, and Ethics has just published an article by me on transplant tourism that discusses the burgeoning international market for buying and selling kidneys. I review the existing data from Pakistan, Bangladesh, and India, and the picture is pretty deplorable. As I show, the vast majority of these sellers are poor and using the money (which is a significant sum in terms of what they earn, even though in the end only 2/3 is paid) to try to buy themselves out of bonded labor, pay off familial debts, or raise a dowry. Many are misinformed or deceived about the health consequences for them and the needs of the person who will receive their kidney. Once they have agreed to sell, they are often pressured not to renege. They are often released from care sooner post-transplant than is optimal, and their self-reported health post-transplant is worse. Many experience significant social stigma as a "kidney man" (or woman), and the 20-inch scar (the more expensive way of doing the procedure would reduce the scar size) marks them for life and makes it difficult for them to marry. Most express significant regret and would advise others not to undertake the operation.

Despite these grave facts, as I argue in the paper (and in greater depth, for many of these arguments, in the chapter on transplant tourism in my new book on medical tourism under contract at Oxford University Press), many of the traditional justifications from the anti-commodification literature -- arguments relating to corruption, crowding out, coercion, and exploitation -- do not make a convincing case in favor of criminalization. If a ban is justified, the strongest arguments are actually about defects in consent and justified paternalism, on the assumption that criminal prohibition is a second-best regulation in the face of the impossibility of a more thoroughly regulated market.

I then examine what means might be used to try to crack down on the market if we concluded we should. I evaluate possibilities including extraterritorial criminalization, professional self-regulation, home country insurance reimbursement reform, international criminal law, and of course better organ retrieval in the patient's home country.

I will keep writing on this topic, including for my new book, so even though this paper is done feel free to email me your thoughts.

Posted by Ivan Cohen on April 24, 2013 at 11:03 AM in Article Spotlight, Criminal Law, Immigration, International Law, Science | Permalink | Comments (1) | TrackBack

Thursday, October 18, 2012

F-Words: Fairness and Freedom in Contract Law

I am participating in an online symposium on Concurring Opinions, where we are discussing Larry Cunningham's fantastic new book, Contracts in the Real World, and where you should check out the rest of the commentary.

As I read "Facing Limits," Larry's chapter on unenforceable bargains, I had to pause and smile at the following line:

People often think that fairness is a court's chief concern, but that is not always true in contract cases (p. 57).

I still remember the first time someone used the word "fair" in Douglas Baird's Contracts class. "Wait, wait," he cried, with an impish grin. "This is Contracts! We can't use 'the f-word' in here!"

Of course, Larry also correctly recognizes the flip side of the coin. If courts are not adjudicating contracts disputes based on what is "fair," we might think that "all contracts are enforced as made," but as Larry points out, "that is not quite right, either" (p. 57).

Pedagogically, Contracts in the Real World is effective due to its pairings of contrasting casebook classics, juxtaposed against relevant modern disputes. In nearly every instance, Larry does an excellent job of matching pairs of cases that present both sides of the argument. I don't mean to damn with faint praise, because I love the project overall, but I feel like Larry may have missed the boat with one pairing of cases.

As I mentioned, the chapter on Facing Limits is in part about the difficulty of balancing fairness, or equitable intuitions, against the freedom of parties to be bound by their agreements. Larry pairs In re Baby M, a case in which New Jersey's highest court invalidated a surrogacy agreement, with Johnson v. Calvert, a case in which the California Supreme Court upheld such an agreement. As I discuss after the break, I'm troubled that the Court in Baby M could be on the wrong side of both fairness and freedom.

Facing Limits on Surrogacy Agreements

In re Baby M was arguably the first case on surrogacy agreements to reach national prominence. The court found unenforceable a surrogacy agreement between William and Elizabeth Stern, who hoped to raise a child that Elizabeth could not bear, and Mary Beth Whitehead, who wanted to give another couple "the gift of life" and agreed to bring William's child, Baby M, to term. Mrs. Whitehead and her then-husband Richard were in tight financial straits, and the surrogacy deal promised $10,000, "on surrender of custody of the child" to the Sterns.

Once she gave birth, Mrs. Whitehead found it difficult to part with the baby girl she called Sara Elizabeth, whom the Sterns planned to name Melissa. To avoid relinquishing the child, the Whiteheads fled to Florida with the baby. When Baby M was returned to the Sterns and everyone made it to court, the trial judge determined that the interests of the baby were best served by granting custody to the Sterns. The Supreme Court of New Jersey agreed with that assessment, but on its way to that conclusion, rejected the validity of the surrogacy contract itself, in which all parties stipulated, prior to the birth of Baby M, that it was in the child's best interest to live with the Sterns.

Unenforceability

The Supreme Court's decision ostensibly turned on the unenforceability of the contract because, even in America, "there are, in a civilized society, some things that money cannot buy" (p. 55). But the decision is full of language suggesting that, in the Court's opinion, Mrs. Whitehead didn't know what she was doing. In the very paragraph in which the Court assumed that she could consent to the contract, the Court marginalized her capacity to consent.

The Court bought into two tropes often trotted out by those who aspire to protect the poor from themselves: the coercive effects of money, and the inability of the poor to fully understand the consequences of their decisions. The Court was troubled that Mrs. Whitehead, "[t]he natural mother," did not "receive the benefit of counseling and guidance to assist her in making a decision that may affect her for a lifetime." The Court was perhaps suspicious she could not. After noting the distressing state of her financial circumstances, the Court posited that "the monetary incentive to sell her child may, depending on her financial circumstances, make her decision less voluntary."

Fairness and Freedom

It strikes me as unfair to conclude that a mother of two is incapable of considering what it might mean to give birth to a third. Holding the surrogate to the bargain can seem unfair at the difficult moment when she hands over the baby, but I struggle to see how it is any less unfair to allow the parents to invest their hearts and energy into planning for a baby that will come, but will not become theirs.

Turning to the question of the coercive effect of money, the problem with paternalistic protections is that they often protect the neediest from the thing they ostensibly need the most. Many interested parties find ways to make money on adoption and surrogacy. It's puzzling, if we are truly serious about protecting the needy, that we would protect them from also acquiring some of the money that we seem to assume they so desperately need.

Here's another way to make the same point: in the wake of Baby M, some states allow surrogacy contracts, and some don't. Hopeful parents who can afford to enter into surrogacy contracts will go to states, like California, where those contracts are enforced. Surrogacy providers who hope to make their money as an intermediary will focus on markets where their contracts will survive judicial scrutiny. Our potential surrogates, however, are more likely to be tied to the jurisdictions in which they reside, at least if the assumptions about poverty in the Baby M opinion are generalizable. So altruistic surrogates will be able to carry a child to term in every state, but those who desire to make a bargain can do so only in those states willing to recognize them. To me, that sounds neither free nor fair. 

Larry takes some comfort in the common law inquiry into the best interests of the child, and with that I take no issue. In a case where the contract and the child's interests are at loggerheads, it seems appropriate in the abstract for the best interests to be a heavy thumb on the scale, or even to trump the prior agreement. I'm just not sure that In re Baby M -- a case where the Court knocked out the contract even though the contract terms and best interests were essentially in line -- is a case where the value of the best interest test is best brought to light.

Cross-posted at Concurring Opinions and ContractsProf Blog.

1 I may have slightly dramatized this exchange, although my classmates assure me I did not invent it from whole cloth.

 

Posted by Jake Linford on October 18, 2012 at 12:50 PM in Books, Current Affairs, Science, Things You Oughta Know if You Teach X | Permalink | Comments (8) | TrackBack

Friday, May 25, 2012

Using empirical methods to analyze the effectiveness of persuasive techniques

Slate Magazine has a story detailing the Obama campaign's embrace of empirical methods to assess the relative effectiveness of political advertisements.

To those familiar with the campaign’s operations, such irregular efforts at paid communication are indicators of an experimental revolution underway at Obama’s Chicago headquarters. They reflect a commitment to using randomized trials, the result of a flowering partnership between Obama’s team and the Analyst Institute, a secret society of Democratic researchers committed to the practice, according to several people with knowledge of the arrangement. ...

The Obama campaign’s “experiment-informed programs”—known as EIP in the lefty tactical circles where they’ve become the vogue in recent years—are designed to track the impact of campaign messages as voters process them in the real world, instead of relying solely on artificial environments like focus groups and surveys. The method combines the two most exciting developments in electioneering practice over the last decade: the use of randomized, controlled experiments able to isolate cause and effect in political activity and the microtargeting statistical models that can calculate the probability a voter will hold a particular view based on hundreds of variables.
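The core technique the article describes is easy to illustrate: randomize who sees an ad, then compare outcomes between the two groups. Here is a minimal sketch; every number in it (the 50% baseline support, the 5-point ad effect, the sample size) is invented for illustration, not drawn from any actual campaign data:

```python
import random

random.seed(0)

# Hypothetical voters: 1 = supports the candidate, 0 = does not.
# Assume (purely for illustration) the ad raises support from 0.50 to 0.55.
def simulate_voter(saw_ad):
    p = 0.55 if saw_ad else 0.50
    return 1 if random.random() < p else 0

# Randomly assign each voter to treatment (sees the ad) or control.
voters = [random.random() < 0.5 for _ in range(100_000)]
outcomes = [simulate_voter(saw_ad) for saw_ad in voters]

treated = [y for y, t in zip(outcomes, voters) if t]
control = [y for y, t in zip(outcomes, voters) if not t]

# Because assignment was random, the difference in means estimates the
# causal effect of the ad, not a mere correlation.
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated effect of the ad: {effect:.3f}")
```

The randomization is what does the work: it makes the treated and control groups alike on average in every respect except exposure to the ad, which is exactly what focus groups and surveys cannot guarantee.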

Curiously, this story comes on the heels of a New York Times op-ed questioning the utility and reliability of social science approaches to policy concerns and a movement in Congress to defund the political science studies program at NSF.

Jeff

 

Posted by Dingo_Pug on May 25, 2012 at 09:13 AM in Current Affairs, Information and Technology, Science | Permalink | Comments (1) | TrackBack

Sunday, February 05, 2012

This Is Not a Place of Honor

Here's a little memento mori for Superb Owl Sunday. It's from the Expert Judgment on Markers to Deter Inadvertent Human Intrusion into the Waste Isolation Pilot Plant, an early-1990s report grappling with the problem of explaining to future generations who are likely not to share our language or culture why they should leave buried nuclear waste alone. This is the group's summary of the non-linguistic messages the site should convey to visitors, and it reads like a post-apocalyptic Lord's Prayer:

This place is a message ... and part of a system of messages ... pay attention to it!

Sending this message was important to us. We considered ourselves to be a powerful culture.

This is not a place of honor ... no highly esteemed deed is commemorated here...nothing valued is here.

What is here was dangerous and repulsive to us. This message is a warning about danger.

The danger is in a particular location ... it increases towards a center ... the center of danger is here ... of a particular size and shape, and below us.

The danger is still present, in your time, as it was in ours.

The danger is to the body, and it can kill.

The form of the danger is an emanation of energy.

The danger is unleashed only if you substantially disturb this place physically. This place is best shunned and left uninhabited.

Posted by James Grimmelmann on February 5, 2012 at 01:17 PM in Science | Permalink | Comments (0)

Monday, November 21, 2011

How Not To Secure the Net

In the wake of credible allegations of hacking of a water utility, including physical damage, attention has turned to software security weaknesses. One might think that we'd want independent experts - call them whistleblowers, busticati, or hackers - out there testing, and reporting, important software bugs. But it turns out that overblown cease-and-desist letters still rule the day for software companies. Fortunately, when software vendor Carrier IQ attempted to misstate IP law to silence security researcher Trevor Eckhart, the EFF took up his cause. But this brings to mind three problems.

First, unfortunately, EFF doesn't scale. We need a larger-scale effort to represent threatened researchers. I've been thinking about how we might accomplish this, and would invite comments on the topic.

Second, IP law's strict liability, significant penalties, and increasing criminalization can create significant chilling effects for valuable security research. This is why Oliver Day and I propose a shield against IP claims for researchers who follow the responsible disclosure model.

Finally, vendors really need to have their general counsel run these efforts past outside counsel who know IP. Carrier IQ's C&D reads like a high school student did some basic Wikipedia research on copyright law and then ran the resulting letter through Google Translate (English to Lawyer). If this is the aptitude that Carrier IQ brings to IP, they'd better not be counting on their IP portfolio for their market cap.

When IP law suppresses valuable research, it demonstrates, in Oliver's words, that lawyers have hacked East Coast Code in a way it was not designed for. Props to EFF for hacking back.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 21, 2011 at 09:33 PM in Corporate, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Science, Web/Tech | Permalink | Comments (2) | TrackBack

Monday, October 17, 2011

The Myth of Cyberterror

UPI's article on cyberterrorism helpfully states the obvious: there's no such thing. This is in sharp contrast to the rhetoric in cybersecurity discussions, which highlights purported threats from terrorists to the power grid, the transportation system, and even the ability to play Space Invaders using the lights of skyscrapers. It's all quite entertaining, except for two problems: 1) perception frequently drives policy, and 2) all of these risks are chimerical. Yes, non-state actors are capable of defacing Web sites and even launching denial-of-service attacks, but that's a far cry from train bombings or shootings in hotels.

The response from some quarters is that, while terrorists do not currently have the capability to execute devastating cyberattacks, they will at some point, and so we should act now. I find this unsatisfying. Law rarely imposes large current costs, such as changing how the Internet's core protocols run, to address remote risks of uncertain (but low) incidence and uncertain magnitude. In 2009, nearly 31,000 people died in highway car crashes, but we don't require people to drive tanks. (And, few people choose to do so, except for Hummer employees.)

Why, then, the continued focus on cyberterror? I think there are four reasons. First, terror is the policy issue of the moment: connecting to it both focuses people's attention and draws funding. Second, we're in an age of rapid and constant technological change, which always produces some level of associated fear. Few of us understand how BGP works, or why its lack of built-in authentication creates risk, and we are afraid of the unknown. Third, terror attacks are like shark attacks. We are afraid of dying in highly gory or horrific fashion, rather than basing our worries on actual incidence of harm (compare our fear of terrorists versus our fear of bad drivers, and then look at the underlying number of fatalities in each category). Lastly, cybersecurity is a battleground not merely for machines but for money. Federal agencies, defense contractors, and software companies all hold a stake in concentrating attention on cyber-risks and offering their wares as a means of remediating them.

So what should we do at this point? For cyberterror, the answer is "nothing," or at least nothing that we wouldn't do anyway. Preventing cyberattacks by terrorists, nation states, and spies all involve the same things, as I argue in Conundrum. But: this approach gets called "naive" with some regularity, so I'd be interested in your take...

Posted by Derek Bambauer on October 17, 2011 at 04:43 PM in Criminal Law, Current Affairs, Information and Technology, International Law, Law and Politics, Science, Web/Tech | Permalink | Comments (7) | TrackBack

Thursday, July 14, 2011

The Space Shuttle's Lying, Derelict Astronaut


As Atlantis is somewhere overhead tracing the last orbits of the Space Shuttle program, I'm thinking about my recent nightstand book, Riding Rockets by former astronaut Mike Mullane. In the autobiography, the three-time mission specialist reveals how the military and NASA tolerated a culture of chronic lying and fraud among its flyer corps. For example, here's how Mullane describes some of his blithe criminal conduct aimed at bolstering his chances in the astronaut-selection process.

In an act of incredible naïveté, the docs at NASA had asked us to hand-carry our medical records from our home bases. ... As the miles passed, I pulled out pages I felt could generate questions I didn't want to answer. In particular I pulled out references to the severe whiplash I had during an ejection from an F-111 fighter-bomber a year earlier. ... I liberated the offending pages from my files, planning to reinsert them on the return flight. I had one very slim chance of getting selected as an astronaut. I wasn't going to let a little thing like a felony get in the way. (p. 2)

I realize that the job of astronaut doesn't have the same need for a moral character requirement as the job of lawyer. But it's such a coveted job, you'd think NASA could insist on a modicum of rectitude. After all, unmanned rockets can put up satellites. Half the reason to send real people up into orbit is to have heroes.

Mullane goes on to talk of how he lied, lied, and lied some more to an interviewing psychiatrist:

What would [the true stories of my childhood] have said about Mike Mullane? ... That I was an out of control risk taker? That I scorned rules? There was no way I was going to reveal that history. So I lied. (p. 23)

And did he turn out to be an out-of-control risk taker who was a liability to the astronaut corps? That's the conclusion I have to draw from the description of the re-entry on his second mission into space, aboard Atlantis for STS-27.

According to the checklist I should have been strapped into the mid-deck seat, but there was nothing to do or see down there, so I had asked [Commander Robert "Hoot" Gibson] if I could hang out on the flight deck and shoot some video of the early part of reentry. I would get into my seat before the Gs got too high. (p. 285)

But he didn't keep his deal with the commander:

The clouds appeared to skim by at science-fiction speeds. The sight was a narcotic and I watched it until my zero-G weakened legs couldn't take my weight any longer and I collapsed to the floor. It was beyond time to get to my seat. I pulled myself to the port-side interdeck-access opening and looked down. Uh-oh. I had waited too long. ... I was stuck on the flight deck, its steel floor now my seat, a situation I didn't altogether regret. (p. 287)

Mullane rode the shuttle back to Earth like this, sitting on the floor of the upper deck and unable to stand up, even though he was the person designated in an emergency to operate the lower-deck escape hatch and deploy the slide pole if the crew needed to bail out. Nice, huh? He exposed the whole crew to elevated risk because he wanted to be able to look out the windows.

I might have thought this kind of nonsense would get Gibson and Mullane into trouble at NASA. It sounds like dereliction of duty to me. And you and I both know that doing the equivalent as a passenger on an airliner would get you arrested by a sky marshal and facing jail time. But apparently there were no repercussions for the astronauts. Mullane flew again into space aboard Atlantis and eventually retired to become a motivational speaker. And Hoot Gibson went on to two more shuttle flights and a post-NASA career as a Southwest Airlines pilot. At Southwest, he presumably insisted that all passengers, including those in the exit row, actually sit in their seats during landing.

All in all, I appreciate what Mullane has done for the historical record by writing his candid book. But reading about Mullane's dubious service has tempered my sadness about the end of the Space Shuttle program. What's more, you know that Mullane is nowhere near the leading edge of deviance in the astronaut corps. Obviously, you'll remember astronaut and convicted felon Lisa Nowak, who drove all night from Houston to Orlando to try to kidnap her romantic rival for the affection of philandering NASA astronaut William Oefelein.

At the end of the day, I am happy to give an increased role to adorable robots that look like Johnny 5 from Short Circuit.


This rover would never tamper with its medical records, and it looks like Ally Sheedy's friend. (Image: NASA/JPL)

Posted by Eric E. Johnson on July 14, 2011 at 05:51 PM in Books, Current Affairs, Science | Permalink | Comments (3) | TrackBack

Tuesday, July 12, 2011

Dwarf Planets and the Law

Astronomers, doing astronomy.
(Photo: IAU / Lars Holm Nielsen)

The practice of astronomy and the practice of law are virtually the same, except that what lawyers do is a lot less silly.

Well, that's the only conclusion you can draw from reflecting upon the work of the International Astronomical Union (IAU). This summer marks the fifth anniversary of the IAU's demotion of Pluto to the status of "dwarf planet." The final determination of that matter was made on August 24, 2006 under Resolutions B5 and B6 of the 26th General Assembly. What amazes me about it is not the outcome of the vote, but the fact that there was a vote at all.

I find it astounding that, as a profession, astronomers have submitted themselves to the jurisdiction of what is essentially a make-believe government. What's really crazy is that not only do astronomers recognize some sort of sovereign authority in the IAU, so do actual lawmakers.

The legislature of New Mexico passed a resolution decreeing that "as Pluto passes overhead through New Mexico's excellent night skies, it be declared a planet."

Are you kidding me? Think about what the New Mexico legislature is saying here: (1) The IAU can legislate scientific fact. (2) The New Mexico legislature can overrule the IAU as to matters of scientific fact. (3) But (incredibly!) only when Pluto is in New Mexico's jurisdiction!!!!!!!!!

Holy freaking cow. Okay, well, I've got to jump off the blog now. I need to finish my grant application. I'm working on a space probe that will beam back to Earth the clearest pictures we've ever had of the rule against perpetuities.

Posted by Eric E. Johnson on July 12, 2011 at 04:36 PM in Science | Permalink | Comments (3) | TrackBack

Sunday, June 26, 2011

"In Defense of Judicial Elections" - author Q&A

In their book “In Defense of Judicial Elections” authors Melinda Gann Hall and Chris Bonneau do just that – they provide a defense of judicial elections. Their work has been somewhat controversial and so I decided to spice up our Prawfs summer by conducting a very brief “E-Interview” with them on the subject. My understanding is that they are generally willing to engage in some ‘give and take’ in the comments section of the blog. This does not necessarily mean that they will answer every question – it’s their call.

JY - Judicial elections have gotten a lot of media attention in recent years, and a number of groups and even former SCOTUS justice Sandra Day O'Connor have voiced their opposition to them. In your book "In Defense of Judicial Elections" you obviously take a different view - please elaborate.

CB - I think the main difference is that our research and analysis begins from a place of agnosticism and we only make conclusions based on the empirical data.  Moreover, our position is subject to being revised in the future if the evidence warrants.  The vast majority of the opponents of judicial elections are not interested in how they actually work.  They aren't interested in empirically verifying their claims.  And, when people dare to question their assumptions (whether it be us or Jim Gibson or Matt Streb or Eric Posner or anyone else), they simply ignore the evidence and shift their argument.  

MGH: The most significant difference between our book and much of the advocacy taking place on this topic is that we rely on empirics rather than outdated normative theories or unsubstantiated assumptions. Elections certainly have limitations, but much of the case against them rests on hyperbolic rhetoric or unverified hypotheses.

JY - Aren't you concerned that some of the less desirable aspects of political elections will influence judicial decision making? Won't powerful interests cast undue influence on case outcomes, given that they might have helped finance a judge’s reelection or might do so in the future?

MGH - Recusal standards and disclosure requirements will go a long way toward remedying this problem. However, there is no reason to expect a quid pro quo relationship between donors and judges. Money tends to support candidates who share a group's interests. There is no evidence at all that judges are "bought." We also should acknowledge that there is no way to remove politics from the judicial selection process. Appointment systems, including the “merit” plan, have their own shortcomings.

CB - No more so than some of the less desirable aspects of appointments will influence such decisionmaking.  This is a point we have made numerous times, but it bears repeating: there is simply no evidence--NONE--of justice being for sale.  Moreover, do we really think that "powerful interests" don't have undue influence on case outcomes at, say, the US Supreme Court?  Of course they do.  At least with elections, voters have a choice and can oust rogue judges.

JY - In recent decades it has become quite clear that judicial elections can be ugly affairs with lots of negative campaigning - doesn't this hurt the judiciary's image, making people see judges less as esteemed decision makers and more as politicians in robes?

MGH - Judges are politicians in robes in some sense, and voters are smart enough to recognize this. Judges have a great deal of discretion, and their values influence what they do. Also keep in mind that state supreme court elections have been heated for decades, with defeat rates that surpass many other elected offices. If competitive elections, or elections at all, harmed judicial legitimacy, there would be obvious evidence of this by now.

CB: This is a great question and it is a legitimate concern.  However, in a series of survey experiments--in KY as well as nationwide--Jim Gibson has found that negative ads and candidates talking about policy have no consequences for legitimacy.  He did find that campaign contributions lead to a loss of legitimacy (this is also true for state legislatures).  But, and this is a crucial point, the net effect of elections is still positive. That is, even with the costs incurred by campaign contributions, judicial elections are legitimacy-ENHANCING institutions.  This is a really important finding and undermines the arguments of folks like Justice O'Connor and Justice at Stake.

 

Posted by Jeff Yates on June 26, 2011 at 08:23 PM in Books, Current Affairs, Judicial Process, Law and Politics, Science | Permalink | Comments (2) | TrackBack

Wednesday, June 22, 2011

Feedback loops - applications?

A recent Wired article "Harnessing the Power of Feedback Loops" tells the story of how such mechanisms can be used in a variety of ways to affect human behavior - to essentially get us to 'do the right thing'. Here's an explanation of how they work from the article:

A feedback loop involves four distinct stages. First comes the data: A behavior must be measured, captured, and stored. This is the evidence stage. Second, the information must be relayed to the individual, not in the raw-data form in which it was captured but in a context that makes it emotionally resonant. This is the relevance stage. But even compelling information is useless if we don’t know what to make of it, so we need a third stage: consequence. The information must illuminate one or more paths ahead. And finally, the fourth stage: action. There must be a clear moment when the individual can recalibrate a behavior, make a choice, and act. Then that action is measured, and the feedback loop can run once more, every action stimulating new behaviors that inch us closer to our goals.
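The four stages map naturally onto a loop in code. Here is a toy sketch modeled on the "your speed" radar sign the article mentions; the speed limit, the starting speed, and the 3-mph adjustment rule are all invented for illustration:

```python
# Toy feedback loop in the article's four stages:
# evidence -> relevance -> consequence -> action, then repeat.

SPEED_LIMIT = 30

def measure(speed):            # 1. Evidence: capture the behavior.
    return speed

def contextualize(speed):      # 2. Relevance: make the data resonate.
    return f"YOUR SPEED: {speed} (limit {SPEED_LIMIT})"

def consequence(speed):        # 3. Consequence: illuminate a path ahead.
    return "slow down" if speed > SPEED_LIMIT else "keep it up"

def act(speed, advice):        # 4. Action: the driver recalibrates.
    return speed - 3 if advice == "slow down" else speed

speed = 42
for _ in range(5):             # the loop runs again after each action
    reading = measure(speed)
    print(contextualize(reading), "->", consequence(reading))
    speed = act(reading, consequence(reading))
```

Run the loop and the simulated driver eases down, a few mph per pass, until the display flips from "slow down" to "keep it up" and the behavior stabilizes at the limit.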

A number of examples are provided, the most prominent being feedback signs that display how fast you're driving next to the posted speed limit. This reminds me of theories of athletic coaching that I've read about - how good coaches use low-key, constant correction to get their players to change their performance in real time (or close to it). Apparently, now is the time for feedback loop devices as a public policy method, as the cost of one of the primary means of providing feedback - sensor technology - continues to fall.

While not all feedback loop applications require sensors, the rise of such technology should perhaps give us pause to consider how such mechanisms might be used in a wide number of settings. For instance, can it be used effectively in teaching (perhaps not too different from coaching)? I occasionally use real-time quizzes via PowerPoint, but I never really thought of it as a feedback loop, although I imagine that there are similarities.

But, what about legal applications? Can we use it for more than just speeding? Will such mechanisms make us more likely to obey the law? Why do they work in the first place? Well, here's what the article said on that point:

So feedback loops work. Why? Why does putting our own data in front of us somehow compel us to act? In part, it’s that feedback taps into something core to the human experience, even to our biological origins. Like any organism, humans are self-regulating creatures, with a multitude of systems working to achieve homeostasis. Evolution itself, after all, is a feedback loop, albeit one so elongated as to be imperceptible by an individual. Feedback loops are how we learn, whether we call it trial and error or course correction. In so many areas of life, we succeed when we have some sense of where we stand and some evaluation of our progress. Indeed, we tend to crave this sort of information; it’s something we viscerally want to know, good or bad. As Stanford’s Bandura put it, “People are proactive, aspiring organisms.” Feedback taps into those aspirations.

With all of this in mind, I invite readers to suggest potential applications :-)

[H/T Tim Ferriss]

Posted by Jeff Yates on June 22, 2011 at 08:43 AM in Article Spotlight, Criminal Law, Culture, Law and Politics, Science, Sports, Teaching Law, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, June 16, 2011

Coming soon to a theatre near you ...

"Moneyball" the movie. The moneyball concept gets a lot of play in the realm of academic hiring and performance analysis. Of course, that gets no play in this movie - but if Brad Pitt plays moneyball general manager Billy Beane, then who is Billy Beane in law, and what actor plays him in Moneylaw, the movie?

 

Posted by Jeff Yates on June 16, 2011 at 09:27 PM in Books, Culture, Film, Games, Life of Law Schools, Science, Sports | Permalink | Comments (3) | TrackBack

Wednesday, June 08, 2011

Fear and culture

My first purpose in this post is to direct readers to a fascinating research endeavor headed by Yale law professor Dan Kahan and George Washington University law school professor Donald Braman - the Cultural Cognition Project. Here is a brief description of the project from its website:

The Cultural Cognition Project is a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. Project members are using the methods of various disciplines -- including social psychology, anthropology, communications, and political science -- to chart the impact of this phenomenon and to identify the mechanisms through which it operates. The Project also has an explicit normative objective: to identify processes of democratic decisionmaking by which society can resolve culturally grounded differences in belief in a manner that is both congenial to persons of diverse cultural outlooks and consistent with sound public policymaking.

I've been doing some reading in recent months on the topics of fear and risk and find the topic very compelling, especially with regard to how it plays out in our day-to-day lives (sometimes on rather mundane matters). My second purpose in this post is to pose to you, dear readers, a quick question: Can you think of any fears that could be described as distinctive to a country you are familiar with? This doesn't mean that the fear occurs only in that country, but rather that it is much more prevalent or pronounced there. Alternatively, we might think in terms of regions within the United States. Sunstein offers the example of European nations taking a much more precautionary approach to genetically modified food than the United States. I'm thinking more along the lines of individual fears - are there things that you have seen people fear greatly in this country that are largely ignored in others? Or vice versa?

Posted by Jeff Yates on June 8, 2011 at 01:27 PM in Culture, Science, Travel | Permalink | Comments (4) | TrackBack

Tuesday, June 07, 2011

Is deliberation overrated?

I'm not saying that deliberation is necessarily overrated, but I'm starting to wonder about its relative value. In recent years I've read a number of books and articles on the decision making processes of groups, such as James Surowiecki's The Wisdom of Crowds and Cass Sunstein's Infotopia: How Many Minds Produce Knowledge, and found them to be very interesting and insightful. Both of these books at least suggest the possibility that group decision making may not always be better with group deliberation.

Of course, to suggest that something is 'overrated' typically implies that it is somewhat highly rated in the first place. When I look around, I see deliberation everywhere - government decisions, academic committee decisions, tenure decisions, where to eat lunch, jury outcomes, Supreme Court outcomes (ok, only to a degree on that one). I think it's fair to say that deliberation is cherished in this country. But is it all that it's cracked up to be? What are its attributes? How do we evaluate its worth (relative to other systems)?

For a bit of class fun last semester, I tried a class exercise that was suggested by one of my readings on this subject.

I divided the class into three groups of equal size: 1) the deliberation group, 2) the secret vote group, and 3) the list vote group. I then held up for the class to see (all had roughly equal views) a glass container of paper clips. They were able to view the container for 30 seconds. I then asked the groups to decide how many paper clips were in the container. The secret ballot group was to do just that - each person would make a guess and write it down in private, and their estimates would be averaged. The list group would use a list - the first person to decide would write their estimate at the top of the list and the estimates would go from there (everyone could see the prior estimates) - and they were averaged. The deliberation group deliberated on the best estimate and used a consensus decision rule on the number of paper clips.

The results? The best estimate was by the secret vote group, followed by the list group, and the worst estimate (by far) was by the deliberation group. Of course, this little exercise is hardly ready for scientific peer review and was done primarily for fun and to introduce the class to varying decision methods. However, given the prevalence of deliberation in our society, might it give us pause to think about whether it's 'overrated'? I'm not sure. Certainly there are other considerations at issue (e.g. how the process makes participants feel). But I thought I'd see what Prawfs readers thought.
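For what it's worth, the first two decision rules are easy to simulate. The sketch below invents its own numbers (true count, number of students, noise level) and uses a crude "pull toward the running average" model of anchoring for the list group; it makes no attempt to model consensus deliberation and proves nothing about the classroom result. It just makes the two mechanisms concrete:

```python
import random

random.seed(1)
TRUE_COUNT = 300          # hypothetical number of paper clips
N = 10                    # hypothetical students per group

def private_guess():
    # Assume unbiased but noisy individual estimates.
    return random.gauss(TRUE_COUNT, 80)

# Secret-vote group: fully independent guesses, then average.
secret = sum(private_guess() for _ in range(N)) / N

# List group: each later guess is pulled halfway toward the running
# average of the earlier, visible guesses (a crude anchoring model).
guesses = []
for _ in range(N):
    g = private_guess()
    if guesses:
        g = 0.5 * g + 0.5 * (sum(guesses) / len(guesses))
    guesses.append(g)
listed = sum(guesses) / N

print(f"secret-vote estimate: {secret:.0f}, list estimate: {listed:.0f}")
```

The structural difference is what matters: the secret-vote average aggregates independent errors, while the list procedure lets early guesses propagate into later ones, so one bad opening guess can drag the whole group.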

Posted by Jeff Yates on June 7, 2011 at 11:58 AM in Criminal Law, Deliberation and voices, Games, Judicial Process, Law and Politics, Legal Theory, Life of Law Schools, Science, Teaching Law | Permalink | Comments (3) | TrackBack

Wednesday, July 21, 2010

Direct to Consumer Genetic Testing - The Need for Early Filtering of Genetic Information

Genetic testing for adult onset diseases used to be mainly a medical service. In most cases, a person with a family history of a certain genetic disease would get tested to see if she carried the relevant mutation. For example, a woman with several cases of breast cancer in her family would test for the breast cancer mutations BRCA1/BRCA2 to see if she carries a mutation and has a high probability of getting the disease. But the proliferation of direct to consumer genetic testing changes the nature of the service to a consumer service. Companies like 23andme and Pathway Genomics (which was planning to start selling its kits in Walgreens) offer consumers the option to buy packages of tests (ranging from 25 to over 100 conditions). Consumers often buy the tests to satisfy their curiosity, or they may even receive them as a gift. People purchasing the testing packages usually do not consult a medical professional when deciding to undergo the tests, and they receive the results on their own by accessing a website.

Yesterday I spoke before the FDA, which is considering regulating direct-to-consumer genetic testing. My presentation was based on a symposium piece I am working on. I argued for the need for a medical professional to guide people throughout the process and advise them not just on the interpretation of the results but also earlier in the process, to determine what genetic information they actually want to have.

Interpreting the results of genetic tests is not easy. Unlike other over-the-counter tests, like a pregnancy test, which gives a clear positive or negative result, genetic tests are about probabilities. Even a person who tests positive for a certain mutation may still not get sick, depending on other non-genetic factors. People have a hard time understanding the results of genetic tests, and for that reason there have been many calls to require the guidance of a medical professional for the delivery of the results.

But I believe focusing on the interpretation of the results is only half the issue. It is important to have professional guidance at the outset as well, to determine what tests to undergo. A medical professional should guide individuals and tailor the panel of tests to the individual who wants to be tested. Why is that? Well, first of all, some people, if they get a chance to give it some thought, may not want to know all their genetic information. For example, a person may prefer not to know that he is likely to get Alzheimer's at a young age. Secondly, not all genetic information is created equal. Some genetic tests do not convey much useful information. For example, a positive result in some tests may demonstrate only a slightly higher likelihood of getting the disease than the probability in the general population. Eliminating such tests at the outset will facilitate the interpretation of the results: it becomes possible to focus on the truly important positive results at the end of the process.
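To make concrete the point that some tests convey little useful information, here is a sketch with entirely hypothetical numbers showing how a strongly predictive mutation and a weakly predictive variant translate into absolute lifetime risk:

```python
# All figures below are invented for illustration; real penetrance and
# relative-risk numbers vary widely by gene, variant, and population.

def absolute_risk(baseline, relative_risk):
    # convert a variant's relative risk into an absolute lifetime risk
    return baseline * relative_risk

baseline = 0.12            # hypothetical lifetime risk in the general population
strong_mutation_rr = 5.0   # a strongly predictive, high-penetrance mutation
weak_variant_rr = 1.15     # a variant with only weak predictive value

strong = absolute_risk(baseline, strong_mutation_rr)  # 0.60: clearly actionable
weak = absolute_risk(baseline, weak_variant_rr)       # 0.138: barely above baseline

print(strong, weak)
```

A professional who filters out the second kind of test at the outset leaves the consumer with results that actually warrant attention.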

To achieve all this, it is important for the law to require the guidance of a medical professional who is not a representative of the genetic testing company. A medical professional working for the genetic testing company may have good knowledge of the tests but could have an interest in having the consumer purchase as many tests as possible. This would place him in a conflict of interest with the consumer, who may be best off purchasing a more limited panel of tests tailored specifically for him.

Posted by Gaia Bernstein on July 21, 2010 at 01:02 PM in Information and Technology, Science | Permalink | Comments (12) | TrackBack

Thursday, July 15, 2010

Why Giving Up on Sperm and Egg Donor Anonymity May Not Be as Good as It Sounds

Use of sperm and egg donations is common practice in the United States. Couples or individuals who have trouble conceiving, or who lack a partner, resort to donor eggs or sperm. The donors are usually young, often college or medical students. Many donate for financial reasons and some for altruistic reasons. But one thing many of them share is the lack of a desire to form future ties with the resulting offspring. Thus, unsurprisingly, a strong norm of anonymity has prevailed. Most children born of sperm or egg donations are not told they were conceived through such a donation. And even more importantly, the donors are guaranteed anonymity, and thereby protection from future contact by the conceived offspring.

But all of this is changing now. Eleven jurisdictions worldwide - including Sweden, Austria, Switzerland, the Netherlands, Norway, the United Kingdom, New Zealand, Finland, and three Australian states - have prohibited donor anonymity. In these jurisdictions a child can find out the identity of the person who donated the egg or sperm that led to his conception. In the United States the anonymity norm still prevails. Yet important voices, including Professor Naomi Cahn in her new book, Test Tube Families, call for adopting the prohibition on anonymity in the United States. The main argument for prohibiting anonymity is the conceived children's need for the genetic information in order to develop their identities.

Advocates of prohibiting donor anonymity realize that the prohibition can deplete sperm and egg resources, as fewer individuals will be willing to donate. But they argue that the effects are only short-term and that in the long term donations are unaffected. I decided to dig deeper into the empirical data in three representative jurisdictions in which anonymity was prohibited: Sweden, Victoria (an Australian state), and the United Kingdom. The overall picture emerging from the empirical data reported in my study revealed a disconcerting scenario of dire shortages in sperm and egg supplies accompanied by long wait-lists. Faced with acute shortages, the fertility industry in these jurisdictions tried to recruit older donors, who tend to be less inhibited by the disclosure requirement. Yet this strategy did not fill the gap for sperm. As for eggs, it could not be effective because eggs donated by younger women are more likely to result in a successful pregnancy. As a result, individuals and couples pained by infertility are increasingly engaging in fertility tourism to countries in which anonymity is not prohibited. I believe that the picture emerging from these three jurisdictions points to the need for great caution in adopting a prohibition on donor anonymity in the United States.

Posted by Gaia Bernstein on July 15, 2010 at 01:33 PM in Information and Technology, Science | Permalink | Comments (9) | TrackBack

Wednesday, May 05, 2010

Leadership, Judgment, and Reduction at the Harvard Business School

This morning's Wall Street Journal reports that the Harvard Business School has named a current professor, Nitin Nohria as its 10th dean.  The article describes Nohria as "a vocal critic of management education and the leaders it produces," and quotes Nohria's recent conference call comment - one that without some unpeeling sounds a little odd:  "I believe that management education has been overly-focused on the principles of management."  But maybe not.  Would it sound so odd to say that legal education has been overly-focused on the principles of law?  Not at all.  But consider the next quote in the article, this from Jeffrey Sonnenfeld at the Yale School of Management:  "Mr. Nohria is someone who's been asking the tough questions. . . .  While there is a lot of soul searching going on, he has been taking the steps to give MBAs judgment as well as knowledge."

To paraphrase Mark Twain, everybody talks about judgment but nobody does anything about it.  It's hard to be both thorough and brief when giving quotable comments to reporters, so I don't knock Dean Sonnenfeld at all, but, obviously, instilling or teaching or demonstrating judgment is a task far more challenging than the mere "giving" of it.

What's intriguing to me is the implicit polarity of (a) over-focus on principles of management and (b) judgment.  Some thoughts on principles (or rules) as reduction, something I've been considering recently, below the break.

I can't recall if I have blogged about this, but I have mentioned it in a couple papers.  My next door neighbor and very good friend, David Haig, is a theorist in evolutionary biology at Harvard.  He is a very smart guy.  His groundbreaking work was in genomic imprinting, and particularly the theory underlying certain outcomes in maternal-fetal conflicts (it's a game theoretic approach that hinges on the selection of the mother's or father's gene, particularly when it's a zero-sum game as to resources as between the mother and the fetus).  The theory has practical impact because it helps explain conditions like preeclampsia in pregnancy.  Every couple weeks, usually late on a weekend afternoon, I open the gate in the fence between the houses, walk up the steps, and David and I share a bottle of wine and solve all the problems of the world (usually his house because he has small children.)  There is one continuing theme:  David thinks science will get us most, if not all, of the answers (eventually), and I am what we have come to refer to as a mysterian (Colin McGinn may have coined it, but I am happy to adopt it.)

Much of our conversation, then, is expressly or implicitly about reduction, and more specifically, epistemic reduction. Reduction is of particular interest to biologists (or philosophers of science interested in biology) because of the seeming loss of explanatory power as one moves from molecular biology to the physics of the atoms and particles that make up the cells. Hence there is a Stanford Encyclopedia of Philosophy entry on Reductionism in Biology, which defines epistemic reductionism as "the idea that the knowledge about one scientific domain (typically about higher level processes) can be reduced to another body of scientific knowledge (typically concerning a lower and more fundamental level)." The Theory of Everything orientation to the world is necessarily reductive, even if not deterministic (hence the hope of somebody like Roger Penrose that consciousness and free will are ultimately explained by quantum physics, and therefore as reducible or irreducible as our ability to understand particles).

I just saw a paper on BEPress from Hanoch Dagan and Roy Kreitner with a pithy quote about legal doctrine: "Langdellian legal science envisioned law as an autonomous discipline governed by three characteristic intellectual moves: classification, induction, and deduction." This strikes me as classically reductive in the sense of isolating from the data what constitutes, in language (the medium of the law), any particular element of any particular legal consequence, whether it be, for example, duty, negligence, mens rea, an investment contract, monopolization, or apparent authority. Focusing on the Kantian idea of judgment as the "faculty of subsuming under rules, i.e., of determining whether something stands under a given rule ... or not," there's really no difference between scientific judgments in biology and scientific judgments in legal doctrine. The real question is whether in adopting a particular rule or theory or model (the skill in judgment that Kant didn't view as something that could be "given" by way of teaching, but could only be practiced) we've adopted one with optimum power for explanatory or predictive purposes in resolving the question at hand.

Hence, when we acquire knowledge, we are necessarily selecting rules that themselves tag only pieces of all the data available to us, and those rules allow us to predict consequences whenever the relevant conditions present themselves.   Right now (but that could change), the selection of particle physics as the model (with its coherent assemblage of rules) is not going to help a biologist explain pollination, notwithstanding all of the explanatory power of particle physics for those operating the Hadron collider.  It strikes me that Nohria is right about business people (and I extend it to lawyers in business):  if we over-focus on the rules and principles of management or law, we've over-reduced, and will necessarily find our judgments to be problematic.

Posted by Jeff Lipshaw on May 5, 2010 at 01:03 PM in Legal Theory, Lipshaw, Science, Teaching Law | Permalink | Comments (0) | TrackBack

Saturday, January 02, 2010

Data collection, the pursuit of knowledge, and intellectual property rights

First, I'd like to thank Dan and the rest of the Prawfs gang for inviting me to guest blog here - it is truly an honor. My posts are usually relegated to a blog with a much smaller following that I run with co-editor Andy Whitford. As Dan noted I am a professor of political science at Binghamton University. However, before I went to graduate school and began my second career as a social scientist, I was an attorney. It is this intersection of law and social science that has always intrigued me and my question for this post has a lot to do with both topics. 

In conducting social science research I perform a lot, and I mean a lot, of data collection and coding. This is a process that is annoyingly both mundane and challenging (at least at times). Since most of you have at least some experience with this process I will not bore you with the challenges and pitfalls of performing quality data collection and coding. What I do want to stress though is that it is work. It requires time, expertise (this varies of course by context), and effort (i.e. it's not fun). Finally, this work adds value. 

In political science there are very strong professional mores to share data with other researchers. In fact, sharing is usually expected immediately after publication of your first article using the data, if not before that time (e.g., after presenting a working paper at a conference). I profess some ignorance of the social mores on data sharing in empirical legal studies, but from my few conversations on this point, I think they might be somewhat different. In political science this norm of sharing your data is usually rationalized along the lines of "don't you want to aid the pursuit of knowledge?" or "surely you support the advancement of science, right?" or a similar call to a higher good. Now, don't get me wrong, I have asked people for data and they have shared it with me (and vice versa) - this process is all fine and good. But isn't this generalized rationalization a bit simplistic? If people begin losing incentives to spend significant time collecting data, then doesn't that inhibit the pursuit of knowledge? It seems that within this norm there is a tacit winner (data analyzers) and an implicit, well, loser-chump (data collectors). Isn't a more nuanced discussion in order? Can't we find some way to satisfactorily compensate/protect data collection efforts? I'd be very interested to hear what intellectual property scholars think about this situation. Okay, a few observations to clarify my discussion here:

- Of course, I am not talking about data collected with the help or aid of a funding entity such as the National Science Foundation - clearly such data should be shared and my understanding is that it is part of the grant agreement. 

- I am also not talking about a situation in which a researcher has access to information that other researchers do not have. The usual situation is publicly available data that has been collected, organized, and put into a spreadsheet - all with some toil and sweat. 

- I am not suggesting that data be kept forever - just that we might think about a protection period, or about establishing guidelines that adequately compensate the investment of time that data collection warrants, and perhaps some uniformity in the process and an enforcement mechanism.

- Re the duty to further the pursuit of knowledge - aren't there competing duties to the entities providing the opportunity to collect the data? I'm wondering: if your university pays you summer money to collect data (or for a research assistant to do so), then doesn't the institution have a proprietary interest in that data, and shouldn't it be compensated when the effort it funded is used? Isn't this what happens when science professors get patents on things they discover or develop on university-funded projects? It is my understanding that the entities producing PACER and TRAC data charge users for their products.

- One defense of requiring "quick" data sharing is that we must do this to make sure that the researcher has competently collected and analyzed the data. Really? I'm pretty sure that a journal editor could require that authors submit data for peer review with an agreement that the person performing the robustness/accuracy testing does not publish with the data or release it - simple enough. 

- Another defense is that data collectors are adequately compensated by the social capital and/or citations (acclaim) that they receive by making their data available to other researchers - curiously, we don't apply this rationale (at least to the same degree) to music, writing, trademarks, or product design.

I'll shut up for now, but I would be interested in hearing other people's thoughts on these matters. As I said before, I've been on both sides of the data sharing situation, so I actually have somewhat mixed feelings on the subject. It just seems to me that this is a very underexplored area of IP, especially given the increase in the importance of data-driven processes in the private sector in recent years. A (very) brief search on Lexis revealed some law review articles on data and IP, but not as many as I had expected.

Jeff

Posted by Lawandcourts.wordpress.com on January 2, 2010 at 02:50 PM in Blogging, Law and Politics, Peer-Reviewed Journals, Science | Permalink | Comments (6) | TrackBack

Thursday, December 10, 2009

Plus, All the Pretty Colors!

I'm sure this is common knowledge for people who are interested in communications law, but for me, this was a useful and straightforward explanation of the science behind bandwidth--how it works and why it's limited--and some of the issues that face the FCC as it allocates the spectrum.

Spectrum

Image: U.S. Frequency Allocation Chart, October 2003, National Telecommunications and Information Administration, available at http://www.ntia.doc.gov/osmhome/allochrt.pdf (click image above)

Posted by Sarah Lawsky on December 10, 2009 at 11:13 AM in Science | Permalink | Comments (1) | TrackBack

Monday, August 24, 2009

The Meaning of Y

Recently, I’ve been thinking quite a bit about Caster Semenya, the 18-year-old world-champion runner from South Africa.  In response to concerns that Semenya is too fast, that her voice is too deep, and that her build is too masculine, track and field’s governing body has arranged tests to ascertain her sex.  One Italian runner, Elisa Cusma, complained: “These kind of people should not run with us. For me, she’s not a woman. She’s a man.”  The concern is not that Semenya set out to fool the governing body.  The concern is that Semenya, who grew up as female, may in fact have sufficient male characteristics to be categorized as male.  As someone who writes about gender and race as social constructs and imperfect proxies, my fascination with the Semenya case goes beyond prurient curiosity.  For me, the controversy brings to mind the racial prerequisite cases from the early 1900s that Ian Haney Lopez has written about, such as United States v. Thind and Ozawa v. United States.  It also brings to mind Ariela Gross’s work on litigating whiteness.  But mostly, the controversy speaks volumes about the meaning of sex and gender.

I have no idea what the outcome will be.  A gynecologist, an endocrinologist, a psychologist, an internal medicine specialist, and an expert on gender have all been asked to examine Semenya and weigh in on the issue.  I can easily imagine a situation in which some tests will suggest Semenya is female, while other tests will suggest she is male.  Indeed, I can easily imagine a wringing of hands, a determination that Semenya is “different” and thus ineligible to compete “as a woman” or “as a man.”  For me, the real issue is not whether Semenya is male or female, but rather our compulsive need to understand sex and gender in binary terms, even when such binary thinking excludes significant segments of the population.  It is similar to the way we need to know whether someone is black or white, straight or gay, liberal or conservative, guilty or innocent, when the reality is often far more complicated.

A NY Times article speculates that whatever the outcome, 18-year old Semenya’s life will be forever changed.  And all of this makes me ask “what if?”  Since the election of President Obama, there has been talk, however premature, of living in a post-racial world.  Clearly a post-racial world seems something we should aspire to.  But how about a post-gender world?  Should that also be on the agenda?  What might it be like to live in a world in which the first question we ask when someone is pregnant is not “boy or girl”?  What might census data collection or Title VII or Title IX or marriage equality look like in such a world?  What might it be like to live in a world without “urinary segregation,” to borrow from Lacan?  Would it be possible to live in such a world?  Would we want to?  And can the law get us there?

Posted by Bennett Capers on August 24, 2009 at 07:32 AM in Gender, Science, Sports | Permalink | Comments (5) | TrackBack

Thursday, April 02, 2009

What is the Future of Empirical Legal Scholarship?

First, thanks to Dan and everyone else at Prawfsblawg for inviting me to post here this month. I'm really looking forward to it.

I found Jonathan Simon's recent posts about the future of empirical legal scholarship quite interesting. On the one hand, as someone in the ELS field, I found his general optimism about its future uplifting. But I'm not sure I share it, although my concern is more internal to the field itself than external. I'm going to be writing about this a lot this month, so I thought I'd use my first post to just lay out my basic concerns.

Jonathan focuses on trends outside of ELS: cultural and attitudinal shifts within the law as a whole, and more global changes in, say, economic conditions. And at one level I think he is right--the decreased cost of empirical work, the PhD-ification of the law, the general quantitative/actuarial turn we have witnessed over the past few decades all suggest ELS is here to stay. But there are deep internal problems not just with ELS, but with the empirical social sciences more generally, that threaten their future. I will just touch on the major points here, and I will return to all of these issues in the days ahead.
To appreciate the problem, it is first necessary to note a major technological revolution that has taken place during the past three decades. The rise of the computer cannot be overstated. Empirical work that I can do in ten minutes sitting at my desk would have been breathtakingly hard to do twenty-five years ago and literally impossible fifty years ago. The advances have been both in terms of hardware (computing power and storage) and software (user-friendly statistics packages). The result is that anyone can do empirical work today. This is not necessarily a good thing.

So what are the problems we face?

1. An explosion in empirical work. More empirical work is, at some level, a good thing: how can we decide what the best policy is without data? But the explosion in output has been matched by an explosion in the variation in quality (partly because user-friendly software allows people with little training to design empirical projects). The good work has never been better, and the bad work has never been worse. It could very well be that average quality has declined. Some of the bad work comes from honest errors, but some of it comes from cynical manipulation.

2. A bad philosophy of science. Social scientists cling to the idea that we are following in the footsteps of Sir Karl Popper, proposing hypotheses and then rejecting them. We are not. We never have. This is clear in any empirical paper: once the analyst calculates the point estimate, he draws implications from it ("My coefficient of 0.4 implies that a 1% increase in rainfall in Mongolia leads to a 0.4% increase in drug arrests in Brooklyn"). This is not falsification, which only allows him to say "I have rejected 0%." Social science theory cannot produce the type of precise numerical hypothesis that falsification demands. We are trying to estimate an effect, which is inductive.
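The distinction can be seen in a toy regression on synthetic data (the true slope, sample size, and noise level here are invented for illustration): the falsificationist move says only "zero lies outside the confidence interval," while the inductive move interprets the estimate itself.

```python
import math
import random

random.seed(0)

# synthetic data with a known true slope of 0.4
n = 500
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.4 * x + random.gauss(0, 1) for x in xs]

# ordinary least squares by hand
mean_x = sum(xs) / n
mean_y = sum(ys) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sxx

# standard error of the slope from the residuals
resid = [y - mean_y - beta * (x - mean_x) for x, y in zip(xs, ys)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

# falsificationist claim: "slope = 0" is rejected if 0 is outside the 95% CI
ci = (beta - 1.96 * se, beta + 1.96 * se)
# inductive claim: the point estimate itself (near 0.4) is what gets interpreted
print(beta, ci)
```

Rejecting zero is a single binary verdict; everything an analyst actually says about magnitude rests on the estimate, which is the inductive step.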

3. Limited tools for dealing with induction. Induction requires an overview of an entire empirical literature. Fields like medicine and epidemiology have started to develop rigorous methods for drawing these types of inferences. As far as I can tell, there has been no work of any sort in this direction in the social sciences, including ELS. This is partly the result of Problem 2: such overviews would be unnecessary were we actually in a falsificationist world, since all it takes is one black swan to refute the hypothesis that all swans are white.

As a result of these three problems, we produce empirical knowledge quite poorly. To reuse a joke I've made before and will likely make again at least a dozen times this month, Newton's Third Law roughly holds: for every empirical finding there is an opposite (though not necessarily equal) finding. 

With more and more studies coming down the pike, and with little to no work being done to figure out how to separate the wheat from the chaff, ELS could defeat itself. If it is possible to find any result in the literature and no way to separate out what is credible and what is not, empirical research becomes effectively useless. (This only exacerbates the problem identified by Don Braman, Dan Kahan and others that people choose the results that align with their prior beliefs rather than adjusting these beliefs in light of new data.)

So what is the solution? Empirical research in the social sciences needs to adopt a more evidence-based approach. We need to develop a clear definition of what constitutes "good" and "bad" methodological design, and we have to create objective guidelines to make these assessments. We have to abolish string cites, especially of the "on the one hand, on the other hand" type, and replace them with rigorous systematic reviews of the literature.
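As a sketch of the quantitative core of such a review, here is a toy fixed-effect meta-analysis that pools several studies' effect estimates by inverse-variance weighting. The study numbers are invented; real systematic reviews also involve search protocols, quality screening, and heterogeneity checks that this omits.

```python
import math

# (effect estimate, standard error) for each hypothetical study
studies = [
    (0.30, 0.10),
    (0.55, 0.25),
    (0.10, 0.15),
    (0.42, 0.08),
]

# weight each study by the inverse of its variance: precise studies count more
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(pooled, pooled_se)
```

Under these numbers the pooled estimate lands near 0.34 with a standard error smaller than any single study's, which is exactly what a systematic review buys over an "on the one hand, on the other hand" string cite.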

Of course, these guidelines and reviews are challenging to develop even for the methodologically straightforward randomized clinical trial that medicine relies on. In the social sciences, which are often forced to use observational data, the challenge will be all the greater. But, as I'll argue later this month, the rewards will be all the greater as well.

The use of systematic reviews is particularly important in the law, for at least two reasons:

1. Inadequate screening. Peer review is no panacea by any means, but it provides a good front line of defense against bad empirical work. We lack that protection in the law. There are some peer reviewed journals, but not many. And the form of peer review that Matt Bodie talked about for law reviews recently isn't enough. The risk of bad work slipping through is great.

The diversity of law school faculties, usually a strength, is here a potential problem. Even theoretical economists have several years of statistics, so everyone who reads an economics journal has the tools to identify a wide range of errors. But many members of law school faculties have little to no statistical training, making it harder for them to know which studies to dismiss as flawed.

2. Growing importance of empirical evidence. Courts rely on empirical evidence more and more. And while disgust with how complex scientific evidence is used in the courtroom has been with us since the 1700s, if not earlier, the problem is only going to grow substantially worse in the years ahead. Neither Daubert nor Frye is capable of handling the evidentiary demands that courts increasingly face.

Given that my goal here was just to touch on what I want to talk about in the weeks to come, I think I'll stop here. This is an issue I've been thinking about for a while now, and I am looking forward to seeing people's thoughts on this.

Posted by John Pfaff on April 2, 2009 at 11:56 AM in Peer-Reviewed Journals, Research Canons, Science | Permalink | Comments (9) | TrackBack