Friday, May 22, 2020
Concluding the Legal Discontinuities Online Symposium
With thanks to all participants and commentators (and Howard and the folks at Prawfsblawg), we can now bring our two-week Legal Discontinuities Online Symposium to a close. If you've been busy with grading or pandemic issues or just life in general, you can find all the posts right here.
As readers will have noticed, issues about lumping/splitting, smoothness/bumpiness, aggregating/disaggregating, and winner-take-all-or-nothing come up throughout the law. While different contexts raise different details, we gain a lot by looking for the heart of the issues across a wide range of legal doctrines, an exploration the legal academy has barely begun, considering its centrality to the law. I believe this symposium has advanced that exploration, and we will do so even more in our collection of papers that will be published under generous open access terms (roughly in January) by the fantastic editors at Theoretical Inquiries in Law, affiliated with the Cegla Center for Interdisciplinary Research of the Law at the Buchmann Faculty of Law, Tel Aviv University. Thanks again!
Posted by Adam Kolber on May 22, 2020 at 08:01 AM in Adam Kolber, Symposium: Legal Discontinuities | Permalink | Comments (1)
Thursday, May 21, 2020
Optimal Categorization
Posted on behalf of Ronen Avraham as part of the Legal Discontinuities Online Symposium.
Fennell's excellent paper deals with the problem of "optimal categorization." Using one of her examples, insurance and the problem of adverse selection, the question is whether, in order to combat adverse selection, insurers should "split" (divide the pool into smaller, more homogeneous risk pools) or "lump" (sell insurance on a group basis).
Two features create the problem of adverse selection: asymmetric information between insurers and insureds, and strategic behavior by insureds (selecting into and out of the pool). These are, as we have learned from Ronald Coase, two types of transaction costs. Put differently, categorization (splitting or lumping) is a solution to the general problem of transaction costs.
Indeed, understanding the type of costs involved in the underlying problem can help in designing solutions. For example, if asymmetric information is a necessary condition for adverse selection to emerge, then disclosure and underwriting might be the necessary solutions: they bridge the asymmetry of information. Indeed, insurance companies and insurance law devote a great deal of effort to bridging the information gap between the parties. Insurers are allowed to ask questions that infringe on our privacy in order to better understand the risk insured. They are later even allowed to deny coverage to insureds who have misrepresented their risk, in order to deter applicants from hiding important information about the risk insured in the first place. If there are cheap ways to close the informational gap (such as imposing duties to disclose information to insurers), then going granular, i.e. splitting, might be more efficient.
Here is another example of why understanding the underlying problem is crucial, this time involving the other feature that creates adverse selection: strategic behavior. If allowing insureds to choose whether to select into or out of the pool is the problem, then restricting choice might be the solution. And restriction comes in many flavors. First, we can make insurance mandatory, as is done in many countries with respect to health insurance. Second, we can make some basic coverage mandatory and then have another layer of coverage that is optional, with no ability to negotiate the terms of coverage on the one hand (less choice) but with some limited opportunity for bridging the information gap on the other, such as allowing insurers to ask about one's age, smoking habits, and the like. This, too, is done in many countries with respect to health insurance. Or, third, we can leave the terms of an even higher layer of coverage entirely open to negotiation between the parties, but with unlimited underwriting, as is done in many countries with respect to private health insurance. The result is health insurance coverage that is lumpy and mandatory at the bottom and granular and free at the top.
So we get a seesaw between accounting for asymmetric information and accounting for strategic behavior. If there is a cheap way to restrict strategic behavior, then we no longer need to worry about the informational gap between the parties: once everyone is required to purchase insurance, we no longer worry about adverse selection and therefore need not have individual underwriting. And vice versa: if we don't want to paternalistically require people to purchase insurance, then we must allow for underwriting.
Obamacare was based on this understanding of the seesaw, adopting a law under which some underwriting is allowed (for example, for those who smoke) along with a soft requirement to purchase insurance (a penalty/tax for those who don't). The Supreme Court understood it and upheld it. The Trump administration did not get it. About a year and a half ago it was able to sneak in legislation abolishing the soft requirement to purchase insurance, and now it must deal with millions of Americans worrying about losing their health insurance due to underwriting.
Go read Fennell's paper.
Posted by Howard Wasserman on May 21, 2020 at 09:31 AM in Symposium: Legal Discontinuities | Permalink | Comments (1)
Wednesday, May 20, 2020
From Severed Spots to Category Cliffs (by Lee Anne Fennell)
Posted on behalf of Lee Anne Fennell as part of the Legal Discontinuities Online Symposium:
The New York-based MSCHF recently acquired L-Isoleucine T-Butyl Ester, one of Damien Hirst’s spotted paintings, and sliced it up into 88 single-spot servings that sold for $480 a pop—more, in total, than the $30,485 purchase price of the painting. They then auctioned off the hole-filled remainder for $261,400. The whole, in this case, was apparently worth less than the sum of its parts (counting the added value of the stunt itself). While MSCHF’s “Severed Spots” project is a very literal example of how slicing up an asset can increase its value, it speaks to an issue that is ubiquitous in law, policy, and everyday life: the lumpy, discontinuous, all-or-nothing nature of many things in the world. Efforts to address such (apparent) indivisibilities underpin many market innovations and are also central to problem-solving in multiple spheres, from public goods to personal goals.
I explored the significance of configuration—whether dividing things up or piecing them together—in my recent book, Slices and Lumps: Division and Aggregation in Law and Life (which you can sample here). But the topic is huge, and the book could only scratch the surface of the many implications for law—an assortment of which received thoughtful attention in a University of Chicago Law Review Online book symposium (and here's my introductory essay). The daily news also contains constant reminders of how much lumpiness—and responses to it—matter to everyday life. Severed spots are an entertaining example, but more serious ones abound, from lumpy work arrangements that exacerbate gendered patterns, to the seemingly lumpy choices that public officials now face about whether—and what—to reopen.
My new paper, Sizing Up Categories, delves into another aspect of lumpiness: the all-or-nothing cliffs that categories generate. Categories break the world into cognizable chunks to simplify the informational environment, flattening within-category differences and heightening between-category distinctions. Because categorization often carries high stakes, it predictably generates strategic jockeying around inclusion and exclusion. These maneuvers can degrade or scramble categories’ informational signals, or set in motion cascades like adverse selection that can unravel markets.
I argue that high categorization costs can be addressed through two opposite strategies—making classification more fine-grained and precise (splitting), and making classifications more encompassing (lumping). Although the former strategy is intuitive, the latter, I suggest, is often more suitable. If category membership carries multiple and offsetting implications, the incentive to manipulate the classification system is dampened. To take a simple example, insurance that covers only one risk is more vulnerable to adverse selection than is an insurance arrangement that covers two inversely correlated risks. Making categories larger, more durable, and more heterogeneous can help to produce such offsets. These and other forms of bundling can arrest damaging instabilities in categorization.
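To make the offsetting-risks intuition concrete, here is a minimal sketch in Python with hypothetical numbers (two types of insureds, two perils, a $100 loss); the figures are my own illustration and do not come from the paper.

```python
# Illustrative sketch (hypothetical numbers): why bundling two inversely
# correlated risks can dampen adverse selection.

LOSS = 100  # loss if a peril occurs
types = {
    "type_A": {"fire": 0.10, "flood": 0.02},   # risky on fire, safe on flood
    "type_B": {"fire": 0.02, "flood": 0.10},   # safe on fire, risky on flood
}

def pool_premium(perils):
    """Actuarially fair premium if the whole pool buys coverage for these perils."""
    expected = [sum(p[peril] for peril in perils) * LOSS for p in types.values()]
    return sum(expected) / len(expected)

# Stand-alone fire insurance priced at the pool average overcharges type_B,
# who tends to drop out -- the classic adverse-selection dynamic.
fire_premium = pool_premium(["fire"])
for name, probs in types.items():
    own_cost = probs["fire"] * LOSS
    print(f"{name}: fire premium {fire_premium:.0f} vs own expected loss {own_cost:.0f}")

# Bundled fire + flood coverage: the offsetting risks make every insured's
# expected loss identical, so no one gains by opting out.
bundle_premium = pool_premium(["fire", "flood"])
for name, probs in types.items():
    own_cost = (probs["fire"] + probs["flood"]) * LOSS
    print(f"{name}: bundle premium {bundle_premium:.0f} vs own expected loss {own_cost:.0f}")
```

In this toy pool, the stand-alone fire premium (6) is three times type_B's expected loss (2), while the bundled premium (12) matches both types' expected losses exactly, removing the incentive to exit.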
Categories present just one context in which it is worthwhile to consider multiple approaches to discontinuities. To return to our starting point, serving up lumpy artwork to a wider audience typically proceeds not by cutting individual works apart but rather by bundling works together and enabling their shared consumption in an art museum. Such bundling does not just provide a more sustained viewing experience, it also allows individuals paying identical entry fees to effectively pay more for what they value more, and less for what they value less—a form of price discrimination found in many contexts. The same principle explains why health insurance that covers more risks, more of the life cycle, or even more people (such as entire families) may stand in for more fine-grained pricing of individual risks where the latter is not feasible or desirable.
Discontinuities are everywhere, but we need not take lumps as we find them. We should look for places where breaking them down—or building them up—will add value.
This post is adapted in part from a draft paper, Sizing Up Categories, to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 20, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (3)
Tuesday, May 19, 2020
Continuity in Morality and Law (by Re'em Segev)
Posted on behalf of Re'em Segev as part of the Legal Discontinuities Online Symposium:
Adam Kolber invites us to consider the following argument: (1) morality is usually continuous in the following sense: a gradual change in one morally significant factor triggers a gradual change in another; (2) the law should usually track morality; (3) therefore, the law should often be continuous (see, for example, here). This argument is motivated, for example, by claims such as these regarding the overall moral status of actions and agents: if a person who employs reasonable force in self-defense should not be punished at all, a person who uses defensive force that is just slightly more than what is proportional should not suffer a serious punishment; and if no compensation is required for harm that results from driving in a way that is reasonable, a driver who caused similar harm while driving in a way that is just slightly unreasonable should not be required to pay millions. In this post, I defend two claims regarding the first premise of this argument: (1) this premise is incompatible with the common view; and (2) this common view is implausible. Thus, Kolber's argument is safe in this regard, but it rests on a minority view. (These claims are based on this paper.)
The first premise is incompatible with the common view that there is an important difference between actions that are right (obligatory or at least permissible) and actions that are wrong, even if this difference is due to a small difference in terms of underlying factors such as the consequences of these actions. For example, the standard version of (maximizing) consequentialism holds that the action whose overall consequences are the best is obligatory, whereas an action whose overall consequences are just slightly less good is wrong. And standard deontological theories claim that this is sometimes the case (when deontological constraints and permissions do not apply or are defeated). The common view takes this stark distinction between right and wrong actions very seriously. One example is the influential objection that standard consequentialism is overly demanding in its insistence that only the action whose consequences are the best is permissible and all other actions are wrong. The extensive debate regarding this question demonstrates that the difference between actions that are permissible and actions that are wrong is commonly considered to be very important. If this view is correct, what seems like a small difference in the moral status of the actions in the self-defense and accident cases is in fact a momentous one: the difference between justified and wrongful defense or between reasonable and unreasonable risk. If this view is correct, very different legal outcomes in these examples are indeed called for, and the first premise of the continuity argument is false.
However, my second claim is that this common view is indefensible and that scalar accounts of morality are more plausible. Consider, for example, the difference between standard consequentialism and scalar consequentialism. Both rank states of affairs from best to worst. They differ regarding the deontic implications of this evaluation. Standard consequentialism adds that the best action is obligatory and every other action, including a very close second best, is wrong, while scalar consequentialism does not classify actions as obligatory, permissible, or wrong. (Satisficing consequentialism is similar in this respect to maximizing consequentialism, since it too distinguishes between right and wrong actions, merely in a different way: those that are good enough and those that are not.) It seems to me that the scalar view is more plausible: it reflects all the morally significant facts, and only these facts, by ranking actions from best to worst while noting the degree to which each action is better or worse than every alternative, and accordingly the force of reasons for and against every alternative action, compared to all other possible actions. In contrast, the standard distinction between right and wrong actions assigns weight to facts that are insignificant, or too much weight to trivial differences. Consider, first, the distinction between the best action and actions that are very similar to it in terms of all the underlying moral factors. For example, while scalar consequentialism grades the best action as perfect (A+, or 100%) and the second best action (whose consequences are just slightly less good) as almost perfect (A, or 98%, for instance), the standard view describes the latter action as wrong (F!), although it is almost perfect in terms of all of the underlying factors. At the other end of the spectrum, standard consequentialism classifies all actions that are not the best together, as wrong, although there are often huge differences between them: the second best option may be almost perfect whereas the worst option may be awful. Indeed, the latter difference, between the second best action and the worst action, is typically much more significant than the former difference, between the best action and the second best action. Since there are typically numerous alternative actions, and there are substantial differences between many of them, the scalar version evaluates common actions, which are typically far from both the best and the worst options, much more accurately than the standard version. Consider, for example, how much money a certain well-off person should give each month to the (most deserving) poor. Assume that giving US$1000 would have the best consequences, that giving US$990 would have consequences that are almost as good, and that giving nothing would have consequences that are very bad. The standard view considers giving US$1000 obligatory and giving either US$990 or nothing as wrong. The scalar view, by contrast, registers that giving US$990 is almost as good as the best option and far better than giving nothing.
One objection to the scalar view is that a moral theory should include not only evaluative components but also deontic components, and specifically identify what should, and what should not, be done, and in this way provide guidance to people. This objection thus considers the best action as qualitatively, and not only quantitatively, different from the second best action, even if they are very close in terms of the value of the underlying factors, since the best action is the one that should be performed. At the other end of the spectrum, this objection insists that some actions should be classified as wrong, for example, torturing people for fun. However, it seems to me that the scalar view reflects all the morally significant information (and provides proper guidance) in these respects too. It says, first, which action is the best and accordingly which action there is most reason to do. It does not classify this action as obligatory, but that classification adds no morally significant fact. The scalar view also notes, regarding each action, whether it is better or worse relative to every other action, and by how much. Accordingly, it entails reasons for and against each action, compared to every alternative action, and points out the force of such reasons: the reason to prefer one action over an alternative may be (much or just slightly) stronger, or weaker, than the reason to prefer the former action to a third alternative, for example. In contrast, the standard version adds propositions that are either redundant or mistaken, depending on how terms such as "right" and "wrong" are understood. Classifying actions based on such terms, in a way that goes beyond the information provided by the scalar view, is not only redundant but also arbitrary and misleading, since it implies that such additional information exists.
A final clarification: the above controversy concerns the theoretical question of what is the accurate way of depicting the relevant part of morality, not the practical question of what is the most useful or natural way of talking. It may well be more useful sometimes to use terms such as "duty" and "wrong," for example, when this convinces people (who are not perfect) to act in better ways. But this is irrelevant to the present discussion. (Therefore, to the extent that the above objection is concerned with guidance in this practical sense, it is irrelevant.) Similarly, it may simply be more natural to depict certain actions as, simply, right or wrong (rather than, for example, the best or very bad), especially since often only a few options are salient (most options are not considered seriously or even noticed) and it is sometimes clear that one of the salient options is much better, or worse, than the others. In such cases, it may be less cumbersome to describe the relevant actions as "obligatory" or "wrong," rather than as better or worse, to various degrees, compared to all other alternatives. But this too does not affect the conclusion that the latter option is the more accurate one (considering, for example, the fact that with regard to most actions that are naturally described as wrong, there are even worse alternative actions).
This post is adapted from a draft paper, Continuity in Morality and Law, to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 19, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (6)
Monday, May 18, 2020
Line Drawing in the Dark (by Adam Kolber)
Posted as part of the Legal Discontinuities Online Symposium:
Suppose one hundred women line up by height, and you must decide exactly where along the line the women are “tall.” Aside from the familiar (sorites) problem of distinguishing between women very close in height, there is also a problem of meaning. You might very well ask: How tall? Tall for what purpose? To reach the top shelf of some particular closet? To play professional basketball? Absent information about the purpose of the cutoff and what it signifies, it is difficult to draw a meaningful line. When we draw lines across spectra with little information to guide us, I call the creation of such cutoffs “line drawing in the dark.”
Turning to law, many jurisdictions follow the Model Penal Code in recognizing a spectrum of recklessness that can make an instance of homicide either manslaughter or murder. At trials where a defendant’s conduct could plausibly constitute either manslaughter or murder, it will usually be the jury’s job to draw the line between the two. For example, jurors will be asked to decide whether a driver murdered a pedestrian by driving “recklessly under circumstances manifesting extreme indifference to the value of human life” or whether the driving did not manifest such extreme indifference such that the defendant should be convicted at most of manslaughter.
Of course, the line between these two kinds of homicide isn’t carved by nature. Holding all else constant, the appropriate amount of punishment seems to increase smoothly as a defendant’s mental state becomes increasingly reckless (or, if you prefer, as evidence of that recklessness increases). For example, one might gradually increase punishment to reflect greater culpability or need for deterrence. To decide between manslaughter and murder, we must draw a line at some point and call certain reckless homicides “manslaughter” and others “murder.”
Many courts recognize that manslaughter and murder can exist along a spectrum of recklessness. Telling us to draw the line where recklessness represents “extreme indifference to the value of human life” reveals little about where along the spectrum the cutoff is located. Some conduct will be reckless in ways that manifest a little, a good bit, or even a lot of indifference to the value of human life before creeping right up to the line where extreme indifference is manifested. The language of “extreme indifference to the value of human life” adds little shared meaning, other than establishing that a spectrum exists.
According to one appellate court in Washington state, “extreme indifference” “need[s] no further definition.” (State v. Barstad, 93 Wash. App. 553, 567 (Wash. Ct. App. 1999).) According to the court, “the particular facts of each case are what illustrate its meaning,” and “[t]here is no need for further definition.” This view gets matters backwards. If jurors are supposed to apply facts to law, they need to know something about where the law draws lines. Jurors are not supposed to both evaluate facts and determine where the law should draw the line—particularly when they are given too little information to decide.
If recklessness came in clearly defined units, the law could specify precise places along a spectrum (call them “flagpoles”) where legal consequences change. Absent flagpoles, however, it’s not clear how jurors are supposed to complete their task. Recall the challenge of determining where one hundred women in height order switch from “non-tall” to “tall.” Some might group the tallest 10% into the “tall” category, while others might group the tallest 40%. There’s simply no meaningful way to draw a line along a spectrum without additional information. Are we using “tall” to mean “WNBA tall” or “taller than average” or “likely to make people say, ‘Gee, she’s tall’”?
There will be easy cases of “tall” for just about any purpose, just as there will be easy cases of murder or manslaughter. But for a wide range of cases, especially those likely to proceed to trial, we are asking jurors to locate a cutoff without meaningful information about how to do so. This is the sense in which we ask jurors to engage in line drawing in the dark. It’s not just that the task we give jurors is difficult, as it often will be. The manslaughter-murder cutoff seems essentially impossible to get right in any principled way because we withhold the information required to promote retribution, deterrence, prevention, or whatever one takes the criminal law’s goals to be.
We could try to add meaning through sentencing information. If jurors at least knew the sentencing implications of their decisions, they could decide whether the conduct at issue warrants one or another sentencing range. Perhaps jurors could draw meaningful distinctions if we said, for example, that manslayers in this jurisdiction receive sentences of zero to ten years and murderers receive sentences of eleven years to life. They might assess whether the defendant’s culpability (or dangerousness or some combination of factors) warrants a sentence greater or less than ten years and then select a conviction accordingly. Yet this is precisely the sort of information we have but ordinarily hide from jurors.
Line drawing in the dark can also occur when courts rely on precedents from other jurisdictions. Suppose a judge in State A lacks a clear precedent as to whether the case at bar presents sufficient evidence to constitute an extremely reckless murder as opposed to just reckless manslaughter. The judge might turn to precedent in State B to help decide, implicitly assuming that words like “murder” and “manslaughter” have the same or similar meaning across jurisdictions. But while they are rooted in a shared common law tradition, the tremendous variation in sentencing practices across U.S. jurisdictions casts doubt on the view that every jurisdiction means the same thing by “murder” and “manslaughter” even when they use the same statutory language to describe them.
Assume murderers in State A receive sentences of 11 years to life while manslayers in State A receive sentences of less than 11 years. In State B, by contrast, the division between manslayers and murderers is at the 15-year mark. Murder and manslaughter seem to mean somewhat different things in State A and State B. We cannot accurately compare the two offenses, particularly in cases that fall near the border of murder and manslaughter, without considering sentencing consequences. Homicide warranting ten years’ incarceration happens to be called manslaughter in State A and murder in State B.
Cross-jurisdictional comparison cases will only rarely be substantively analogous and also have sufficiently similar sentencing schemes to offer meaningful comparisons. I gave examples where the sentences for offenses along a spectrum do not overlap and have a clear boundary between them. In reality, such sentences (including those for murder and manslaughter) will often overlap to varying degrees and require more complicated analysis. Moreover, even when sentences appear the same in name, the jurisdictions will likely have prison systems with different levels of severity and different collateral consequences upon release. Even if two jurisdictions used absolutely identical sentencing regimes, they could vary in their relative punitiveness, meaning that we cannot assume they draw the same lines between murder and manslaughter simply because they punish them with the same prison terms. Taken together, these and related concerns cast doubt on the possibility of ever meaningfully comparing criminal cases across jurisdictions. In my draft paper, I argue that line drawing in the dark occurs in many places throughout the law, afflicting judges, juries, lawyers, and scholars.
This post is adapted from a draft paper, Line Drawing in the Dark, to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 18, 2020 at 08:04 AM in Adam Kolber, Symposium: Legal Discontinuities | Permalink | Comments (5)
Friday, May 15, 2020
Changing Places, Changing Taxes: Exploiting Tax Discontinuities (by Julie Roin)
Posted on behalf of Julie Roin as part of the Legal Discontinuities Online Symposium:
President Trump’s decision to move his official state of residence from high-tax New York to no-income-tax Florida has brought public attention to an issue that has long troubled scholars, as well as designers and administrators of income tax systems: how the interaction of tax rules deferring the taxation of income and tax rules based on residency allows taxpayers to reduce and even avoid taxation of their deferred income. These discontinuities in tax treatment may lead to excessive migration, as well as reductions in state income tax revenues.
Although trans-national moves of this sort are increasingly treated as “realization events” for tax purposes, triggering the immediate taxation of accrued but untaxed gains in the taxpayer’s country of original residence, the states of the United States have not tried to impose similar rules on residents moving to other states. This reluctance may stem in part from concerns that any attempt to do so would be struck down as a violation of the federal constitution’s Commerce Clause or its right to travel. But it may also stem from the fact that such a forced realization rule creates a different discontinuity: a tax rule accelerating the taxation of accrued gain penalizes interstate movers (relative to those who stay put and continue to benefit from tax deferrals), disincentivizing such moves. Instead of too much interstate migration, there may be too little, interfering with both economic efficiency and what could be a valuable feedback mechanism about the performance of state governments. The same discontinuity problem arises in the international realm, of course, but there the difference in institutional structures and political sensibilities—not to mention larger revenue concerns due to higher tax rates—has led to a different policy outcome.
My paper analyzes legal mechanisms and rules that might reduce both the positive incentives and the negative disincentives for interstate moves. A general move to mark-to-market taxation would eliminate the problem in its entirety. However, the practical and political obstacles to the uniform adoption of mark-to-market taxation for state tax purposes are significant; indeed, it is hard to see how this might develop without the federal government adopting mark-to-market treatment for federal tax purposes. And a move toward mark-to-market taxation by some states and not others would create a new set of discontinuities. The paper also analyzes the possibility of enhanced source taxation of nonresidents and of expanded taxation of part-year residents, only to encounter similar problems.
Ultimately, the paper concludes that this problem of discontinuous treatment is easy to identify but impossible to solve in a world in which state tax authorities rely on federal tax authorities for performing many of the hardest tasks involved in tax administration, while retaining considerable freedom to devise their own tax base definitions and set their own tax rates. There is a tax price to be paid for allowing states to be laboratories of democracy, catering to the heterogeneous desires of their populations.
This post is adapted from a draft paper to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 15, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (0)
Thursday, May 14, 2020
Should Trial Outcome Be Based on the Median or the Mean? (by Omer Pelled)
Posted on behalf of Omer Pelled as part of the Legal Discontinuities Online Symposium:
Factual uncertainty is a frequent problem in legal disputes. Whenever the parties disagree about the facts, the trier of fact, be it a jury or a judge, must examine evidence and infer the facts. Commonly, the evidence provides limited information, making it impossible to determine the relevant facts with certainty. Thus, based on the evidence, the factfinder might consider several alternative factual states, each of which can be associated with a different likelihood.
In civil disputes we usually think that factfinders are required, under the preponderance of the evidence rule, to adopt the most likely factual state and ignore the rest. In statistics, this rule is equivalent to choosing the median value to describe the center of a distribution. Interestingly, when confronted with actual statistical data, for example when estimating the lost income of an injured child, courts adopt another central value: the weighted mean. These two options, the median and the mean, can be applied to any factual dispute. For example, in a tort case, if the jury decides that the probability that the defendant was negligent is 40%, awarding zero damages is equivalent to choosing the median, and awarding damages equal to 40% of the harm is equivalent to awarding the weighted mean.
The choice between the median and the mean is not limited to civil disputes. In criminal law, for example, when the punishment is determined by a three-judge panel, the law states that the punishment is determined by the median judge. For example, if two judges support a punishment of one year of imprisonment, and the third thinks that the defendant should be imprisoned for four years, the punishment would be one year of imprisonment (the median) and not two years (the weighted mean).
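To see the arithmetic side by side, here is a minimal sketch (in Python) of the two examples above; the $1,000,000 harm figure in the tort example is hypothetical, supplied only for illustration.

```python
# Minimal sketch: the "median" vs. the "weighted mean" of a distribution
# of possible legal outcomes.

def median_outcome(outcomes_with_probs):
    """Return the outcome at the 50th percentile of the probability-weighted distribution."""
    cumulative = 0.0
    for outcome, prob in sorted(outcomes_with_probs):
        cumulative += prob
        if cumulative >= 0.5:
            return outcome

def mean_outcome(outcomes_with_probs):
    """Return the probability-weighted average of the possible outcomes."""
    return sum(outcome * prob for outcome, prob in outcomes_with_probs)

# Tort example: 40% chance the defendant was negligent (hypothetical harm of $1,000,000).
tort = [(0, 0.60), (1_000_000, 0.40)]
print(median_outcome(tort))   # 0        -> the all-or-nothing preponderance rule
print(mean_outcome(tort))     # 400000.0 -> damages discounted by probability

# Sentencing example: three judges propose 1, 1, and 4 years.
votes = [(1, 1/3), (1, 1/3), (4, 1/3)]
print(median_outcome(votes))  # 1   -> the median judge controls
print(mean_outcome(votes))    # 2.0 -> the weighted mean
```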
Each measure of central location, the mean or the median, has some appealing attributes. The median minimizes errors (in absolute value), making factual decisions most accurate. Furthermore, the median is much less sensitive to outliers, disincentivizing the parties from making wild factual claims. The weighted mean, however, in many cases creates better incentives regarding primary behavior.
Notice that the choice between the median and the mean has an important implication for the continuity or discontinuity of legal outcomes. When courts adopt the median, legal outcomes become less sensitive to changes in the probabilities, leading to the “all-or-nothing” feature usually associated with the legal process, especially when the court considers only two possible factual states. The value of the weighted mean, however, changes with every variation in the distribution, making the legal outcome continuous over changes in probability.
In a forthcoming article in Theoretical Inquiries in Law, dedicated to discontinuity in the law, I argue that the choice between these two possibilities in civil disputes should depend on the normative goal of private law. If the law is designed to promote corrective justice, courts should always adopt the median outcome. If, however, the goal of private law is to create optimal incentives, it should sometimes adopt the weighted mean. In the article I show under what conditions the mean creates better incentives than the median.
This post is adapted from a draft paper to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 14, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (7)
Wednesday, May 13, 2020
Proof Discontinuities and Civil Settlements (by Mark Spottswood)
Posted on behalf of Mark Spottswood as part of the Legal Discontinuities Online Symposium:
Few areas of the law involve more “bumpiness,” as Adam Kolber would put it, than traditional burden of proof rules. Consider a jury that has heard enough evidence to think that a civil defendant is 49% likely to be liable. Under existing law, the jury is expected to award precisely $0 in damages to the plaintiff. Add a tiny shred of additional evidence, just strong enough to push the jury’s confidence in liability up to 51%, and we instead expect an award of full damages. The evidence in the two cases is nearly identical, but the result is radically different.
Many scholars have previously questioned the optimality of this arrangement, especially in comparison with what I have called a continuous burden of proof rule. If we think of the traditional rule as a light switch, moving from $0 to full damages once its threshold point is reached, the continuous rule is instead like a dimmer switch. As the jury’s level of belief in liability rises from 0% confidence to 100% confidence, a continuous rule incrementally escalates the level of damages they should award from $0 to the full amount of damages suffered by the plaintiff. Such rules have a number of attractive features. First, they provide better deterrence in cases where parties can foresee, when acting, how likely a jury will be to find them liable. Second, they spread expected outcome errors more evenly across parties, so that fewer innocent defendants pay the full amount of damages, and fewer deserving plaintiffs receive an award of $0. Third, they reduce the impact that various biases and other sources of unfairness may have on the judicial process. And finally, they may also reduce incentives that parties currently have to destroy evidence or intimidate witnesses into silence. But these benefits come at a cost. As David Kaye has shown, we should expect the traditional rule to produce a smaller amount of expected error at trial than the continuous rule, at least in single-defendant cases.
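As a rough illustration of the light-switch/dimmer-switch contrast, here is a short sketch; the $100,000 damages figure is hypothetical, and the linear “dimmer” shown is just one possible continuous rule.

```python
# Sketch: the traditional "light switch" burden of proof vs. a simple
# linear "dimmer switch" rule (one possible continuous rule).

DAMAGES = 100_000  # hypothetical full damages

def traditional_award(p_liability):
    """All-or-nothing: full damages once the jury's confidence exceeds 50%."""
    return DAMAGES if p_liability > 0.5 else 0

def continuous_award(p_liability):
    """Damages scale with the jury's confidence in liability."""
    return DAMAGES * p_liability

for p in (0.49, 0.51, 0.80):
    print(p, traditional_award(p), continuous_award(p))
# 0.49 ->       0 vs 49,000   (a two-point change in confidence...
# 0.51 -> 100,000 vs 51,000    ...swings the traditional award by $100,000)
# 0.80 -> 100,000 vs 80,000
```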
Of course, the preceding discussion ignores an important means by which parties themselves may smooth the law’s bumpiness: settling their cases for an agreed-upon sum. Parties settle far more cases than they take to trial. Moreover, parties typically take expected outcomes at trial into account when making settlement decisions. As a result, if we seek to optimize expected trial outcomes in isolation, without attending to how our trial rules may alter settlement behavior, we may work unintended harm, either by undermining parties’ ability to avoid high litigation costs through settlement or by incentivizing settlement amounts with higher rates of expected error. Thus, for my contribution to the Legal Discontinuities conference, I attempted to take some initial steps toward understanding how parties might change their settlement behavior if we shift from our traditional burden of proof rule to a continuous alternative.
I started with a simple economic model of the decision to settle cases and modified it to account for the ways that parties’ outcome expectations might vary depending on the choice of a burden of proof rule at trial. The main mechanism by which different rules might affect the decision to settle, in this model, is by causing the parties either to reach more similar forecasts of their trial outcomes (in which case settlement is likely) or to have more divergent expectations (which makes them more likely to take a case to trial). As the article shows, neither rule creates a greater or lesser settlement incentive across all cases. Instead, the traditional rule leads parties to have more similar outcome expectations in “easy” cases, in which an unbiased observer would expect a jury to find a probability of liability quite close to either 0% or 100%. But in cases with less certain outcomes, the continuous burden rule has the advantage, leading to more settlements in moderate or “hard” cases (i.e., those where the expected level of confidence in liability is close to 50%). Moreover, the continuous burden’s advantage in these cases is larger than the traditional rule’s advantage in easy cases. As a result, shifting to the continuous burden of proof would create an incentive to settle slightly more cases than we see under the present rule.
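One stylized way to picture that mechanism, which is my own illustration rather than the paper’s exact specification, is a divergent-expectations setup: each side forecasts the jury’s confidence somewhat optimistically, and the case settles when the gap between their expected awards is smaller than the combined costs of going to trial. The dollar figures below are hypothetical.

```python
# Stylized divergent-expectations settlement sketch (hypothetical numbers).

DAMAGES = 100_000
TRIAL_COSTS = 20_000  # combined litigation costs the parties save by settling

def expected_award(p_forecast, rule):
    """A party's expected trial award given its forecast of the jury's confidence."""
    if rule == "traditional":
        return DAMAGES if p_forecast > 0.5 else 0
    return DAMAGES * p_forecast  # linear continuous rule

def settles(p_plaintiff, p_defendant, rule):
    """Settlement is possible when the optimism gap is smaller than joint trial costs."""
    gap = expected_award(p_plaintiff, rule) - expected_award(p_defendant, rule)
    return gap < TRIAL_COSTS

# A "hard" case: forecasts hover near 50%, on opposite sides of the threshold.
print(settles(0.55, 0.45, "traditional"))  # False: the awards differ by $100,000
print(settles(0.55, 0.45, "continuous"))   # True:  the awards differ by only $10,000

# An "easy" case: both sides expect the plaintiff to lose, with some optimism gap.
print(settles(0.45, 0.10, "traditional"))  # True:  both forecast a $0 award
print(settles(0.45, 0.10, "continuous"))   # False: the optimism gap is $35,000
```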
The paper also considers the fairness of the settlements that each rule produces, measured as the expected amount of error that each settlement contains, relative to a baseline in which each party gets exactly what they deserve. For reasons that are explored in the paper, the continuous burden produces settlements with a lower expected error rate than we see using the traditional rule. Interestingly, this benefit is concentrated in cases with relatively small amounts in controversy, and in fact the traditional rule produces more accurate outcomes in cases with more than $100,000 at stake. But since small cases vastly outnumber large cases in our actual legal system, we should expect a higher overall rate of error from the traditional rule.
Thus, for those who find settlement of cases to be a generally attractive policy, the continuous burden of proof rule lets us both have our cake (in the form of a higher settlement rate) and eat it, too (in the form of more accurate settlements). There is more in the paper (including analysis of a third kind of proof burden), but this blog post is already long enough. My one concluding thought is a cautionary one. This paper is meant to be a first step toward understanding the role that continuous proof burdens can play in shaping settlement incentives. Ambitious scholars will find many ways in which the present project could be extended. For myself, I am grateful to Adam and Talia for organizing a delightful conversation around legal discontinuities, which gave me the opportunity to shed at least a little light on some of these questions.
This post is adapted from a draft paper, Proof Discontinuities and Civil Settlements, to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 13, 2020 at 12:01 PM in Symposium: Legal Discontinuities | Permalink | Comments (0)
Half the Guilt (by Talia Fisher)
Posted on behalf of Talia Fisher as part of the Legal Discontinuities Online Symposium:
Criminal law conceptualizes guilt and the finding of guilt as purely categorical phenomena. Judges or juries cannot calibrate findings of guilt to various degrees of epistemic certainty by pronouncing the defendant ‘most certainly guilty,’ ‘probably guilty,’ or ‘guilty by a preponderance of the evidence.’ Nor can decision makers qualify the verdict to reflect normative or legal ambiguities. The penal results of conviction assume similar “all or nothing” properties: punishment can be calibrated, but not to the established probability of guilt. In what follows I would like to offer a broadening of legal imagination and to unearth the hidden potential underlying a linear conceptualization of guilt and punishment, as it may unfold in the context of the criminal trial and in the realm of plea bargaining.
Probabilistic sentencing, namely calibrating sentence severity to epistemic certainty, is one prospect for incorporating linearity into criminal verdicts and punishment. The normative appeal of probabilistic sentencing is rooted in deterrence and expressive considerations. From a deterrence perspective, there is room to claim that in cases where the criminal sanction generates a social cost that is a function of its severity (all incarcerable offenses), probabilistic sentencing can be expected to facilitate a higher level of deterrence than the prevailing “Threshold Model” for any given level of social expenditure on punishment. [1] As for expressive considerations: while at first glance calibrating sentence severity to epistemic certainty would seem to undermine the expressive functions of the criminal trial by severing the connection between the severity of punishment and the force of the moral repudiation, closer scrutiny reveals that it could actually allow for refinement of the criminal trial’s expressive message. The question of criminality invokes the most complex and tangled categories dealt with in law, interweaving the descriptive and the normative. The prevailing “Threshold Model” dictates that the manifold aspects of criminality and criminal culpability be ultimately translated into the legal lexicon’s strict, one-dimensional terms of conviction or acquittal. But such an impoverished conceptualization may result in the loss of valuable information. A probabilistic regime, in contrast, would allow for a more accurate reflection of the gray areas that permeate criminal culpability. Moreover, under the prevailing “Threshold Model,” acquittal covers a vast epistemic space, and therefore cannot effectively signal the clearing of a defendant’s name. Under a probabilistic regime, on the other hand, the message of acquittal carries expressive meaning and significance, due to the narrowing of the epistemic space it encompasses.
Beyond the normative appeal of probabilistic sentencing, there is room to claim that central criminal law doctrines and practices already exhibit its underlying logic and effectively deviate from the “Threshold Model” ideal, with its “all-or-nothing” binary outcomes. Some of these doctrines, such as the residual doubt doctrine, create an explicit correlation between certainty of guilt and severity of punishment. Other legal practices, such as the “jury trial penalty” and the “recidivist sentencing premium,” forge an indirect reciprocity. Thus, the imposition of harsher sentences on convicted defendants who chose to assert their constitutionally protected procedural rights—most notably, the right to trial by jury—has become routine practice in many American courtrooms. It can be understood as an expression of the link between certainty of guilt and severity of punishment. The relative gravity of sentences in jury trials reflects the elevated certainty as to guilt in the wake of a jury verdict, whereas the relatively lenient sentences in bench trials are due, at least in part, to a lower degree of epistemic confidence in the conviction (despite the fact that both convictions may surpass the BARD threshold). The same holds true for the recidivist premium. The additional information submitted post-conviction—regarding the defendant’s prior convictions—reinforces the convicting verdict and pushes the probability of guilt, which has already been substantiated (as inferred from the mere fact of conviction), to a point that comes even closer to absolute certainty.
Moving to the plea bargaining arena: if criminal convictions were to assume linear properties, more features of the criminal trial, most notably the standard of proof, could be turned into negotiable default rules. The parties may opt for a lowering of the standard of proof in return for partial sentencing mitigation in situations where their marginal rates of substitution between (units of) sentencing and (units of) proof waiver equalize at an intermediate point between a full trial and a full plea bargain. Under those circumstances, deals for partial conversion of some units of reduction in the evidentiary demands in exchange for some units of punishment are likely to improve the situation of both prosecution and defense as compared to full conversion (plea bargains) or full non-conversion (trial according to the beyond-a-reasonable-doubt standard). From a welfare perspective, changing the criminal standard of proof into a default rule would allow the prosecution and defendant to exchange sentencing mitigation for evidentiary waivers not only en bloc, but also in a more compartmentalized manner. Such an expansion of the spectrum of choice as to the exercise of the right to have one’s guilt substantiated beyond a reasonable doubt can be said to facilitate the defendant’s welfare and choice-making capacities. It could similarly enrich the set of options made available to the prosecution in its pursuit of the social objectives underlying the criminal justice system. There is room to claim, in other words, that the normative considerations which support the institution of plea bargaining (and the contractual ordering of the criminal arena, more generally) are further promoted when convictions assume linear properties.
[1] Henrik Lando, The Size of the Sanction Should Depend on the Weight of the Evidence, 1 Rev. L. & Econ. 277 (2005).
This post is adapted from a draft paper to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 13, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (2)
Tuesday, May 12, 2020
Probabilistic Disclosures for Corporate and other Law (by Saul Levmore)
Posted on behalf of Saul Levmore as part of the Legal Discontinuities Online Symposium:
Corporate law is a striking area where probabilistic information ought to be made widely available. This information often has elements of continuity and discontinuity, but it is at present withheld because disclosure is accompanied by the risk of liability when it is later discovered to have provided an inaccurate particle of information. It is not simply that corporate and securities law make use of continuous as well as discontinuous rules. For example, continuously defined controlling shareholders are subject to a strict fiduciary obligation, while discontinuously defined acquirers must make the government and the world aware of their holdings of more than 5% (depending on the jurisdiction, but almost all countries have such categories) of the stock of a corporation. Many areas of law are peppered with such contrasts. But corporate law, like the law governing products liability, medical malpractice, and other areas where disclosure is law’s centerpiece, is also home to rules that encourage vague information of limited usefulness. Ironically, disclosure rules discourage better disclosure. Much as a surgeon is encouraged to disclose that there is “some chance” that an operation will lead to death, corporations are encouraged to say things like “a lawsuit that has been brought against us presents some risk that our profits will decline.” In both cases, the better-informed insider could more usefully offer a series of probabilities, but current law discourages such disclosures.
When law requires disclosures, the product is often categorical. Investors might prefer probabilistic information, but it is often easier to make disclosures in binary form, though this is of little use to the audience. The key point here is that more useful disclosure opens the disclosing party to claims of misrepresentation, because it is easier to err when revealing a great deal of probabilistic information than when offering vague categorical information.
In 2019, Senator Elizabeth Warren, while seeking the Democratic nomination for the 2020 U.S. presidential election, suggested that corporations should be required to disclose the fact that climate change might have an adverse effect on their projected earnings. Admittedly, the idea was not to inform shareholders about their investments, but to raise interest in climate change and to encourage greater political support for laws aimed at this problem. If shareholders thought that unmitigated climate change would affect their investments, they might be more inclined to pay higher taxes or sacrifice short-term profits in order to enjoy a more secure future. Warren’s idea was consistent with many disclosure requirements, as the suggested disclosures provided less information than they might have. A corporation is required to disclose knowledge of factors that might have a significant impact on the value of the firm. For example, firms regularly reveal the presence of lawsuits, and usually report (accurately, let us assume) that management does not expect the litigation described in their annual statements to have a significant impact on projected profits. The disclosure is not unlike those made to avoid products liability or to protect against claims brought against health-care providers for failing to disclose risks. There are risks, but disclosures often do not contain much information; firms issue vague warnings when they could disclose more useful information that they can easily obtain. For instance, a firm has probably calculated the risk attached to each lawsuit it faces in order to decide how much to spend on defense or whether to settle a case on some terms. An optimist might say that present disclosures prompt motivated recipients to investigate further, but usually the point of disclosure is, or ought to be, to lower the overall cost of information acquisition by placing the burden on the better-informed party, especially if this is likely to avoid duplicative information gathering by other, dispersed parties. The reality is that most disclosures are sensibly made as vague as possible in order to comply with the law while avoiding ex post judgments that they were misleading or knowingly incomplete. “This product may contain peanuts” is much less useful than the information actually available to the producer of the foodstuff, and the same sort of thing is true in corporate law.
Accounting practices, in particular, often provide less information than is actually available. An accountant might say that corporate disclosures were verified in compliance with generally accepted accounting standards, but this is presumably inferior to the accountant’s disclosure that “We investigated the corporation’s report of income and based on statistical sampling, we think it is 30% likely that the disclosure is accurate, 50% likely that income is somewhat greater than reported, but has been under-reported perhaps to avoid future lawsuits, and 20% likely to have been overstated (10% by an amount greater than $1 million).” If the corporation is later accused of misrepresentation, it will normally be safe if it adhered to “generally accepted accounting principles.” The corporation needs to fear litigation only if it intentionally misrepresented or held back information from the market or from the accountants. The reliance on accounting conventions is striking. In some cases, like the reporting of interest expenses, the accounting information is precise and readily compared to that produced by other companies and their accountants. But other information is vaguely specific. It is not surprising that a party considering the purchase of a company will investigate assets and past performance, and will rarely rely on accountants’ previous reports. Better information is plainly available to the prospective acquirer, but law seems satisfied or more comfortable with the (unnatural but available) demarcations provided by accounting conventions.
One solution would be for law to promise that, so long as the information disclosed is at least as informative as that found in minimally compliant documents or announcements, disclosing parties will find themselves in a safe harbor, protected from future litigation and from discoveries that some pieces of information were inaccurate. The market might then encourage the provision of useful information. Similarly, I would prefer that my doctor tell me that an operation has a .04% chance of killing me and a 1% chance of requiring a blood transfusion, rather than being told, “This procedure can lead to death or a need for a blood transfusion.” Indeed, I might like to see a curve representing the likelihood of various outcomes. There is, to be sure, the danger that the information provided will be inaccurate, through error or misbehavior by a subordinate, but the idea is for the disclosure to be protected so long as it provides more information than that offered in the familiar vaguely specific form. In the corporate context, a corporation would be within this safe harbor if, for instance, it gave probabilistic information about projected sales and costs, so long as this information was superior to “We do not expect lawsuits against us to significantly affect our future, and the numbers offered here are reported according to accepted financial standards.” Accounting firms could be in the business of certifying that the probabilistic information revealed, even with inevitable mistakes, was at least as useful as the vague information that would comply with the law.
In short, there are areas where more information is available, and where investors and consumers could be given more useful information. They will receive this information if the provider is protected by a rule that recognizes that although more information is likely to contain more errors, it is still more useful than the vaguely specific statements that currently comply with law. Corporate (and securities) law is a good place to start experimenting with this idea for more useful disclosures.
This post is adapted from a draft paper to be published in a forthcoming symposium issue in Theoretical Inquiries in Law. The papers were part of the Legal Discontinuities conference held at Tel Aviv University Law School’s Cegla Center in December 2019.
Posted by Adam Kolber on May 12, 2020 at 08:00 AM in Symposium: Legal Discontinuities | Permalink | Comments (4)
Monday, May 11, 2020
Inputs and Outputs vs. Rules and Standards
I don't love the name “Legal Discontinuities.” Discontinuities are perspectival. For example, in countries with progressive income taxes, as your income rises by just a dollar above some often arbitrary cutoff point, your marginal tax rate can go from, say, 20% to 30%. Looked at as a relationship between income and marginal tax rate, that seems discontinuous. Looked at as a relationship between income and total taxes owed, however, going a dollar above the threshold means you owe only slightly more than you did just below it.
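For readers who want to see the arithmetic, here is a minimal sketch using the 20% and 30% rates from the example and an assumed bracket boundary of $50,000; nothing turns on the particular numbers.

# Hypothetical bracket: 20% up to $50,000, 30% on income above that.
THRESHOLD = 50_000
LOW_RATE, HIGH_RATE = 0.20, 0.30

def marginal_rate(income):
    return LOW_RATE if income <= THRESHOLD else HIGH_RATE

def total_tax(income):
    # Only the income above the threshold is taxed at the higher rate.
    if income <= THRESHOLD:
        return income * LOW_RATE
    return THRESHOLD * LOW_RATE + (income - THRESHOLD) * HIGH_RATE

for income in (THRESHOLD, THRESHOLD + 1):
    print(income, marginal_rate(income), round(total_tax(income), 2))
# 50000 0.2 10000.0
# 50001 0.3 10000.3

The marginal rate jumps from 20% to 30% at the threshold, but the total owed rises by only thirty cents.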
The lesson I take is that legal input-output relationships are the central issue: how do we map the things law cares about (inputs such as reasonableness, culpability, and harm caused) onto the outcomes law cares about (such as compensation owed, fine amounts, and years in prison)? As I see it, we begin with a theory of the relationship that particular inputs and outputs ought to have and compare that theoretical relationship to the one the law actually gives them. In the example above, what matters is the relationship between income and taxes owed, not between income and marginal tax rate. Subject to some important caveats, the input-output relationships we see in the law should match the input-output relationships our best theories recommend.
When a gradual change to an input causes a gradual change to an output, I call that a smooth relationship. By contrast, when a gradual change to an input sometimes has no effect on an output and sometimes has dramatic effects, I call that a bumpy relationship. There are, however, infinitely many ways to map inputs onto outputs, and these are just shorthand names for two common types of input-output mappings. (In his conference paper, for example, Mark Spottswood discusses a logistic relationship, which is one kind of smooth relationship.)
We must speak of inputs and outputs because the vocabulary of “legal discontinuities” is inadequate. People easily confuse the continuity of inputs and outputs with the relationship between them. For example, in tort law, when you just cross the threshold of being unreasonably incautious, you now owe full compensation for the harm you caused. That’s a bumpy relationship because a gradual change to your level of caution has a dramatic effect on the amount you owe. This is true even though compensation is paid in the form of money, which would naturally be described as a continuous variable. “Money owed” seems scalar even though it is used here as part of a bumpy relationship. That is why I think it’s fine to speak of inputs and outputs as scalar or binary or continuous, but those terms don’t do justice to what we really care about, namely the underlying relationships between inputs and outputs.
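Here is a minimal sketch with invented numbers (a hypothetical reasonableness threshold and harm amount). The proportional “smoothed” version at the end is just one illustrative way such a relationship could be made smooth, not a proposal drawn from any of the conference papers.

# Invented numbers: a reasonableness threshold for the level of care and a fixed harm.
REASONABLE_CARE = 0.50
HARM = 100_000

def bumpy_compensation(care_level, harm=HARM):
    # Just below the threshold you owe everything; at or above it you owe nothing.
    return harm if care_level < REASONABLE_CARE else 0

def smoothed_compensation(care_level, harm=HARM):
    # One illustrative smooth alternative: liability scales with the shortfall in care.
    shortfall = max(0.0, REASONABLE_CARE - care_level) / REASONABLE_CARE
    return harm * shortfall

print(bumpy_compensation(0.51), bumpy_compensation(0.49))  # 0 100000
print(round(smoothed_compensation(0.51), 2),
      round(smoothed_compensation(0.49), 2))               # 0.0 2000.0

A tiny change in caution swings the bumpy version from zero to $100,000, while the smoothed version barely moves, even though both pay out in the same continuous currency.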
The smooth-bumpy distinction is sometimes confused with the rule-standard distinction, though they are conceptually quite different. The rule-standard distinction archetypically applies to the triggering circumstances of a particular law (or regulation or the like). If the triggering circumstances are well-defined, easy to apply, require little discretion, and so on, then we deem the law to be “rule-like.” For example, a law prohibiting driving above 65 miles per hour is very rule-like because it is clearly defined, easy to apply, and requires little discretion. If, however, the triggering circumstances are difficult to define in advance, require judgment to apply, give the decisionmaker substantial discretion, and so on, then we deem the law “standard-like.” For example, a law prohibiting driving at an “unsafe” speed is very standard-like.
To see the difference between the rule-standard distinction (which applies to triggering circumstances) and the smooth-bumpy distinction (which applies to input-output relationships), consider some ways to set up a dependent child tax credit. We could make the circumstances triggering the credit standard-like: you receive the credit when you have a “big” family. Or, we could make the triggering circumstances rule-like: you receive the credit when you have “four or more dependent children.” The question of how to trigger a tax credit can easily be analyzed as a rule-standard debate.
Either way, however, there is a separate question about how inputs into our tax credit analysis relate to outputs. The result could have a somewhat smooth relationship to the input: if you’re deemed to have a “big” family, you receive a $1000 tax credit for each dependent child you have. Or the result could be more bumpy: “big” families receive a $4000 tax credit no matter how many members they have.
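A minimal sketch of the two designs may help. The $1000-per-child and $4000-flat figures come from the example above; the cutoff of four dependent children borrows the rule-like trigger from the previous paragraph and is used here only for illustration.

BIG_FAMILY = 4  # assumed cutoff for a qualifying family

def per_child_credit(children):
    # Somewhat smooth: once a family qualifies, the credit tracks family size.
    return 1000 * children if children >= BIG_FAMILY else 0

def flat_credit(children):
    # Bumpy: qualifying families get the same amount regardless of size.
    return 4000 if children >= BIG_FAMILY else 0

for n in (3, 4, 5, 8):
    print(n, per_child_credit(n), flat_credit(n))
# 3 0 0
# 4 4000 4000
# 5 5000 4000
# 8 8000 4000

Notice that even the per-child version jumps at the qualifying threshold, which is why it is only somewhat smooth.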
Now, one might insist, the rule-standard distinction could also be applied to the consequences of crossing a legal threshold. Whether you get $1000 per child or $4000 total, both solutions seem rule-like because they are easy to apply and don’t require discretion. We could alternatively have standard-like consequences that provide for either a “fair” amount per child or a “fair” total credit. Those used to focusing on rules and standards might say that one triggering circumstance (which can be rule- or standard-like) is whether a tax credit applies at all, and another triggering circumstance (which can be rule- or standard-like) concerns the magnitude of the credit. Fair enough.
The key point, though, is that even though the rule-standard distinction can be applied to both triggering circumstances for applying a law and triggering circumstances for selecting a result, nothing about the rule-standard distinction captures the relationship between legal inputs and outputs. So we could have a child tax credit as follows: “big” families receive $4000 total in credit no matter the precise number of children in the family. Such an approach would be standard-like in deciding what constitutes a large enough family and rule-like in deciding the amount of the credit. More importantly, it would be an odd law from an input-output perspective. Why would we have a threshold determination as to the size of the family and not make the credit depend on family size? That question is about the relationship between an input and an output and goes beyond the focus of the rule-standard distinction. The rule-standard and smooth-bumpy distinctions simply capture different issues and considerations.
This post is adapted from my opening remarks at the Legal Discontinuities conference held at Tel Aviv University's Cegla Center for Interdisciplinary Research of the Law from Dec. 29-30, 2019. Conference contributions will appear in an open access symposium issue of Theoretical Inquiries in Law.
Posted by Adam Kolber on May 11, 2020 at 11:01 AM in Adam Kolber, Symposium: Legal Discontinuities | Permalink | Comments (4)
Welcome to the "Legal Discontinuities" Online Symposium!
On December 29-30, 2019, the "Legal Discontinuities" conference was held at Tel Aviv University's Cegla Center for Interdisciplinary Research of the Law (here's the link to the program). We welcomed papers by Avlana Eisenberg, Lee Anne Fennell, Talia Fisher, Eric Kades, Leo Katz, Saul Levmore, Julie Roin, Re'em Segev, Mark Spottswood, and me. I was pleased to co-host the conference with Talia Fisher. Over the next two weeks, we'll share blog posts from most of these contributors as well as some of the commentators, such as Ronen Avraham and Omer Pelled.
What are legal discontinuities? Well, that's part of what the conference is about. They involve all sorts of ways in which small changes to legal inputs lead to dramatic changes to legal outputs. For example, Leo Katz has asked, "Why is the law so all-or-nothing?" and defended the view that the law is and must be so. By contrast, I have focused on the distinction between smooth and bumpy laws and argued that there are probably good opportunities to smooth the law and make it less all-or-nothing. It is truly a cross-disciplinary legal topic, as illustrated perhaps most vividly by Lee Anne Fennell's articles and recent book, which address property law, environmental law, business law, and pretty much everything else. For my six-page opening remarks to the conference, click here.
When the topic of legal discontinuities has appeared on Prawfs in the past, Orin Kerr and others have asked how some of these issues differ from rule-standard issues. I tried to answer that in my opening remarks, and I'll post those thoughts later today. Then, we'll get started in earnest tomorrow morning with a blog post by Saul Levmore, former dean of the University of Chicago Law School, on probabilistic disclosures. All of the presenters' papers will be published in a forthcoming issue of Theoretical Inquiries in Law. The journal has kindly allowed us to present this online symposium (and eventually publish the final papers) under generous open access conditions. Many posts will link to their current iterations on SSRN or elsewhere. We look forward to participation from conference authors, their commentators, and Prawfsblawg viewers like you!
Posted by Adam Kolber on May 11, 2020 at 08:12 AM in Adam Kolber, Symposium: Legal Discontinuities | Permalink | Comments (5)
Sunday, May 10, 2020
Tomorrow Morning: "Legal Discontinuities" Online Symposium
A quick heads up: we'll begin the Legal Discontinuities online symposium tomorrow. Stay tuned for more details in the morning!
Posted by Adam Kolber on May 10, 2020 at 01:03 PM in Adam Kolber, Symposium: Legal Discontinuities | Permalink | Comments (0)
Wednesday, April 29, 2020
May Events
As we enter May from the longest April in memory, I am pleased to welcome returning guest Adam Kolber (Brooklyn). In addition to his regular posts, Adam will run an online symposium on "Legal Discontinuities," based on a conference he organized in Tel Aviv in December; the symposium will begin around May 11. Participants include Saul Levmore, Lee Fennell, Ronen Avraham, Re'em Segev, Talia Fisher, Eric Kades, Julie Roin, Omer Pelled, and perhaps others. Details as the start date draws near.
In the meantime, please welcome Adam back to Prawfs.
Posted by Howard Wasserman on April 29, 2020 at 02:32 PM in Blogging, Symposium: Legal Discontinuities | Permalink | Comments (2)