Wednesday, July 26, 2017
Why the "Plain Statement Rule" for Statutory Interpretation is Normatively Justifiable But Practically Impossible
As Will Baude and Ryan Doerfler note in their elegant and provocative essay, the “plain statement rule” for statutory interpretation is a puzzling idea. The puzzle arises out of the PSR’s giving lexical but not absolute priority to text. Under the PSR, extra-textual sources are excluded from consideration some of the time but not all of the time. So long as the text is “unambiguous” (or “plain”), the PSR excludes any consideration of extra-textual materials (e.g., statutory title, preamble, purpose, or legislative history). If the text is “ambiguous” (less than “plain”), however, then the courts can bring in all of that excluded stuff to resolve the ambiguous term’s meaning.
Baude and Doerfler note that such a rule of lexical priority contradicts common sense. If the extra-textual stuff is helpful, then we should always consider it. If it is pernicious (e.g., misleading, confusing, excessively costly to collect and construe, etc.), then we should always exclude it. They canvass some possible rationales for this partial priority for text, come up with a few candidates, but note that all of these possible justifications require a lot of normative and empirical assumptions that few proponents of the PSR bother to defend. Their bottom line: fish or cut bait. Either exclude the allegedly pernicious extra-textual stuff even when text is ambiguous (deciding cases, I suppose, by arbitrary tie-breakers such as a coin toss), or, if one is a purposivist, bring in the extra-textual stuff without limit, balancing it against text’s probative value even when the text is “plain.”
As I say, their essay is elegant and provocative, and it does not detract from my admiration that I think Baude and Doerfler may have left out a possible justification for the PSR – viz., the idea that doctrines of statutory interpretation have conflicting procedural and substantive goals. As I suggest after the jump, Baude’s and Doerfler’s arguments do not succeed against this process-based justification for the PSR. Nevertheless, I think that their bottom line is likely correct, because the psychological acrobatics required from judges by the PSR likely exceed any process-based gains that the rule of lexical textual priority provides.
1. The Process-Based Argument for the PSR
Suppose that courts want to encourage legislators to express their purposes in semantically plain statutory text for process-based reasons. Judges might believe that, because such text undergoes the deliberation and transparent voting allegedly ensured by the U.S. Constitution’s procedures in Article I, section 7, legislators who express their purposes through such text ought to be rewarded with a “PSR premium”: Any legislative goals expressed in semantically “plain” text automatically trump rival goals expressed extra-textually (e.g., in legislative history, statutory title, statutory purpose, and the like). Legislators who want their goals to prevail in the courts, therefore, have an incentive to use such semantically plain text over rival and more oblique methods of expressing legislative policies (through, for instance, committee reports, floor speeches, statutory titles, findings in a preamble, and so forth).
The value of procedural formalities in law-making, however, is not infinite. Sometimes it is just too costly for Congress to reach consensus about the exact wording of statutory commands. In such cases, the PSR gives legislators the option of side-stepping formal procedures by using ambiguous language, the interpretation of which requires extra-textual sources that have never been properly ratified. Legislators who take this option abandon procedural formality and give up the PSR premium, but this sacrifice of procedural formality does not mean that there are no other values that the courts might reasonably seek to advance. Those other values might include capturing the likely intentions of the legislature. With ambiguous text, therefore, courts reasonably bring in rival values that would otherwise be excluded by the judicial award of the PSR premium – i.e., automatic victory to plain text. Lexical but not absolute priority for plain text is thereby justified, QED.
2. Why Baude's and Doerfler's Arguments Seem Ineffective (to Me) Against the Process-Based Justification for the PSR
Baude’s and Doerfler’s essay considers a justification for the PSR analogous to the process-substance argument above. They note that the distinction between plain and ambiguous text could mark a trade-off between accuracy in capturing legislative intent and predictability in the law’s application. Plain text might not accurately capture legislative intent, but such text’s interpretation is so easy for regulated persons to predict that courts should enforce the plain letter over the more accurate spirit. As text becomes murkier, the value of providing clear guidance to regulated parties is no longer advanced by sticking with text alone, so courts reasonably focus on capturing the rival value of accurate legislative intent through extra-textual sources.
Baude and Doerfler are, however, skeptical about this “trade-off-between-rival-values” justification for the PSR, because the threshold between “plain” and ambiguous text is not itself so very plain. Ambiguity is the most ambiguous of concepts: The lexical priority of text can be turned off and on like a spigot by judges who simply declare by fiat that text is, or is not, “plain.” Thus, the PSR provides very little benefit for, and is therefore hard to justify by, the value of legal predictability to the consumers of law.
Conceding that Baude and Doerfler have knocked over a scarier-than-a-scarecrow argument, I nevertheless think that their rebuttal of the predictability-accuracy justification does not lay a glove on the process-substance justification. Even if the line between plain and ambiguous text is unpredictable, that line can provide an incentive for legislators to use plain(er) text. The value of the PSR premium will be discounted by the probability that courts will declare the text to be insufficiently “plain,” but a discounted prize can still be a big incentive for costly behavior (as any lottery ticket buyer can tell you).
3. Why Courts Do Not and Should Not Follow a PSR
So is the PSR defensible? I doubt it. The problem is that judges cannot really apply the PSR, because there is no effective system in place for insulating a textually minded judge from knowledge about a statute’s likely policy purposes. In making that threshold decision about whether text is plain enough to award that PSR premium, therefore, judges inevitably think about whether the text makes good policy sense. As Baude and Doerfler incisively note, “a court’s perception of what Congress is trying to say depends in large part on that court’s understanding of what Congress is trying to do.” Because lawyers make their arguments in the alternative, information about extra-textual stuff will always be available to a judge pretending to apply the PSR. The judge can adjust her finding of clarity or ambiguity based on her assessment of that extra-textual stuff without ever departing from the formal lexical priority of text. In their actions if not in their official justifications, therefore, courts evaluate textual clarity on a sliding scale, balancing it against the bad policy consequences that the allegedly plain text would inflict. (As Abbe Gluck notes, King v. Burwell has become the classic illustration of this sliding scale in finding ambiguity.)
We have, in short, a Potemkin PSR. As Farnsworth, Guzior, and Malani note, the ambiguity of ambiguity ensures that this PSR is likely honored in judicial boilerplate but not in actual case outcomes. That does not mean that courts do not reward lawmakers for putting their preferred policies into statutory language. It means only that this reward is discounted by the court’s perception that those clearly expressed policies are not plausibly supported by extra-textual common sense.
In other words, we never really have had any PSR: We have always had an ill-defined presumption in favor of policies “clearly” baked into statutory text, a presumption that is always subject to challenge with extra-textual sources used surreptitiously to undermine the text's clarity. But do not mourn the PSR: By providing such a well-written and creative dissection of this oft-cited and rarely followed "rule," Baude and Doerfler have helped us see why its non-existence is likely a good thing.
Posted by Rick Hills on July 26, 2017 at 05:41 PM | Permalink
Unfortunately, I don't have time (or free hands) to give these generous responses the time they deserve right now, but I am very happy that our article has sparked fascinating responses like these. Provoking real reflection about the plain meaning rule was our goal, more than anything else.
That said, two thoughts:
1. I think Rick's process-based theory relies on an additional assumption, which is that legislators can at least tell when they are getting closer to or further from qualifying for what he sees as the plain-meaning "prize." That might be right, but I'm not positive -- for instance, sometimes courts find ambiguities precisely because of Congress's attempts to say things an extra time and make doubly sure. If members of Congress don't know whether a given amendment will move things closer to or further from clarity, then the "prize" could be discounted all the way to zero.
Now maybe Rick's assumption is right in at least some predictable set of cases; I'm not sure, and I suspect we would need more systematic research to find out. And for the reasons Rick comes to, it may not matter in the end.
2. As for Asher's suggestion, if the result of our piece is to prove that we are all still intentionalists (or perhaps "new purposivists"), I think that would be interesting enough. But I rather doubt that the kind of intentionalism he describes would lead to the kind of thresholds and epistemic cut-offs that *purport* to mark the plain meaning rule.
It's important to remember that "ambiguous" texts include not just those that are genuinely in equipoise, but those where one side has the better of it, but only by a modest margin. The question for plain-meaning-rule defenders is why there are some margins where extra-textual evidence is worth looking at, and others where it is not.
But if they have an answer to that question, perhaps they can help tell us where that margin is -- a point on which current judges of the D.C. Circuit apparently are widely divided.
Posted by: William Baude | Aug 6, 2017 12:36:01 PM
I think I disagree with most of this post. In the first place, I find Baude and Doerfler's criticisms of the plain-statement rule thoroughly unconvincing and see no common-sense problem with lexical priority of textual meaning whatsoever. The argument assumes that the reason extra-textual evidence of meaning is given no weight when text is clear is that there's something, as you say, "pernicious" about it, but that needn't be the case for lexical priority to be right. Actually, unless every judge who considers that evidence when a statute is ambiguous but not when it's plain (which is pretty much every judge in America, less a handful of ultra-textualist judges) has been hopelessly confused for the past few decades or maybe the last century (the plain-statement rule predates the advent of modern textualism by quite a ways), what we should be looking for is a theory on which extra-textual evidence isn't pernicious at all, but is merely less reliable than textual evidence of meaning, which is conclusive when sufficiently clear.
Now, there's nothing logically incoherent in theory about that kind of priority between two different types of evidence; eyewitness testimony, for example, isn't so unreliable that it should be categorically discarded, but if you have physical or video evidence of a certain quality that contradicts it, it would be irrational to side with the eyewitness testimony. District-court opinions are perfectly probative though inconclusive evidence of the law, so to speak, until they're contradicted by a Supreme Court opinion, in which case they're not. The question we should be asking is whether there's at least a plausible theory of legal meaning, or linguistic meaning (which contributes to legal meaning but isn't necessarily the same thing), on which textual evidence and extra-textual evidence of statutory meaning have this kind of relationship. I'll briefly suggest why I think there is a perfectly plausible theory of linguistic meaning, and perhaps legal meaning, of this kind.
It strikes me that by and large most judges continue to be intentionalists; otherwise the vast majority of judges wouldn't continue to consult legislative history, or at least they'd do it for very different and more limited purposes (as weak evidence of what words mean, on a par with other indicia of usage). It also strikes me that outside of the law and literature we're all intentionalists about meaning; to the extent your post or my comment is ambiguous, people who want to know what they mean will puzzle over what we meant by them, or perhaps ask us. They won't try to work out ambiguities by figuring out what's the most conventional meaning of ambiguous language in these writings. That being said, people who are intentionalists about meaning have had to concede that purely intentionalist accounts of meaning must bow to conventions to the extent we write some words that are intended to convey one thing but don't convey it on any conventional understanding of those words.** If you privately intended this post to mean that the plain-statement rule is great and should be retained, no one would say that that's what the post meant. If the post is ambiguous between two possible meanings that the words you used can conventionally bear, however, many people would say your post means whichever one you intended. I'm extremely reluctant to reject this theory of meaning as incoherent because it's how most people think about meaning, and I believe it's what explains the plain-statement rule, which I don't think is primarily a creature of textualist theory but rather is a very old rule that's coexisted for centuries with a frankly intentionalist approach to statutory interpretation.
Now, as to the objection that the rule can't stop courts from thinking about extra-textual stuff, that's true, and it's also true that courts may adjust their clarity thresholds if the statute's apparently clear meaning is a policy disaster. I do think, though, that the rule meaningfully deters courts from relying on, though not looking at, legislative-history materials when a statute is clear, meaningfully deters courts from relying on policy misgivings they have or litigants raise when a statute's clear and the policy concerns raised don't rise to the level of King v. Burwell-like disaster, and does make outcomes somewhat more predictable. The fact that clarity/ambiguity determinations can be tough doesn't mean that a great many aren't easy, and even if they were very frequently tough, it seems to me that freeing courts to balance clear text against other things would be a whole lot less predictable than handicapping the threshold question of whether a statute's clear or not. Take, for example, Easterbrook's famous opinion in In re Sinclair, where Congress enacted a new chapter under which family farmers could file, said the law "shall not apply . . . to [bankruptcy] cases commenced . . . before the effective date of this Act," and a pre-effective-date chapter 11 filer tried to convert his case into the new, more generous chapter because the conference report equally clearly said that cases pending at the time of enactment could be converted. Now, I don't find the clarity/ambiguity determination in that case hard at all and I can't imagine anyone who would, but I couldn't begin to predict the outcome of the case were it permissible to consider the conference report in spite of the statute's clarity.
** See pages 163-66 of this textbook:
Posted by: Asher Steinberg | Aug 1, 2017 8:53:03 PM