
Wednesday, April 25, 2018

What is Moral Risk?

Suppose you're rather sure that eating meat is perfectly fine. Indeed, you're 80% confident that non-human animals have no right to life and that no great harm occurs when they are slaughtered for food. So you can go on eating meat, right? Not so fast. It is only rational to also consider what follows from your 20% confidence that you're wrong. Plausibly, you assess the moral harm of being wrong as quite severe. Let's assume you believe that, if you're wrong, slaughtering animals for food is a great evil, perhaps almost as serious as slaughtering humans for the same reason.

So here's how things look to our hypothetical person: He's 80% confident that eating meat provides some pleasure and nutrition and is not a significant moral harm. But he's also 20% confident that eating meat is a great evil, not far from being as serious as murder-cannibalism. Now it seems irrational for him to eat meat. If I were 80% confident that opening a box would yield $10,000 for me but 20% confident it would explode and kill me, I'd better not open the box. It's not worth the risk. Why should we analyze these problems any differently when they involve prudential considerations (money vs. explosions) than when they concern moral considerations (pleasure and nutrition from eating vs. harms akin to murder and cannibalism)? So, even if our hypothetical person is rather confident that eating meat is perfectly fine, it might be irrational for him to eat meat anyhow, given his levels of confidence and his weighting of the relative harms. That's what makes moral risk important. In our deliberations, it seems that we should consider not only what we believe is moral but also what risks we are taking about what is moral.
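To make the structure of that comparison explicit, here is a minimal expected-value sketch of the box example. The $10,000 payoff comes from the example; the large negative number standing in for being killed is stipulated purely for illustration.

```python
# A minimal sketch of the expected-value comparison in the box example.
# The $10,000 payoff is from the example above; the very large negative
# value standing in for being killed is stipulated purely for illustration.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

open_box = expected_value([(0.8, 10_000), (0.2, -1_000_000)])  # -192,000.0
leave_box_closed = expected_value([(1.0, 0)])                  # 0.0

# Opening the box has far lower expected value, so it's not worth the risk.
print(open_box, leave_box_closed)
```

The same arithmetic applies when the stakes are moral rather than prudential; only the interpretation of the values changes.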

What does this have to do with the law? In a just-published article, I argue that moral risk should lead us to be very skeptical of retributivist justifications of punishment that claim we should punish people because they deserve it for past wrongdoing. Most retributivists find it far worse from a moral perspective to punish an innocent person than to fail to punish someone who is guilty. This asymmetric weighting of moral risks leads them to require a rather demanding standard of proof of factual guilt (the beyond-a-reasonable-doubt standard). But as I'll discuss in an upcoming post, I don't think we can plausibly have sufficient confidence in retributivism itself to meet the rather high level of confidence that retributivists seem to demand in order to punish. In the meantime, here's Dan Moller on abortion and moral risk and here's Alex Guerrero on moral risk and eating animals.

Posted by Adam Kolber on April 25, 2018 at 04:07 PM | Permalink

Comments

@Patrick O'Donnell.

Well stated. I seconded the relevance of Pascal's Wager to this topic only in a methodological sense, not in an epistemological or metaphysical sense. So when I referred to an "ultra-rational" Pascal, I was talking about his argumentative approach, not his own belief.

Posted by: James | Apr 27, 2018 12:03:07 PM

I think it's better to focus on the philosophical arguments themselves rather than try to assess their quality based on speculation about what motivates those making the arguments. I'm with you, though, on the regrettable closed-access nature of much philosophical writing. Here, at least, is a near-final version of Moller's piece on SSRN:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1112275

Posted by: Adam Kolber | Apr 26, 2018 2:59:38 PM

Thank you again. I had noted the two sources you cited above, neither of which seems downloadable from those links as far as I can tell. From the abstracts it appears that both are advocating for this sort of reasoning, though apparently admitting the novelty of the concept - and most importantly, both seem probably not-that-great as philosophy, because they seem so likely motivated by the authors' desire to find a new clever argument in favor of a particular desired outcome (no abortion, no meat). And further googling has revealed that this "moral risk" thingy is a relatively novel concept, and hotly debated even among professional philosophers. I will keep doing what I am doing, then.

Posted by: Sam | Apr 26, 2018 2:45:42 PM

Thanks, Sam. Here's one way to put it: Throughout the rest of our lives, when making important decisions, it seems natural to consider both the probability that we are wrong and the seriousness of the consequences of being wrong. Why would we not do the same about matters of morality?

What makes the topic of moral risk interesting is exactly what you noticed: people generally do not go around thinking about morality this way, or so it seems. The question is: why not, and what might follow if they did? (For more on moral risk, you might consider the two sources cited above, as well as the many more cited in the article itself.)

Posted by: Adam Kolber | Apr 26, 2018 2:35:45 PM

*not* incorporating

Posted by: Sam | Apr 26, 2018 2:25:29 PM

What I am saying is that the word "moral" does make a difference. Of course we take "what will happen" risks into account, with seatbelts etc. That is how all rational people think, as you say.

But you are suggesting that we are morally bound to do a different type of thinking, which (I think) practically nobody other than you (maybe) does: not "what are the probabilities of what will happen" but (e.g.) "how shall I take account of the possibility that I am wrong in my belief about whether Meat is Murder?"

I thought that you must have thought it through enough to show me - a guy who actually never philosophized my way through this concept - that humans DO think this way naturally, or at least an actual argument why we should. Maybe not, though. Or maybe there is an existing literature that you can point me to, that lays out all the groundwork for this concept and shows that I am bad for nothing incorporating it into my moral life.

Posted by: Sam | Apr 26, 2018 2:24:38 PM

Sam: I don't make any claim about how people typically think about moral problems, but I do make claims about how people rationally ought to. We certainly do think about probabilities in our daily lives; otherwise people wouldn't wear seatbelts, take preventive medications, and so on.

If a person aims to be moral, I would think they would want to consider the possibility that they are mistaken and the consequences that would follow from being mistaken. This is how a rational person typically approaches important matters. Maybe reconsider the post as if it spoke of "risk" generically instead of "moral risk" and see if the word "moral" makes an essential difference.

Posted by: Adam Kolber | Apr 26, 2018 2:09:13 PM

The concept of "moral risk" as you are using it here seems less well-grounded than you seem to want us to assume it is.

I'm thinking specifically of meat-eating and abortion, for example. Does anyone think this way, taking account of "moral risk" as you call it? Should they? I think that the answer is that people generally don't think this way and - less sure about this - they generally shouldn't, at least where they have (as you posit) anywhere close to 80 percent subjective certainty about their answer to a given "moral" question.

Can you give any example of normal human thinking that would show me that humans actually do use this concept of "moral risk" as you use it, or that they should?

This seems entirely different, to me, from questions of uncertainty about *outcomes*.

Posted by: Sam | Apr 26, 2018 1:58:35 PM

That's very helpful, Patrick. Thanks!

Posted by: Adam Kolber | Apr 26, 2018 8:26:23 AM

At this point, it’s perhaps rather beside the point, but a few observations on Pascal’s “wager.” He is not offering an argument for the existence of God (hence ‘the pragmatic argument’ characterization), the question of the existence of God being outside the province of rationally accessible knowledge, and thus Pascal himself (who was far from being ‘ultra-rational,’ at least in matters of religion) never envisaged it as a possible or probable “valid foundation for religious belief” (in particular, for motivating belief in the existence of God). As John Cottingham has explained, Pascal’s appeal is to someone for whom spiritual faith would or should be a “destination,” albeit with ignorance of the road that takes one there. The road, as it were, is the path of religious praxis (acting as if one already has the requisite faith, and in so doing it may eventually take hold; recall Pascal’s famous observation that ‘the heart has its reasons, which reason does not know at all’). In this case, considerations of human happiness make their appearance as merely one possible motive for embarking on that journey (so the meaning and purpose of the wager are rather contextualized and constrained and thus it is not the grand and generalizable argument philosophers have often taken it to be; indeed, this reminds one why Pascal emphasized the need for figurative language in religion).

Recall that the context of “the wager,” in the words of Graeme Hunter, is to “persuade a generic unbeliever to live his life as if Christianity were true.” While the rewards for embarking on the path are ultimately “those of the next world,” they more vividly and with immediate relevance (which ‘emerge[s] by the end of his discussion’) are construed as “signal benefits to this present life” (Cottingham). Pascal understood spiritual praxis as involving a progressive transformation of our emotional attitudes (which makes space, so to speak, for divine grace). Hunter informs us that the unbeliever or religious sceptic will come “to see that his rejection of religion was in fact caused by the passions,” thus “he needs to weaken the passions that misdirect him and put more reasonable passions in their place. Pascal proposes a therapy to help him make that transition.” Or, as Cottingham (a well-known philosopher who is also a Catholic) further explains, Pascal’s wager falls into a philosophical and religious (or spiritual) tradition of “therapies of the soul” (what Martha Nussbaum calls the ‘therapy of desire’ in the case of Hellenistic philosophies like Stoicism, or instructions in what Pierre Hadot terms the ‘art of living’):

“…[T]he benefits that Pascal stresses at the culmination of his argument involve…progress in virtue and growth [as Hadot explains it, one renounces ‘the false values of wealth, honors, and pleasure, and turns toward the true values of virtue, contemplation, a simple life-style, and the simple happiness of existing’] towards contentment. ‘What harm will come if you make this choice?,’ he asks. You will renounce the ‘tainted pleasures’ of ‘glory’ and ‘luxury,’ but instead ‘you will be faithful, honest, humble, grateful, a doer of good works, a good friend, sincere and true.’ The carrot here is not so much carrot pie in the sky as the goal of beneficial internal transformation which is the aim of any sound system of spiritual praxis; you have much to gain, says Pascal, and little to lose.”

Posted by: Patrick S. O'Donnell | Apr 26, 2018 8:22:59 AM

YIKAM: It's good to remind ourselves of what you wrote above: whenever someone makes a morally normative claim, the nature and force of that normativity will depend on the deep and important topics you reference.

Marc: And a very welcome comment, too, as it sparked what I think is a useful discussion! (Incidentally, long time, no see!)

Asher: This is a phenomenal question, and I thank you for spending the time with the paper to come up with it. I have a lot to say about it, and it jumps a little ahead. So I will definitely come back to it in a post or a comment in the coming days (maybe first half of next week). To say one thing about it now, the epistemic challenge is indeed a challenge to all well-known proposed justifications of punishment. True, I happen to think it more forceful when applied to retributivism, but if I succeed in my claim that it successfully challenges retributivism, I'll count myself happy even if my subclaim about consequentialism convinces fewer people. But much more to come! This is just an IOU.

Posted by: Adam Kolber | Apr 26, 2018 6:54:19 AM

Ok, Adam. Just a comment noting vague similarities. For fun.

Posted by: Marc DeGirolami | Apr 26, 2018 6:49:34 AM

Adam: Of course you are correct that in order to write an article one must in essence put a fence around the subject and allow for some assumptions.

I suppose the better way to phrase my point is that an interesting aspect of moral risk to explore is whether it can exist at all. That's what I meant when I said that, in order for moral risk to exist, it seems we would have to discover a morality of universal laws (X is always wrong, etc.).

Something for me to ponder...

Posted by: YesterdayIKilledAMammoth | Apr 26, 2018 1:36:14 AM

I think I agree with Larry Solum's objection to your arguments, or would object on related lines. You say that consequentialists should have moral uncertainty about aspects of their position that support their approach to punishment, but that that uncertainty is counterbalanced by concerns that, if they don't punish on account of that uncertainty, and if it turns out that the doubtful aspects of their position are in fact right, they will under-punish, which will lead to disvalue in the form of under-deterrence, under-incapacitation, and so on. That's true, I think.

Then you say the same isn't so for retributivists because they just don't worry about the consequential disvalues of failing to punish; they worry, at most, about failing to punish people who deserve it, which retributivists think isn't as bad as punishing people who don't. And that may also be accurate enough, as a description of the retributivist position.

What troubles me about the argument, though, is that you argue that retributivists and consequentialists should have only these isolated doubts about certain dubious aspects of their positions, but not about their positions generally, which seem as dubious as those limited aspects. Shouldn't retributivists have moral uncertainty about the possibility that consequentialism is right generally, such that failing to punish is wrong in virtue of its causing disvalue? On the other hand, shouldn't consequentialists have moral uncertainty about consequentialism generally and therefore about whether failing to punish is particularly bad, or any worse than retributivists think it is?

I don't see why retributivists should be any more confident of the correctness of retributivism generally than they are of the existence of free will, which is one of the things you say they should have doubts about. And I also don't see why consequentialists should be any more confident about the correctness of consequentialism generally than they are about the equivalence of foreseen and intended consequences. It may be that if retributivists assign the right probability to the chance that consequentialism is correct, they would land on the see-saw that you say consequentialists are on; conversely (inversely? I inveterately mix these up) it might be that if consequentialists doubted their position to the degree they should, moral risk would inflect their views on punishment much more than you say it ought to. Perhaps both camps would converge at the same place -- or not, if both camps are relatively confident in their basic position.

I guess I have no idea how to decide how confident consequentialists or retributivists should be in their position, so it's very difficult for me to see where they should end up if they take your arguments to what seem to me to be their logical conclusion. I would wildly venture, though, that if you look around and notice that roughly half of the smart philosophers who work in ethics in roughly the same analytic way that you do have arrived at a position diametrically opposed to yours, you should conclude that there's a roughly 50% chance that you're wrong. And so if opinion is roughly split among analytic ethicists between retributivists and consequentialists, I would say that we should see a convergence in how moral risk affects how they think about punishment.

Posted by: Asher Steinberg | Apr 25, 2018 11:10:59 PM

The moral risk analysis above involves expected value calculations--the same sort of calculations used by casinos, car safety engineers, investment managers, and all of us in our lives to varying degrees. Whatever you think of Pascal's Wager, it's not a challenge to expected value reasoning. (Indeed, many challenges to Pascal's Wager are not about expected value calculations at all but about other relevant assumptions. For example, why think it more likely that an all-powerful deity rewards sycophantic cost-benefit calculators over bold visionaries who chart their own course in life?) I also don't think Pascal's Wager is a challenge to analyses of *moral* risk, since the Wager itself, at least as I remember it, is not about moral risk but prudential risk (e.g., "believe in an all-powerful deity because you will be rewarded for doing so" rather than "believe in an all-powerful deity because morality requires it"). So, the long and short of it is that Pascal's Wager may have indirect connections to my discussion of moral risk, but no one has identified such a connection here.

YIKAM: I'm not sure exactly what you mean by "universal" morality, but to be sure, how we go about analyzing moral risk may depend on what one takes to be the nature of morality. My claims are simply taking people's moral views as a given. I stipulated the view of our hypothetical 80%/20% person. I then discussed what was *rational* for him to do given his pre-existing moral beliefs.
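To see what that stipulation amounts to, here is a minimal sketch of the same sort of expected-value calculation applied to the hypothetical 80%/20% person; the numeric weights are stipulated purely for illustration and do no more than encode his own view that the possible evil dwarfs the benefit.

```python
# Expected moral value for the hypothetical 80%/20% person described above.
# The weights are stipulated for illustration only: a small benefit if
# eating meat is permissible, a very large harm if it is a grave evil.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

eat_meat = expected_value([(0.8, 1), (0.2, -1_000)])  # 0.8 - 200 = -199.2
abstain = expected_value([(1.0, 0)])                  # forgo the small benefit

# Given his own weightings, abstaining has the higher expected moral value.
print(eat_meat < abstain)  # True
```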

Posted by: Adam Kolber | Apr 25, 2018 9:41:53 PM

It seems one would first have to prove the existence of a universal morality before there could, indeed, be moral risk.

Posted by: YesterdayIKilledAMammoth | Apr 25, 2018 9:19:04 PM

@Marc...

wow..you read my mind, all two words of it.

And it's worth noting in this context that Pascal's Wager is not considered a valid foundation for religious belief. It may have satisfied the ultra-rational Pascal, but it pleases few others. The Catholic Church is explicit on this point. So to the extent that one views the legal law as flowing from the moral law, Adam's point is out of order.

Posted by: James | Apr 25, 2018 9:06:57 PM

Pascal's Wager!

Posted by: Marc DeGirolami | Apr 25, 2018 6:23:41 PM

Interesting, yet we have some issues here:

First, those hypothetical illustrations mix two totally different mental, factual, and moral configurations. In the one with the animals slaughtered, the risk is not really personal, of course. That is to say, the mistake would cost the person basically nothing; he doesn't even slaughter the animals himself (presumably). In the "explosive" one, by contrast, the cost is absolutely personal and would probably end the life of the person involved, rendering it, in retrospect, rather a 100 percent risk. The cost or harm would be irreversible. Talking is cheap, while being torn to pieces, well, that is a hell of a difference.

Also, the respectable author of the post uses the wrong semantic tools. In the case of the animals, the observer involved is not confident or unconfident; he doesn't need to be confident. He is simply contemplating and speculating about the general situation or issue. Confidence, or the lack or extent of it, has to do with the "explosive" issue, while the first issue has to do, mentally or psychologically, with an immediate, natural, intuitive, spontaneous stance.

Finally, one can't deal with punishment without unfolding comprehensive rationales for it. We lack two very basic ones here: to express the contempt of society for the acts or the violation of the law, and to deter others from violating the law. As such:

The perpetrator many times becomes a means rather than an end, a means for deterring others. That is to say, even if a judge were sure that full regret had been demonstrated, it wouldn't do! He would only reduce the sentence, yet impose it in the end. Regret, and absolute confidence that the transgression was a one-off, wouldn't do!! Such discretion is a hell of a game changer, of course.

Thanks


Posted by: El roam | Apr 25, 2018 5:05:08 PM
