Wednesday, January 5, 2011

Moral rules

Bennis, Medin, and Bartels have an article on "The costs and benefits of calculation and moral rules" in Perspectives on Psychological Science (vol. 5, no. 2, 2010), which I only recently became aware of. Bazerman and Greene have a nice reply in the same issue. I want to say a few things that I think were not in the reply.

The paper concerns demonstrations such as omission bias, in which subjects in experiments using hypothetical cases often prefer harmful omissions to less harmful acts. The claim of these demonstrations is that subjects are making an error: in terms of consequences, the kinds of judgments they make would lead to more harm. An example is opposing vaccination, even though the side effects of the vaccine are much less likely, and no more serious, than the disease it prevents. There are other examples.
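To make the vaccination example concrete, here is a minimal sketch of the expected-harm comparison. The probabilities are invented for illustration; they are not taken from the paper or from any real vaccine.

```python
# Expected-harm comparison for the hypothetical vaccination case.
# All numbers are invented for illustration.

p_disease = 0.01       # chance of the disease if unvaccinated
p_side_effect = 0.001  # chance of a vaccine side effect (10x less likely)
harm = 1.0             # the side effect is assumed no worse than the disease

expected_harm_unvaccinated = p_disease * harm
expected_harm_vaccinated = p_side_effect * harm

print(expected_harm_unvaccinated)  # 0.01
print(expected_harm_vaccinated)    # 0.001
# Vaccinating minimizes expected harm, yet subjects showing omission
# bias prefer the harmful omission to the less harmful act.
```

The only "calculation" required here is seeing which of two numbers is larger, a point that matters for claim 3 below.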

The paper seems to alternate among several different claims, some of which are more clearly stated than others:

1. There is nothing erroneous about this judgment, even though in the real world it can lead to more harm. If it follows some moral principle, such as "do no harm", then it is right. This claim is not defended, and a defense of it would require an explicit rejection of arguments that I and others (such as Richard Hare) have made for a utilitarian standard for judgments. Nobody seems to bother with these arguments.

It does not suffice to point out that many people disagree with my conclusion. Academic debate is not settled by opinion polls.

Moreover, as I have argued, even if, in some sense, it is morally right to make choices that lead to worse outcomes, the study of these judgments, especially those that affect public policy, may still help us understand why outcomes are not as good as they could be.

2. The judgment is correct because it takes into account various things that were not in the scenario. It is thus correct in that it would NOT lead to worse consequences, once we fill in the details and consider all the consequences. (If it still leads to worse consequences even when these other things are considered, then we are back at my first point.) There are several findings that dispute this claim. In particular, many studies have asked subjects not just what they would do or should do but also what would produce the best consequences all things considered (e.g., Baron and Jurney, 1993; Baron, 1995; and several other papers). In these studies people admit that their choices do not produce the best consequences.

It is also possible that these extra factors are post-hoc rationalizations brought in to justify the initial judgment. But this has not been shown.

3. The judgments may be incorrect, but it would be too difficult to go through the "calculations" required for correct judgments. Although it is true that calculations are sometimes required, and that people sometimes lack access to their results, the scenarios under study do not require any calculation beyond seeing which of two numbers is larger, and sometimes not even that. Thus we would expect that, even in situations in which the calculations are done FOR the decision maker, she would still ignore them, just as legislators often ignore the results of economic analysis or empirical research.

4. Sometimes it is best to follow simple moral rules even when it appears that breaking the rule will lead to better consequences. The paper cites examples from statistical judgment, and it is correct there. It can also be correct in the moral domain. Specifically, a good decision analysis will take into account the possibility that the simplest analysis is wrong. It will look at the effects of possible errors. Similarly, if you conclude that an act of terrorism is for the greater good, a thorough analysis of this decision would require that you consider the possibility that you misjudged something along the way in drawing this conclusion. Going by base rates, you might consider the proportion of terrorists in the past who concluded that their acts were for the greater good and were obviously wrong. You might also consider the fact that they were just as sure they were right as you are now, so confidence is not an antidote. Taking this information into account, you might then decide that following the simple moral rule against harming non-combatants is best, as the sketch below illustrates.
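Here is a minimal sketch of that base-rate adjustment. The payoffs and the base rate are invented for illustration; the point is only that the correction can reverse the naive conclusion.

```python
# Adjusting an expected-value estimate for the chance that your own
# analysis is mistaken. All numbers are invented for illustration.

benefit_if_right = 10.0  # net good if your analysis really is correct
harm_if_wrong = -100.0   # net harm if you misjudged something

# Base rate: the proportion of past actors who reached the same
# conclusion and turned out to be right. They were as confident as
# you are now, so your confidence cannot raise this estimate.
p_right = 0.05

ev_break_rule = p_right * benefit_if_right + (1 - p_right) * harm_if_wrong
ev_follow_rule = 0.0  # the simple rule: do no harm to non-combatants

print(ev_break_rule)   # -94.5
print(ev_follow_rule)  # 0.0
# Once the possibility of error is priced in, the rule wins on
# expected consequences alone.
```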

But note carefully: you would decide this exactly because you believe that the rule would lead to better expected consequences, despite initial appearances to the contrary. No self-deception is required. (The same goes for the statistical examples. We do not need to deceive ourselves when we choose a simpler statistical procedure in order to avoid over-fitting the data; see the sketch below.) Now note that this explanation, too, is refuted by subjects who say that their preferred option is different from the option that THEY think produces the best consequences (my second point).
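The statistical analogy can be made equally concrete. In this hypothetical simulation (not from the paper), the simpler model is preferred because it is expected to predict better, not because we hide the complex model's fit from ourselves.

```python
# Choosing the simpler procedure to avoid over-fitting: a degree-1
# fit to noisy linear data usually predicts held-out points better
# than a degree-9 fit, even though the latter fits the training
# points almost perfectly.
import numpy as np

rng = np.random.default_rng(0)
true_fn = lambda x: 2 * x + 1
x_train = np.linspace(0, 1, 10)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = true_fn(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(float(test_mse), 3))
# The simpler model is chosen on its expected performance; nothing
# about the choice requires self-deception.
```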

5. It could be best to follow rules even when a good analysis would show that they do not lead to the best expected outcomes. Now we are back to my first point. Why is it best? Why should we follow rules that harm people more than they need to be harmed? How can we justify this?
