Monday, January 10, 2011

Political extremism and my-side bias

The recent shootings in Tucson have called attention to the lack of civility in politics and to the use of intemperate rhetoric, especially by right-wing politicians and media personalities, sometimes to the point of advocating violence (Sharron Angle's "second amendment remedies"). One more culprit should be mentioned: my-side bias.

My-side bias is a group of psychological processes that defend beliefs and choices against arguments on the other side. These processes include selective exposure (failing to look for information that might impugn the belief, then behaving as if no such information exists) and biased assimilation (responding to positive evidence by strengthening the belief but not also responding to negative evidence by weakening it). They lead to such phenomena as "belief overkill", in which people think that no argument exists for the other side. Thus, opponents of the health care bill are not content to point to its costs as sufficient reason to reject it; they must also convince themselves that it will not improve anyone's access to care. The same holds for those who oppose regulation of carbon dioxide emissions: while it might be sufficient for them to argue that the cost of regulation is too great, they also convince themselves that global warming does not exist or that people cannot control it.

People differ in the extent of these biases. At least part of the difference seems to result from different ideologies about how beliefs should be formed and maintained. Some people believe that self-questioning is a kind of disloyalty or betrayal. Others take the attitude of science and other modern scholarship, which is that beliefs must be open to challenge and that modifying them in response to challenges is the only way to approximate truth. (Hence the public confusion about scientists who are skeptical of what other scientists believe. People see this as a weakness rather than as a necessary part of science itself.) Some religions actually teach people not to question, and the resulting attitudes toward thought itself may help maintain adherence to these religions. Because of these differences in ideology, it may be difficult to reform education to put more emphasis on the opposite of my-side bias, which has been called "actively open-minded thinking".

The extreme example of my-side bias is paranoia. Paranoid thinking seeks evidence for delusional beliefs and rejects evidence against them. The extremeness of paranoid thinking may not be a distinct category, separate from how many "sane" people think; it may just be one end of a continuum. Closer to that end are the commentators and politicians who are now being discussed. Thus my-side bias is at work even when we can write off assassins as mad.

I agree with those commentators who say that this paranoid style is not limited to the political right, although it does seem concentrated there now. I remember in the 1960s when many people I knew, mostly students, thought that a left-wing revolution in the U.S. was imminent. I thought to myself, "Don't these people ever get out of their holes and go talk to their relatives and neighbors back home?" They may even have done so, but they didn't listen or hear. I now wonder, in the same way, whether opponents of the health-care bill ever talk to anyone who is excluded from health care because of a pre-existing condition.

A good bumper sticker for the times: "Don't believe everything you think."

Wednesday, January 5, 2011

Moral rules

Bennis, Medin, and Bartels have an article, "The costs and benefits of calculation and moral rules", in Perspectives on Psychological Science (vol. 5, no. 2, 2010), which I just became aware of. Bazerman and Greene have a nice reply in the same issue. I want to say a few things that I think were not in the reply.

The paper concerns demonstrations such as omission bias, in which subjects in experiments using hypothetical cases often prefer harmful omissions to less harmful acts. The claim behind these demonstrations is that subjects are making an error: in terms of consequences, the kinds of judgments they make would lead to more harm. An example is opposing a vaccination even though the side effects of the vaccine are much less likely than, and no more serious than, the disease it prevents. There are other examples.
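
To make the consequence comparison concrete, here is a minimal sketch in Python. The probabilities are hypothetical numbers of my own choosing, not figures from any study:

```python
# Hypothetical numbers, chosen only for illustration.
p_disease = 0.01       # chance of the disease if unvaccinated (assumed)
p_side_effect = 0.001  # chance of an equally serious vaccine side effect (assumed)
harm = 1.0             # severity of either bad outcome, on a common scale

expected_harm_vaccinate = p_side_effect * harm  # 0.001
expected_harm_omit = p_disease * harm           # 0.010

print(f"vaccinate: {expected_harm_vaccinate:.3f}")
print(f"omit:      {expected_harm_omit:.3f}")
# On these numbers the omission carries ten times the expected harm,
# yet subjects showing omission bias prefer it.
```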

The paper seems to alternate among several different claims, some of which are more clearly stated than others:

1. There is nothing erroneous about this judgment, even though in the real world it can lead to more harm. If it follows some moral principle, such as "do no harm", then it is right. This claim is not defended, and a defense of it would require an explicit rejection of arguments that I and others (such as Richard Hare) have made for a utilitarian standard for judgments. Nobody seems to bother with these arguments.

It does not suffice to point out that many people disagree with my conclusion. Academic debate is not settled by opinion polls.

Moreover, as I have argued, even if it is, in some sense, morally right to make choices that lead to worse outcomes, the study of these judgments may still help us understand why outcomes are not as good as they could be, especially when the judgments affect public policy.

2. The judgment is correct because it takes into account various things that were not in the scenario. It is thus correct in that it would NOT lead to worse consequences, once we fill in the details and consider all the consequences. (If it still leads to worse consequences even when these other things are considered, then we are back at my first point.) Several findings dispute this claim. In particular, many studies have asked subjects not just what they would do or should do but also what would produce the best consequences, all things considered (e.g., Baron and Jurney, 1993; Baron, 1995; and several other papers). In these studies, people admit that their choices do not produce the best consequences.

It is also possible that these extra factors are post-hoc rationalizations brought in to justify the initial judgment. But this has not been shown.

3. The judgments may be incorrect, but it would be too difficult to go through the "calculations" required for correct judgments. Although it is true that calculations are sometimes required, and that people sometimes lack access to their results, the scenarios under study require no calculation beyond seeing which of two numbers is larger, and sometimes not even that. Moreover, we would expect that, even in situations in which the calculations are done FOR the decision maker, she would still ignore them, just as legislators often ignore the results of economic analysis or empirical research.

4. Sometimes it is best to follow simple moral rules even when it appears that breaking the rule will lead to better consequences. The paper cites examples from statistical judgment, and the point is correct there. It can also be correct in the moral domain. Specifically, a good decision analysis will take into account the possibility that the simplest analysis is wrong; it will look at the effects of possible errors. Similarly, if you conclude that an act of terrorism is for the greater good, a thorough analysis of this decision would require you to consider the possibility that you misjudged something along the way in drawing this conclusion. Going by base rates, you might consider the proportion of past terrorists who concluded that their acts were for the greater good and were obviously wrong. You might also consider the fact that they were just as sure they were right as you are now, so confidence is no antidote. Taking this information into account, you might then decide that following the simple moral rule against harming non-combatants is best.
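
The base-rate argument can be made concrete with a minimal sketch. All of the numbers below are hypothetical, of my own choosing; nothing comes from the paper:

```python
# Naive analysis: breaking the rule looks better than following it.
value_if_right = 10.0    # net good if the "greater good" conclusion is correct (assumed)
value_if_wrong = -100.0  # net harm if the conclusion is a misjudgment (assumed)
value_follow_rule = 0.0  # baseline: do not harm non-combatants

# Base rate: proportion of past actors who drew the same conclusion,
# with the same confidence, and were obviously wrong (assumed).
p_wrong = 0.9

ev_break_rule = (1 - p_wrong) * value_if_right + p_wrong * value_if_wrong
print(ev_break_rule)  # -89.0, far worse than the 0.0 from following the rule
# Note that p_wrong comes from the track record of equally confident people,
# so present confidence does nothing to lower it.
```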

But note carefully: you would decide this precisely because you believe that the rule would lead to better expected consequences, despite initial appearances to the contrary. No self-deception is required. (The same goes for the statistical examples. We do not need to deceive ourselves when we choose a simpler statistical procedure in order to avoid over-fitting the data.) Note, too, that this explanation is refuted by subjects who say that their preferred option differs from the option that THEY think produces the best consequences (my second point above).
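
The statistical parallel can be made just as concrete. The following sketch uses NumPy on synthetic data to compare a simple and a flexible model on held-out points:

```python
import numpy as np

# Synthetic data: a noisy linear relationship.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)

# Fit on alternating points, test on the rest.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.3f}")
# The degree-9 polynomial hugs the training points, but the straight line
# typically predicts the held-out points better. Preferring the simpler
# procedure is justified by expected performance; no self-deception needed.
```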

5. It could be best to follow rules even when a good analysis would show that they do not lead to the best expected outcomes. Now we are back to my first point. Why is it best? Why should we follow rules that harm people more than they need to be harmed? How can we justify this?