In the current (March 22?) issue of Psychological Science, DeScioli et al. report a nice demonstration of how people take punishment into account in choosing how they will go about hurting someone else.*
In the main study, subjects could divide a dollar as (90,10), (10,90), or (85,0). The first number represents the divider's outcome, and the second represents another person's outcome, in cents. The (85,0) division resulted from letting a timer run out, hence from doing nothing; the (90,10) and (10,90) divisions were explicit choices.
The main result was that subjects often chose (90,10) when they did not expect to be punished by anyone, but chose (85,0) much more often when they knew that they could be punished by a third party.
The authors explain this result in terms of the difference between acts and omissions. They consider the (85,0) response to be an omission. Punishment for (85,0) was (demonstrably) less than for (90,10), and third-party punishers did think that the (90,10) option was worse. (Interestingly, the victims did not make a distinction, but the paper correctly points out that they may have been focusing more on outcomes than on the morality of the choice; such a focus would be consistent with the findings of Cushman et al. (2009).)
They go on to conclude that the bias toward harmful omissions over equally (or less) harmful acts could result from anticipation of punishment and thus be, in some sense, rational.
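To make that strategic logic concrete, here is a minimal sketch of the expected-payoff argument. It is mine, not the authors': the payoffs are theirs, but the punishment magnitudes are invented purely for illustration.

```python
# Illustrative model of the divider's choice under anticipated punishment.
# Payoffs (in cents) are from DeScioli et al.; the punishment values below
# are hypothetical, chosen only to show how the switch could be "rational".

OPTIONS = {
    "take (90,10)": 90,    # explicit choice: divider gets 90, other gets 10
    "give (10,90)": 10,    # explicit choice: divider gets 10, other gets 90
    "timeout (85,0)": 85,  # "omission": timer runs out, divider 85, other 0
}

def best_option(expected_punishment):
    """Pick the option maximizing payoff minus expected punishment (cents)."""
    net = {name: payoff - expected_punishment.get(name, 0)
           for name, payoff in OPTIONS.items()}
    return max(net, key=net.get), net

# No third-party punisher: taking (90,10) dominates.
print(best_option({}))

# Punisher present: if the explicit (90,10) choice draws enough more
# punishment than the timeout, letting the timer run out wins.
print(best_option({"take (90,10)": 8, "timeout (85,0)": 1}))
```

On this toy model, whenever the extra expected punishment for the explicit (90,10) choice exceeds its 5-cent advantage over the timeout, the "omission" maximizes net payoff. That is the sense in which the authors would call the pattern strategic rather than biased.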
I have several problems with the conclusions. First, the (85,0) option probably differs from the (90,10) option not just in being an omission but also in being seen as less intentional. Letting a timer time out could result from inattention, confusion, or difficulty in deciding what to do. To the extent that a harmful choice is unintentional, it should not be punished as severely (by any account, including the utilitarian accounts that DeScioli et al. dispute -- an accident is, after all, less deterrable). Thus, the choice of letting the timer time out could be understood as obfuscating the intent to gain at someone else's expense, rather than as an omission. The experiments included no manipulation check for equality of intention. By contrast, Spranca et al. (1991), which DeScioli et al. cite as an example of what they are arguing against, took great pains to show that intention was equated between acts and omissions, so that they could truly conclude that the perceived moral difference between acts and omissions could not be explained in terms of a normatively correct distinction based on intention. Thus, the present experiments may have nothing to do with the act/omission distinction in its pure form, with intention equated. When I have made normative arguments about the moral relevance of the act-omission distinction, this pure form is the only relevant one. We know nothing, yet, about whether the distinction that people make in this pure form is influenced by anticipation of punishment.
On the other hand, the claim that omission bias in general could be explained by anticipation of punishment is roundly refuted by many examples, including papers cited by DeScioli et al. themselves yet ignored in their discussion. For example, several of those papers show omission bias (my term for the distinction) in vaccination decisions, where punishment is in fact more likely for the omission (not vaccinating) than for the act. In other cases, punishment is clearly out of the question because the intent to do harm would not be detected, and the possibility of punishment is never mentioned (just as in the control condition of DeScioli et al.).
Note that DeScioli et al. cannot test for a bias in the absence of punishment, because their "omission" condition, (85,0), is actually worse for both people than (90,10) (the divider loses 5 cents and the other person loses 10), thus failing to equate the two conditions.
The penultimate paragraph says: "Our experiments are relevant to a broader issue about how traditional normative theories are used in psychology. Previous work labeled the omission effect as a bias because people’s decisions violated normative theories. Although normative theories can be useful for applications such as policy making, the present work illustrates an important limitation. By measuring performance against normative theories, researchers misleadingly label strategic decision making—choosing in a way that takes into account how other people will respond—as error (Cosmides & Tooby, 1994; DeScioli & Kurzban, in press). This mischaracterization can preclude deeper investigation into the highly organized mental processes that regulate decisions in strategic environments."
Now let us suppose (putting aside the above problems) that the paper had supported its empirical claim that the distinction between acts and omissions in choice was entirely the result of differences in anticipated punishment. So there is no bias in choice. But there is still a bias in third-party judgments. Can those too be explained as strategic? I don't see how, without assuming some bias somewhere in the system.
*I am commenting here because Psychological Science does not publish critiques of this sort.