Thursday, June 9, 2011
This column by Thomas Friedman reminded me of an issue that has been on my mind for a while: the inter-relatedness of world problems. I discussed it in the book for which this blog is named, especially in the chapter on population.
The world today has a long list of inter-related problems: food, fresh water, energy production, biodiversity, rising oceans from higher temperatures (resulting in shrinking coastal lands where many people live), health, unemployment of the young, catastrophic risks from technology, armed conflict, poverty, over-population, and antagonism against migrants. These problems compete with one another. Efforts to reduce global warming with clean energy require land for wind farms and solar arrays. Biofuels compete with agriculture for land. Rising oceans lead to migration.
Many people concerned with these problems view them in isolation. Some groups are concerned with food. Others with biodiversity. Others with energy. Here is an example.
This isolation of problems closes off the possibility of solving some problems as a side-effect of solving others. It may be more efficient to solve the problem of immigration than to prevent the rise of the oceans. It may be easier to reduce the growth of population than to exploit more energy, beyond some point in pursuing the latter. The idea that each problem requires its own solution is much like the fixed-pie bias in negotiation, where people tend to negotiate one issue at a time rather than look for trade-offs among issues.
When problems are isolated, the focus on one makes all the others seem uncontrollable, out of the picture. Discussions of food and water always begin with the point that population growth is going to cause problems, but they then treat population growth as if it were uncontrollable.
In fact, it would be relatively cheap simply to meet women's unmet demand for birth control, as a start. Other steps, only slightly more costly, such as improving the education of girls, would reduce desired family size. Part of the problem is religion. It is opposition from the religious right that prevents the U.S. government from spending more on international family planning, for example. The idea of promoting birth control has become so politically incorrect that even the organizations that do it, such as EngenderHealth, do not emphasize it in public.
This is just one example. When problems are interrelated, the solution to any one of them helps all the others, and the search for efficient solutions should take this into account.
Friday, April 8, 2011
Abortion
Political disputes about abortion usually involve repetitions of bad arguments, empty slogans, and upsetting images. The assumption seems to be that reason is irrelevant and that the important thing is to motivate those who are already convinced. It is as if everyone has accepted the theory that moral reasoning is post-hoc rationalization and that moral disputes are intellectually no different from sports events in which fans cheer for one side or the other.
Here is a very brief summary of the argument for abortion, drawing heavily on the work of Peter Singer (especially in "Practical ethics"). The point is that reasoning is relevant. (I discuss relevant issues at greater length elsewhere, particularly here.)
Abortion is indeed killing, but that does not settle the issue. Nor does it settle the issue to say that the fetus is "human", since this still begs the question of when and why it is wrong to kill a human. The following reasons come to mind:
First, killing a fetus is a harm to the parents if the fetus is wanted. This is an issue only if the abortion is disputed by (for example) the father. Usually this is not an issue, and is surely not the issue that riles the anti-abortion movement. When relevant, it is a family dispute.
Second, the means of death can be painful. This is possibly an issue for late-term abortions, when the pain system is developed. The solution would be anesthesia for the fetus. The same argument would apply to the killing of animals. This too seems beside the point most of the time.
Third, abortion prevents a stream of future experiences for the person who would be born. On balance these experiences would probably be positive, relative to not having them at all. This argument applies to animals as well as people, and it applies to any choice that prevents a person (or animal) from existing: not just abortion but also birth control and abstinence from sex. Carried to the limit, it would amount to a command to "be fruitful and multiply", until we reach the point where the world is so crowded that the totality of negative experiences resulting from an additional person is as great as the totality of positive ones. (Derek Parfit discusses this issue at length in "Reasons and persons".)
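To make the limiting condition concrete, here is a toy calculation (the functional forms and constants are entirely invented for illustration; nothing here comes from Parfit):

```python
# Toy model of the "be fruitful and multiply" limit: each added person
# contributes positive experience, but crowding imposes harms that grow
# faster than linearly. All functional forms and constants are invented.

def net_experience(n, base_good=10.0, crowding=0.001):
    """Total net experience for a population of n people."""
    good = base_good * n    # positive experiences scale with population
    bad = crowding * n * n  # crowding harms grow with the square
    return good - bad

# Adding a person is worthwhile only while it raises total experience:
n = 1
while net_experience(n + 1) > net_experience(n):
    n += 1
print(f"With these made-up numbers, adding people stops paying off at n = {n}")
# Beyond this point (here n = 5000), the negative experiences an extra
# person brings outweigh the positive ones -- the limit described above.
```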
Although this argument is interesting, I do not see why we should accept it. If we go one step back and ask why experiences are valued, we find that they are valued because people want them. That is, people have goals, or wants, for having good experiences (and, presumably, so do animals). So, when we create a person, we are in essence creating goals (or wants) and then satisfying them. But, if the goals do not yet exist, why is it a requirement of morality or rationality to create them just so that they can be satisfied?
For example, why are you obliged to inculcate in me a taste for contemporary pop music? Even if it is true that, once I had the goal of listening to pop music, I would get positive experiences from listening to such music, it is possible that I do not want that goal. It might be inconsistent with my other goals. The Golden Rule thus implies that you have no particular obligation to create goals in me, because I may not want those goals. Nor is it necessarily rational for me to create such a goal or taste in myself. Whether I should do this depends on my other goals.
We have no moral obligation to create beings just so that we can satisfy the goals that come into existence. The Golden Rule does not apply here because the "others" in "do unto others" do not exist. It is the goals of those who exist that determine whether it is rational or moral to create new goals.
A final issue is potential. It is true that a fetus is a potential adult human. But so is every sperm-egg pair, and it is hard to see why their physical joining together is relevant to the argument concerning potential. It is just a salient step in the pathway. The argument from potential also raises the same questions as the argument from experience. It is not clear why it is moral or rational to create new people and new goals, if doing so is inconsistent with our current goals.
Some of our current goals, in fact, may imply that limiting births is a good thing. We want humans to have good lives once they come to exist; they have goals and wants then. (Of course, we also want enough of them to ensure the long-term survival of humanity, but, arguably, long-term survival is more likely if the rate of population growth is slower than it is now.) We want particular children to have good lives. If we are going to limit family size, then we want to time the bearing of children so that they will be maximally wanted when they arrive, and maximally likely to develop well.
If this sounds like an argument for "abortion as a method of birth control", it is. But it does not imply that abortion is just as good as any other method of birth control. Clearly abortion has many disadvantages, including emotional effects. But these do not make it worse than no birth control at all. And often, as in the case of fetuses with serious genetic impairments, pregnancy complications, or failure of other methods, abortion is not the method of choice, but a fall-back.
Tuesday, March 22, 2011
Is the act-omission distinction strategic? Comment on DeScioli et al. (2011).
In the current (March 22?) issue of Psychological Science, DeScioli et al. report a nice demonstration of how people take punishment into account in choosing how they will go about hurting someone else.*
In the main study, subjects could divide a dollar as (90,10), (10,90), or (85,0). The first number represents the divider's outcome, and the second represents another person's outcome, in cents. The (85,0) condition was the result of letting a timer run out, hence doing nothing. The (90,10) and (10,90) conditions were explicit choices.
The main result was that subjects often chose (90,10) when they did not expect to be punished by anyone, but chose (85,0) much more often when they knew that they could be punished by a third party.
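To see how anticipated punishment could flip the choice, here is a minimal sketch of the strategic account; the fine sizes are my own illustrative assumptions, not numbers from the paper:

```python
# Toy model of the divider's choice. Payoffs are in cents, written as
# (divider, other) above. The expected fines are hypothetical; the paper's
# logic only requires that the explicit act draw more punishment than the
# omission-like timeout.

OPTIONS = {
    "act (90,10)": 90,      # explicit selfish choice
    "act (10,90)": 10,      # explicit generous choice
    "omission (85,0)": 85,  # let the timer run out
}

EXPECTED_FINE = {
    "no punisher":   {"act (90,10)": 0,  "act (10,90)": 0, "omission (85,0)": 0},
    "with punisher": {"act (90,10)": 30, "act (10,90)": 0, "omission (85,0)": 5},
}

for condition, fines in EXPECTED_FINE.items():
    net = {option: payoff - fines[option] for option, payoff in OPTIONS.items()}
    best = max(net, key=net.get)
    print(f"{condition}: best option is {best} (net payoffs: {net})")

# With no punisher, (90,10) wins (90 vs. 85 cents). Once the explicit act
# is expected to be fined more heavily than the timeout, (85,0) wins.
```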
The authors explain this result in terms of the difference between acts and omissions. They consider the (85,0) response to be an omission. Punishment for (85,0) was (demonstrably) less than for (90,10), and third-party punishers did think that the (90,10) option was worse. (Interestingly, the victims did not make a distinction, but the paper correctly points out that they may have been focusing more on outcomes than on the morality of the choice; such a focus would be consistent with the findings of Cushman et al. (2009).)
They go on to conclude that the bias toward harmful omissions over equally (or less) harmful acts could result from anticipation of punishment and thus be, in some sense, rational.
I have several problems with the conclusions. First, the (85,0) option probably differs from the (90,10) option not just in being an omission but also in being seen as less intentional. Letting a timer run out could result from inattention, confusion, or difficulty in deciding what to do. To the extent that a harmful choice is unintentional, it should not be punished as severely (by any account, including the utilitarian accounts that DeScioli et al. dispute -- it is, after all, less deterrable if it is an accident). Thus, the choice of letting the timer run out could be understood as obfuscating the intent to gain at someone else's expense, rather than as an omission. The experiments had no manipulation check for equality of intention. By contrast, Spranca et al. (1991), which DeScioli et al. cite as an example of what they are arguing against, took great pains to show that intention was equated between acts and omissions, so that they could truly conclude that the perceived moral difference between acts and omissions could not be explained in terms of a normatively correct distinction based on intention. Thus, the experiments may have nothing to do with the act/omission distinction in its pure form, with intention equated. When I have made normative arguments about the moral relevance of the act-omission distinction, the pure form is the only relevant one. We know nothing, yet, about whether the distinction that people make in this form is influenced by anticipation of punishment.
On the other hand, the claim that omission bias in general could be explained by anticipation of punishment is roundly refuted by many examples, including those cited by DeScioli et al. yet ignored in their discussion. For example, several of those papers show omission bias (my term for the distinction) in vaccination decisions, where, in fact, punishment is more likely for the omission (not vaccinating) than for the act. In other cases, it is clear that punishment is out of the question because the intent to do harm would not be detected, and the possibility of punishment is not mentioned (just as in the control condition of DeScioli et al.).
Note that DeScioli et al. cannot test for a bias in the absence of punishment, because their "omission" condition (85,0) is actually worse for both people affected, thus failing to equate the two conditions.
The penultimate paragraph says: "Our experiments are relevant to a broader issue about how traditional normative theories are used in psychology. Previous work labeled the omission effect as a bias because people’s decisions violated normative theories. Although normative theories can be useful for applications such as policy making, the present work illustrates an important limitation. By measuring performance against normative theories, researchers misleadingly label strategic decision making—choosing in a way that takes into account how other people will respond—as error (Cosmides & Tooby, 1994; DeScioli & Kurzban, in press). This mischaracterization can preclude deeper investigation into the highly organized mental processes that regulate decisions in strategic environments."
Now let us suppose (putting aside the above problems) that the paper had supported its empirical claim that the distinction between acts and omissions in choice was entirely the result of differences in anticipated punishment. So there is no bias in choice. But there is still a bias in third-party judgments. Can those too be explained as strategic? I don't see how, without assuming some bias somewhere in the system.
*I am commenting here because Psychological Science does not publish critiques of this sort.
Monday, January 10, 2011
Political extremism and my-side bias
The recent shootings in Tucson have called attention to the lack of civility in politics and the use of intemperate rhetoric, especially by right-wing politicians and media personalities, sometimes to the point of advocating violence (Sharron Angle's "Second Amendment remedies"). One more culprit should be mentioned: my-side bias.
My-side bias is a group of psychological processes that defend beliefs and choices against arguments on the other side. These processes include selective exposure (failing to look for information that might impugn the belief, then behaving as if no such information exists) and biased assimilation (responding to positive evidence by strengthening the belief but not also responding to negative evidence by weakening it). They lead to such phenomena as "belief overkill", in which people think that no argument exists for the other side. Thus, opponents of the health care bill are not content to point to its costs as sufficient reason to reject it; they must also convince themselves that it will not improve anyone's access to care. The same goes for those who oppose regulation of carbon dioxide emissions: while it might be sufficient for them to argue that the cost of regulation is too great, they also convince themselves that global warming does not exist or that people cannot control it.
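A toy simulation (my own construction, not from any cited study) shows how biased assimilation alone can drive a belief toward certainty even when the evidence is exactly balanced:

```python
import math
import random

LOG_LR = math.log(2.0)  # each observation carries a 2:1 likelihood ratio

def final_belief(discount, evidence):
    """Start at belief 0.5; discount < 1 underweights disconfirming evidence."""
    log_odds = 0.0
    for confirms in evidence:
        log_odds += LOG_LR if confirms else -LOG_LR * discount
    return 1 / (1 + math.exp(-log_odds))

# Exactly balanced evidence: 100 confirming and 100 disconfirming pieces.
evidence = [True] * 100 + [False] * 100
random.shuffle(evidence)  # order does not matter for the final belief

print(f"unbiased (full weight both ways):       {final_belief(1.0, evidence):.3f}")
print(f"biased (disconfirmation at 20% weight): {final_belief(0.2, evidence):.3f}")
# Output: the unbiased belief ends at exactly 0.500; the biased reasoner
# ends essentially certain (1.000), from the very same mixed evidence.
```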
People differ in the extent of these biases. At least some of the differences seem to result from different ideologies about how beliefs should be formed and maintained. Some people believe that self-questioning is a kind of disloyalty or betrayal. Others take the attitude of science and other modern scholarship, which is that beliefs must be open to challenge and that modifying them in response to challenges is the only way to approximate truth. (Hence the public confusion about scientists who are skeptical of what other scientists believe: people see this skepticism as a weakness rather than as a necessary part of science itself.) Some religions actually teach people not to question, and the resulting attitudes toward thought itself may help maintain adherence to these religions. Because of these differences in ideology, it may be difficult to reform education to put more emphasis on the opposite of my-side bias, which has been called "actively open-minded thinking".
The extreme example of my-side bias is paranoia. Paranoid thinking seeks evidence for delusional beliefs and rejects evidence against them. Paranoid thinking may not be a distinct category, different in kind from how many "sane" people think; it may just be one end of a continuum. Closer to the extreme are those commentators and politicians who are now being discussed. Thus my-side bias is at work even when we can write off assassins as mad.
I agree with those commentators who say that this paranoid style is not always limited to the political right, although it does seem to be concentrated there now. I remember in the 1960s when many people I knew, mostly students, thought that a left-wing revolution in the U.S. was imminent. I thought to myself, "Don't these people ever get out of their holes and go talk to their relatives and neighbors back home?" They may have even done so, but they didn't listen or hear. I now wonder the same about whether opponents of the health-care bill ever talk to anyone who is excluded from health care because of a pre-existing condition.
A good bumper sticker for the times: "Don't believe everything you think."
Wednesday, January 5, 2011
Moral rules
Bennis, Medin, and Bartels have an article, "The costs and benefits of calculation and moral rules", in Perspectives on Psychological Science (vol. 5, no. 2, 2010), which I just became aware of. Bazerman and Greene have a nice reply in the same issue. I want to say a few things that I think were not in the reply.
The paper concerns demonstrations such as omission bias, in which subjects in experiments using hypothetical cases often prefer harmful omissions to less harmful acts. The claim of these demonstrations is that subjects are making an error. In particular, in terms of consequences, the kinds of judgments they make would lead to more harm. An example is opposing a vaccination, even though the side effects of the vaccine are much less likely and no more serious than the disease it prevents. There are other examples.
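In expected-harm terms, the vaccination case requires no more than comparing two products; here is a minimal sketch with invented probabilities:

```python
# Expected-harm comparison behind the vaccination example. The probabilities
# are hypothetical, chosen only so that the vaccine is clearly the safer option.

p_vaccine_harm = 0.00001  # chance of a serious side effect from the vaccine
p_disease_harm = 0.001    # chance of serious harm from the disease if unvaccinated
severity = 1.0            # equal severity on both sides, as in the example

expected_harm_act = p_vaccine_harm * severity       # vaccinate
expected_harm_omission = p_disease_harm * severity  # do not vaccinate

print(f"expected harm if vaccinating:     {expected_harm_act:.5f}")
print(f"expected harm if not vaccinating: {expected_harm_omission:.5f}")
# Omission bias is the preference for the omission even though its expected
# harm here is 100 times greater than that of the act.
```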
The paper seems to alternate among several different claims, some of which are more clearly stated than others:
1. There is nothing erroneous about this judgment, even though in the real world it can lead to more harm. If it follows some moral principle, such as "do no harm", then it is right. This claim is not defended, and a defense of it would require an explicit rejection of arguments that I and others (such as Richard Hare) have made for a utilitarian standard for judgments. Nobody seems to bother with these arguments.
It does not suffice to point out that many people disagree with my conclusion. Academic debate is not done by opinion polls.
Moreover, as I have argued, even if, in some sense, it is morally right to make choices that lead to worse outcomes, the study of these judgments may still help us understand why outcomes are not as good as they could be, especially judgments that affect public policy.
2. The judgment is correct because it takes into account various things that were not in the scenario. It is thus correct in that it would NOT lead to worse consequences, once we fill in the details and consider all the consequences. (If it still leads to worse consequences even when these other things are considered, then we are back at my first point.) There are several findings that dispute this claim. In particular, many studies have asked subjects not just what they would do or should do but also what would produce the best consequences all things considered (e.g., Baron and Jurney, 1993; Baron, 1995; and several other papers). In these studies people admit that their choices do not produce the best consequences.
It is also possible that these extra factors are post-hoc rationalizations brought in to justify the initial judgment. But this has not been shown.
3. The judgments may be incorrect, but it would be too difficult to go through the "calculations" required for correct judgments. Although it is true that calculations are sometimes required, and sometimes people do not have access to their results, the scenarios under study do not require any calculation beyond seeing which of two numbers is larger, and sometimes not even that. Thus, we would expect that, in situations in which calculations are done FOR the decision maker, she would still ignore them, just as legislators often ignore the results of economic analysis or empirical research.
4. Sometimes it is best to follow simple moral rules even when it appears that breaking the rule will lead to better consequences. The paper cites examples from statistical judgment, and the point is correct there. It can also be correct in the moral domain. Specifically, a good decision analysis will take into account the possibility that the simplest analysis is wrong. It will look at the effects of possible errors. Similarly, if you conclude that an act of terrorism is for the greater good, a thorough analysis of this decision would require that you consider the possibility that you misjudged something along the way in drawing this conclusion. Going by base rates, you might consider the proportion of terrorists in the past who concluded that their acts were for the greater good and were obviously wrong. You might also consider the fact that they were just as sure they were right as you are now, so confidence is not an antidote. Taking this information into account, you might then decide that following the simple moral rule against harming non-combatants is best, as the sketch below illustrates.
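Here is a rough worked version of that error-adjusted analysis; all the numbers are mine, chosen only to show the structure of the argument:

```python
# Error-adjusted analysis of breaking a moral rule "for the greater good".
# All quantities are hypothetical units of good, invented for illustration.

p_right = 0.05        # base rate: how often such "greater good" judgments
                      # have turned out to be correct in the past
gain_if_right = 100   # good achieved if the judgment is in fact correct
loss_if_wrong = 1000  # harm done if the judgment is one of the usual errors

ev_break_rule = p_right * gain_if_right - (1 - p_right) * loss_if_wrong
ev_follow_rule = 0.0  # the rule forgoes the gain but also avoids the harm

print(f"expected value of breaking the rule:  {ev_break_rule:+.0f}")   # -945
print(f"expected value of following the rule: {ev_follow_rule:+.0f}")  # +0
# Even though breaking the rule looks best to the actor at first glance,
# the error-adjusted expectation favors following it.
```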
But note carefully: you would decide this exactly because you believe that the rule would lead to better expected consequences, despite initial appearances to the contrary. No self-deception is required. (The same goes for the statistical examples. We do not need to deceive ourselves when we choose a simpler statistical procedure in order to avoid over-fitting the data.) Note also that this explanation, too, is refuted by subjects who say that their preferred option is different from the option that THEY think produces the best consequences, as in my second point.
5. It could be best to follow rules even when a good analysis would show that they do not lead to the best expected outcomes. Now we are back to my first point. Why is it best? Why should we follow rules that harm people more than they need to be harmed? How can we justify this?