Tuesday, February 28, 2017

Explanations of deontological responses to moral dilemmas

Hundreds of experiments have now shown, in various ways, that responses to moral dilemmas often follow deontological rules rather than utilitarian theory. Deontological rules are rules that indicate whether some category of actions is required, permissible, or forbidden. Utilitarianism says that the best choice among those under consideration is the one that does the most expected good for all those affected. For example, utilitarianism implies that it is better to kill one person to save five others than not to kill (other things being equal), while some deontological rule may say that active killing is forbidden, whatever the consequences.

In many of these experiments, deontological responses (DRs) seem to be equivalent to responses that demonstrate cognitive biases in non-moral situations. For example, the omission bias favors harmful omissions over less harmful acts, in both moral and non-moral situations (Ritov & Baron, 1990). This similarity suggests that DRs arise from some sort of error, or poor thinking. Much evidence indicates that the cognitive processes supporting moral and non-moral judgments are largely the same (e.g., Greene, 2007). If this is true, the question arises of what sort of thinking is involved, and when it occurs. Several (mutually consistent) possibilities have been suggested:

1. Dual-system theory in its simplest form ("default interventionist" or "sequential") says that DRs arise largely as an immediate intuitive response to a dilemma presented in an experiment, once the dilemma is understood. Then, sometimes, the subject may question the initial intuition and wind up giving the utilitarian response as a result of a second step of reflective thought. The same two-step sequence has been argued to account for many other errors in reasoning, including errors in arithmetic, problem solving, and logic. By this view, the cognitive problem that produces DRs is a failure to check, a failure to get to the second step before responding. This dual-system view has been popularized by Daniel Kahneman in his book "Thinking, fast and slow". I have provided evidence that it is largely incorrect (Baron & Gürçay, 2016).

2. Very similar to this sequential dual-system theory, but distinct from it, is the theory of actively open-minded thinking (AOT; Baron, 1995). AOT begins from a view of thinking as search and inference. We search for possible answers to the question at hand, arguments or evidence for or against one possible answer or another, and criteria or values to apply when we evaluate the relative strengths of the answers in view of the arguments at hand. AOT avoids errors in thinking by searching for alternative possibilities, and for arguments and goals that might lead to a higher evaluation of possible answers other than those that are already strong. By this view, the main source of errors is that thinking is insufficiently self-critical; the thinker looks for support for possibilities that are already strong and fails to look for support for alternatives. In the case of moral dilemmas, the DRs would be those that are already strong at the outset of thinking and would not be subject to sufficient questioning, even though additional thinking may proceed to bolster these responses. The main difference between this view and the sequential dual-system view is that AOT is concerned with the direction of thinking, not the extent of it, although of course there must be some minimal extent if self-criticism is to occur. AOT also defines direction as a continuous quantity, so it does not assume an all-or-none choice between reflection and no reflection. By this account, utilitarian and deontological responses need not differ in the amount of time or effort required for them, and bolstering and questioning need not differ in their processing demands.

3. A developmental view extends the AOT view to what happens outside of the experiment (Baron, 2011). Moral principles develop over many years, and they may change as a result of questioning and external challenges. DRs may arise early in development, but that may also depend on the child's environment, how morality is taught. Reflection may lead to increasingly utilitarian views as people question the justification of DRs, especially in cases where following these DRs leads to obviously harmful outcomes. When subjects are faced with moral dilemmas in experiments, they largely apply the principles that they have previously developed, which may be utilitarian, deontological or (most often) both.

4. We can replace "development of the individual" with "social evolution of culture" (Baron, in press). Historically, morality may not have been distinguished from formal law until relatively recently. Law takes the form of DRs. Cultural views persist, historically, even when some people have replaced them with other ways of thinking. Kohlberg suggested that this sequence happens in development, where the distinction between morality and law is made fairly late. Thus, the course of individual development may to some extent recapitulate the history of cultures.

These alternatives have somewhat different implications for the question of how to make people more utilitarian, if that is what we want to do. (I do.) But the implications are not that different. A view that is consistent with all of them is to emphasize reflective moral education, presenting arguments for and against utilitarian solutions, and encouraging students to think of such arguments themselves (Baron, 1990).

Recently I and others have written several articles criticizing the sequential dual-system view of moral judgment and other tasks, such as problem solving in logic and mathematics (e.g., Baron & Gürçay, 2016; Pennycook et al., 2014). I think it is apparent that, at least in the moral domain, the role of different mechanisms is not a big deal. All these views are consistent with the more general claim that DRs can be understood as errors, and that they need not be seen as "hard wired", but, rather, malleable.

References

Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77–87.

Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221–235.

Baron, J. (2011). Where do non-utilitarian moral rules come from? In J. I. Krueger and E. T. Higgins (Eds.) Social judgment and decision making, pp. 261–278. New York: Psychology Press.

Baron, J. (in press). Utilitarian vs. deontological reasoning: method, results, and theory. In J.-F. Bonnefon & B. Trémolière (Eds.), Moral inferences. Hove, UK: Psychology Press.

Baron, J. & Gürçay, B. (2016). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory and Cognition. doi:10.3758/s13421-016-0686-8

Greene, J. D. (2007). The secret joke of Kant’s soul, in W. Sinnott-Armstrong, Ed., Moral psychology, Vol. 3: The neuroscience of morality: Emotion, disease, and development, pp. 36–79. MIT Press, Cambridge, MA.

Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014). Base-rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544–554.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.

Sunday, February 26, 2017

Two posts on climate and one on health insurance

The editors of RegBlog have accepted three of my recent posts. Rather than duplicate them here (which I am not allowed to do), I am instead making links to them:

How geographic boundaries determine the social cost of carbon;

The discount rate for the social cost of carbon;

Justifying health insurance.

All of these are philosophical comments about regulatory issues that are likely to be addressed by the Trump administration, the U.S. Congress, and possibly the courts. But the issues will persist.

Friday, December 23, 2016

More on AOT and the election (rejected op-ed)

The people who voted for Donald Trump were, like those who voted for Hillary Clinton, a varied lot. Some were long-standing Republicans who had doubts about Trump but even more doubts about the idea of another Democratic president. Others were upset about some particular policy that has, in fact, hurt them. But it seems that a great many were older white Christians who had no college education. When I read what some of these voters said about their choice, I found much of it, to oversimplify just a little, false. Something seemed odd about the way they came to hold beliefs with apparent high confidence when these beliefs fly in the face of both good evidence and expert opinion.

If my perception is correct, then I think that psychology can both explain part of what they are doing and provide an optimistic solution for the future, if only the long-term future. Specifically, they are not thinking very well, and the kind of thinking they are not doing is influenced by schooling and culture. I call it actively open-minded thinking, or AOT. The purpose of AOT is to avoid myside bias (also called confirmation bias). Myside bias is a tendency to think in a way that favors pet beliefs: beliefs based on intuition alone, beliefs that one wants to have, or beliefs that have resulted from indoctrination. AOT counteracts myside bias by being open to alternatives and fair in the use of evidence and arguments. Beyond openness and fairness, AOT looks actively for alternatives and reasons why pet beliefs might be wrong.

AOT is the only way to have justified confidence in our beliefs. If we do not check them in this way, we cannot distinguish beliefs that deserve our confidence from those that do not. If we impose such untested beliefs on others through our political behavior, we risk hurting them, in ways that deserve their censure. We are being poor citizens.

AOT is also the basis of expertise of the sort we can trust. Scientists deserve our attention because the very nature of science, as a group activity, is to look for alternatives and evidence. Scientists get credit for poking holes in conventional scientific wisdom, leading to gradual improvement (and sometimes not so gradual). Good journalists, by checking their facts before going public, do the same. That is the difference between, well, the New York Times and fake news. None of these institutions is perfect, but they are more likely to reach good conclusions than they would reach without critical reflection. If people understand AOT, they will understand how to evaluate sources. They do not have to think through everything by themselves, because they can trust others who know how to think well, and they can recognize when this is being done.  It is thus important for people to understand AOT and why it helps, as well as being able to do it themselves, when needed.

People differ enormously in their understanding of AOT and in their tendency to do it, as opposed to displaying myside bias. For example, one manifestation of myside bias is "belief overkill", the tendency to think that all arguments point in the same direction, for example, the belief that genetically modified food has no benefits, only risks, or that it has no risks, only benefits. I have found that some people display this bias at almost every opportunity, while others do not show the effect at all; they can recognize the downside of their opinion while still believing that the upside predominates. Similar individual differences are found in other manifestations of AOT.

Myside bias is thus not a necessary part of the human condition. Some cultural practices, such as the discouragement of certain kinds of curiosity in children, may even lead people to oppose AOT when they would otherwise do it naturally. Education in the U.S., especially college education, encourages AOT. Students who write papers that neglect arguments on the other side of their conclusion do not usually get A's, even if they write well otherwise. AOT is part of what it means to be a responsible scholar and a good student. People who oppose AOT may also stay away from higher education because they do not want to think this way. The correlation between Trump support and lack of higher education may have more than one explanation.

AOT is malleable. People can be taught to engage in it. It is part of what is called cognitive style, not the result of how the brain is "hard wired". It is related to cognitive skills, but not dependent on them. Although AOT is emphasized in college, even elementary schools encourage it, if only indirectly by teaching tolerance and openness to others who are different. Several studies have shown that it can be taught by explicit instruction, especially when the idea is explained. Understanding is important, as well as some practice combined with self-evaluation. One of the most impressive of these studies was reported in 1986 by David Perkins and his collaborators. They found that myside bias was reduced by a few weeks of explicit instruction, but not by participation in other activities such as a high-school debating team. The debating team emphasized winning the argument, as opposed to discovering the best conclusion.

AOT is not a new idea. John Stuart Mill, in "On liberty", provided perhaps its most eloquent defense, using different terms. John Dewey argued that education should teach something like it. After World War II, psychologists tried to understand support for fascism by looking at the personality traits of its supporters. This tradition reached similar conclusions about individual differences and the role of open-mindedness, although it largely neglected the potential role of education. It is on this latter point where the most recent research provides reason for optimism, even in the darkest times.

Sunday, November 13, 2016

The kind of thinking that led to Trump's election

Donald Trump's most fervent supporters consisted of elements of the Republican coalition as it has developed over the last several years. These elements are characterized by the form of their beliefs. They confidently hold beliefs without regard to the full set of relevant arguments and evidence.

Some of their beliefs are admittedly based on "faith", which means that they are held confidently despite the absence of reasons that would matter to others who do not share their faith. These include both moral and factual beliefs: homosexuality is sinful; humans did not evolve from other animals; abortion is equivalent to murder.

Other beliefs are accepted uncritically, without considering arguments on the other side, or, worse, attributing those arguments to a conspiracy. The belief that global warming is not caused by human activity is a prime example. Conspiracy theories about collusion by scientists guarantee that all arguments on the other side can be ignored. Thus, the holders of these beliefs have no way to discover that they might be incorrect, since all counter-arguments seem to come from a diabolical conspiracy. People hold these questionable beliefs with high confidence and are thus willing to impose them on others, through their political action.

These beliefs have been supported by talk radio and Fox news. In turn, these sources are supported and encouraged by more traditional Republicans. We saw this influence earlier in the "Tea Party" movement, which feeds back into the content of these sources.

More generally, these beliefs are the result of a pattern of thinking. People who hold them are lacking in what I have called actively open-minded thinking (AOT). AOT involves self-critical reflection. In order to be confident of a belief, we must look actively for reasons why it should be rejected, modified, or held with less confidence. If beliefs go unchallenged by this sort of reflection, we have no reason to know that they deserve high confidence. We have no way to distinguish those that would survive critical reflection from those that would not.

Note that AOT is not equivalent to reflection in general. Some reflection is nothing more than bolstering, looking for reasons why we were right all along, finding ways to make everything fit with our preferred conclusion. AOT concerns the direction of reflection, not just its amount. The ultimate aim is to be fair to counter-arguments.

Nor does AOT require that we never hold beliefs with high confidence. Often, an active search can find little to say for the other side, or can lead to a modified form of the original belief that is designed to deal with problems.

AOT is built into the scientific method. Science as an institution works to look for holes in current theories. Scientists get credit for finding such holes, and ways to plug them by modifying or discarding the theory in question. For these reasons, we should have more confidence in the conclusions of science than we have in the conclusions of ways of thinking that do not involve such self-criticism. More generally, our confidence in experts should depend on how much their expertise comes from AOT.

AOT is apparently affected by cultural support. Higher education tends to support it (imperfectly, to be sure). I have been using a short questionnaire (based on a longer version devised by Keith Stanovich and his collaborators) that measures people's support for AOT. It has questions such as: "Allowing oneself to be convinced by an opposing argument is a sign of good character"; "Changing your mind is a sign of weakness" (reversed); "People should search actively for reasons why their beliefs might be wrong"; and "It is important to persevere in your beliefs even when evidence is brought to bear against them" (reversed).

My collaborators and I have found that people who score low on this scale (opposing AOT) tend to score high on Jared Piazza's measure of belief in "divine command theory", the idea that God's laws are beyond the capacity of humans to fully understand and thus must be obeyed without question. Such a view is consistent with the sort of faith that leads people to advance moral views through politics even when they cannot argue for those views in terms that make sense to those who do not share their particular faith.

Dan Kahan has collected data with our AOT questionnaire for a representative sample of the U.S. adult population. People low in AOT tend to rate themselves as more politically conservative and more strongly Republican. (The result is described at the end of this paper.) This result is unusual because the same data set finds no correlation at all between conservatism and another measure of cognitive style, the Cognitive Reflection Test (CRT), a set of tricky arithmetic problems that is supposed to measure the tendency to reflect on initial conclusions. But I think the CRT is heavily influenced by mathematical knowledge, and it also tends to assess willingness to spend time on a problem rather than a self-questioning attitude as such. It does not specifically assess the willingness to look for reasons why a pet belief might be wrong. In sum, the results for the AOT show that, in current U.S. politics, conservatives really do think differently, in ways that help us understand their attachments to weakly supported conclusions about facts and morality.

"Conservatives" in the U.S. tend not to think that AOT is a good thing. Many think it is OK to defend their beliefs as if they were possessions. Kahan did not look at Trump supporters in particular. (The data were collected a few years ago.)  But it seems likely that this negative attitude toward AOT is more common in this group than in Republicans in general. Many people who identify themselves as conservative Republicans are very clear about their acceptance of AOT, although they tend to emphasize different issues when they discuss details. And, likewise, some liberals or progressives have the same negative attitude, although not as many of them, at the moment.

On the optimistic side, my student Emlen Metz has found remarkably high levels of respect and capacity for AOT in 8th grade students across the U.S. from a variety of schools. It seems that the ideology of AOT has become widespread in the culture of schools. This is good news for the future. We need to take a page from our youth, to look to their desire to learn from each other and their hope in the future. We need to try to get everyone, journalists, teachers, and ourselves, to understand AOT more deeply.

Even those who already favor AOT can benefit from understanding how it helps with communication, by giving us a better understanding of our discussion partners.

Wednesday, November 9, 2016

Global warming: understanding, and acceptance of expert authority

David Leonhardt, in a recent column in the New York Times (on-line Nov. 9, 2016), argues that the most important response to the Republican takeover of the U.S. is to convince Republicans that global warming is real and serious. Failure to address this problem now will have very long-lasting effects.

Most of the effort to convince skeptics takes the form "95% of climate scientists agree ...". This sort of argument from authority is useful only if two conditions are met. First, people have to understand that the authorities in question derive their status from a superior method, namely, actively open-minded thinking. Science is inherently actively open-minded because scientists are constantly looking for reasons why conclusions might be wrong. Scientists (and many other scholars) get credit for poking holes in conclusions, even tentative conclusions of their own. As J.S. Mill and others have pointed out, such active openness to criticism is the only way to have confidence in any conclusions. Without the search for counter-arguments, you don't know if they exist.

Second, even with this understanding (which is not widespread), people may distrust a consensus view. Science works slowly. It took quite a while for people to accept Copernicus's alternative to Ptolemy's theory of the planets. Other false views have been the "conventional wisdom" in science for decades. People may suspect that the 95% of scientists are just engaging in herd behavior, not listening sufficiently to the 5% who disagree (if, indeed, they really exist).

Here I think we need to point out to skeptics that the argument from authority is not the only one. In particular, the existence of the (somewhat misnamed) greenhouse effect has been well known for over 100 years. It follows from a couple of basic principles of physics (or physical chemistry). Thus, in addition to acceptance of authority, there is also a role for understanding.

The two principles are roughly these: First, carbon dioxide and other "greenhouse gases" absorb heat from infrared radiation, while the more basic components of the atmosphere (oxygen and nitrogen) do not. This fact can be demonstrated in table-top experiments, e.g., https://www.youtube.com/watch?v=kwtt51gvaJQ. Second, the earth's surface itself emits infrared radiation when it is warmed by the sun, which, in turn, warms the atmosphere, which then emits radiation as well, thus increasing the overall temperature of both air and land.

Some understanding of these principles leads to the conclusion that global temperature will increase as the amount of carbon dioxide increases, other things being equal. This fact was well understood by the end of the 19th century. Thus, if we understand these principles, we can see that the burden of proof now shifts to those who want to say that warming will not occur. Other things being equal, it must occur.
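To see how much work the greenhouse effect is doing, it may help to run the standard zero-dimensional energy-balance calculation, the usual textbook illustration of these principles (a minimal sketch, not from this post; the solar constant and albedo are round textbook figures): without infrared absorption by the atmosphere, the planet's equilibrium temperature comes out roughly 33 K colder than the observed surface mean.

```python
# Zero-dimensional energy-balance estimate of Earth's temperature
# WITHOUT a greenhouse effect. Textbook values; illustrative only.
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, averaged over the whole sphere (hence the factor of 4).
absorbed = (1 - ALBEDO) * SOLAR_CONSTANT / 4

# Balance absorbed sunlight against blackbody emission:
# absorbed = SIGMA * T^4, so T = (absorbed / SIGMA) ** (1/4).
T_no_greenhouse = (absorbed / SIGMA) ** 0.25
print(round(T_no_greenhouse))  # about 255 K; the observed surface mean is ~288 K
```

The roughly 33 K gap between this estimate and the observed surface temperature is the part that the greenhouse gases must account for, which is why, other things being equal, adding more of them must raise the temperature further.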

The extensive efforts of climate scientists have thus been mainly about determining whether these other factors are in fact equal, or whether some of them might reduce -- or increase -- global warming. It turns out that some reduce it and some increase it, leading to uncertainty about how large the effect will be, quantitatively. (This uncertainty poses risks of its own, as the effect could be larger than expected as well as smaller.) The 5% and the 95% may disagree about the relative importance of these other factors, without denying the basic facts.

My conclusion is that, if people understood the basic science, they would have some additional reason to trust the expert consensus. It might help even if they just knew that it existed.  Some attempt to explain the basic science should be part of the public argument. It should not be limited to high-school science courses.

Likewise, actively open-minded thinking itself should be understood, not just accepted on the basis of authority.

Saturday, June 25, 2016

What's wrong with parochialism?

Recently popular political movements have been anti-immigrant, anti-free-trade, and more generally anti-globalization. What these positions share is a lack of concern for outsiders. For example, U.S. discussions of the Trans-Pacific Partnership (which has many advantages and disadvantages for everyone) tend to ignore completely the apparent large benefits for Vietnam. The technical term for this lack of concern is parochialism. In part, parochialism is built into our political language. The use of "we" refers to fellow citizens, sometimes even excluding members of recently arriving ethnic groups. But some people, in their thinking if not in their speech, consider effects on outsiders, or even think of themselves as members of larger groups such as Europeans or citizens of the world. Once this kind of cosmopolitan thinking was even fashionable, as expressed, for example, in John Lennon's (1971) song "Imagine", and it seems to be coming back into fashion among some young people in Europe.

The simple argument against parochialism is that it is morally arbitrary, hence unjustified. The question of who should count in our moral judgments is a very basic one. The answer cannot be derived from competing philosophical approaches such as utilitarianism or deontology in general. So the usual attack on parochialism of any sort is to ask why a distinction should matter. This was the logical move made against slavery, racial discrimination, and discrimination against women. Of course, the defenders of these institutions sometimes tried to answer this attack by pointing to supposed empirical facts about, for example, how women's emotionality made them unsuitable as voters or office holders. But these arguments were ultimately recognized as post-hoc justifications, with little empirical basis. So the basic argument was, "If you care about what happens to X, why shouldn't you care equally about Y, even though Y is a different race, sex, or nationality?" This kind of logical argument is powerful, yet it is rarely made in public debates.

One counter-argument comes from a different analogy, loyalty to close kin. Equal treatment of everyone would imply that you should care about a stranger's child, spouse, or parent as much as you care about your own. If it is morally acceptable to give preference to loved ones, why not co-nationals too? This objection has several possible answers. One I like is that morality should concern itself with choices among options that are on the table. And the option of sacrificing one's own child for a greater good is not something that most of us would consider. We just could not bring ourselves to do it. (More precisely, our willingness to sacrifice our own concerns and desires is limited, so we should make our decisions so as to do the most good overall within this limit.)

Assuming that this argument works for loved ones -- and I think it does -- then could it also work for co-nationals? Yes, it could, if we feel such strong loyalty to our co-nationals. But we can take a step back and ask where our loyalty comes from. In the case of children, it is biologically determined. However, in the case of co-nationals, it is the result of an acquired abstract category. Even if humans evolved to be loyal to those in their immediate group of non-kin, the extension of group membership to total strangers requires a learned categorization of certain strangers as members of this group. Such categorization cannot plausibly be the result of natural selection, as it is, once again, arbitrary. If we can define "our group" as "German citizens", we could just as easily define it as "European citizens". People who reflect on this arbitrariness may come to change their loyalties.

In sum, it may be too late for those who feel very strongly about their co-nationals. From their perspective, parochialism can be justified, assuming that they cannot modify their feelings by reflection. Yet we can still object to the cultural forces that lead people to think this way, including the assumptions of political discourse itself.

A second line of argument for parochialism concerns the definition of responsibility that comes from specific social roles. Social organization gives people decision-making authority in limited domains. When people violate these limits, they risk losing their authority, and they set a precedent for subverting a useful system. Police officers are not supposed to make decisions about punishment. That role is left for courts and judges, which are limited in yet other ways.

This is also a good argument, but is the role of a citizen just to advance what is best for their co-nationals? Many citizens do not limit their role in this way, and they are not even considered to be bad citizens as a result. Recent immigrants often think about others from their country of origin who might also want to immigrate. Some people take into account the effects of policy on other countries to which they have secondary loyalty. And still others think about issues that affect the whole world, such as climate change. We have no written rule against such a view of citizenship, nor any obvious social norms. The narrow definition of the citizen's role as serving only national interest is one that some people arrive at by themselves. It is not part of the social structure of roles, unlike the roles of police officers and judges.

Citizens do have a special responsibility toward their own nation, if only because they are in the best position to know what is good for it. They cannot rely on foreigners to decide on issues that have mostly local effects. But the exercise of this responsibility does not imply that outsiders should simply be neglected. It is a responsibility that applies much more to some issues than to others. As citizens, we have a special responsibility to inform ourselves about national and local issues that don't have much effect on outsiders, and there are many of these. But just as our concern about city and state issues does not justify neglect of national issues, so our concern with national issues does not justify neglecting the world outside.

In sum, the justification for parochialism of the sort we see in current politics seems weak. Would it be possible to confront people with arguments against this view in general? We don't know unless we try.

Saturday, June 18, 2016

Learning social rules

I just read a forthcoming paper (in Mind and Language) by Shaun Nichols and several others, which argues that it is rational to develop moral rules that distinguish (for example) acts and omissions. The relevant idea of "rational" is from rational concept formation.

When you learn a new concept, it is best not to generalize it too much. In some experiments, subjects were given examples of rule violations, for learning, and tested with other examples. When the learning examples were of the form "X did an action A that caused outcome C to happen", subjects generalized this to similar test examples with other examples of A and C. But they did not consider examples of the form "X failed to do B, which would have prevented C from happening" to be violations of the rule. In order to teach subjects that the rule applied to omissions as well as acts, the training had to include omission cases as examples of rule violations.

This behavior of subjects makes perfect sense in the case of arbitrary rules, and even legal rules. But I was bothered because I don't think moral rules should be arbitrary in this way.

One possible explanation of the difference is that sophisticated moral rules arise from reflection on the social rules that we have learned. Specifically, we reflect by asking questions about purposes (which I call "search for goals" in some places). When we see an example of a rule and ask about its purpose, we might discover what general purpose it serves. We can then think about how to generalize it so that it serves that purpose. If it does not serve the purpose in some cases, or if it could serve the same purpose better by a modification, then we can think about improving it.

The same process is part of what it means to understand a design such as a mathematical formula, according to my interpretation of David Perkins' book "Knowledge as Design". For example, we understand the formula for the area of a parallelogram (and its associated arguments) by finding that the argument for this rule serves the purpose of converting the parallelogram to a rectangle, and we already know how to find the area of a rectangle. Once we discover this connection, we can apply the same principle elsewhere, as Max Wertheimer shows in the first chapter of "Productive thinking". We can transfer the principle to cases where it applies while avoiding transfer to other cases.
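For readers who do not have the parallelogram example at hand, the usual textbook form of the argument (a sketch, not Perkins' own wording) runs as follows:

```latex
% Cut a right triangle from one end of a parallelogram with base b and
% height h, and slide it to the other end. The result is a b-by-h
% rectangle, and the rearrangement preserves area, so
\[
A_{\text{parallelogram}} \;=\; A_{\text{rectangle}} \;=\; b \cdot h .
\]
% The purpose of the construction is reduction to a known case; it is
% this purpose, not the formula itself, that transfers to new figures
% that can be rearranged in the same way.
```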

Similarly, a law with a "loophole" is an example of a rule that is crafted in a way that fails to serve its purpose. We can fix laws by removing loopholes.

A law that gives rights, such as the right to vote, drive, or own property, to men but not to women does not seem to serve reasonable accounts of the purposes of such rights-granting laws. We have trouble coming up with a purpose that applies to men but not women. Any such purpose seems arbitrary; it could just as well distinguish people with odd and even birthdays. Such a search for purposes is, I think, the sort of reflection that Peter Singer discussed in "The expanding circle".

Thus it is one thing to learn a rule, but it is another to understand the rule in a way that allows us to ask whether it serves its purpose as well as it could, and, if not, what could replace it. It may be rational from the perspective of learning to learn whatever we are taught about what to do and not do, but, if this is all we did, we would cut off the possibility of improving these rules.

Could most deontological rules survive this kind of questioning?