Monday, December 25, 2017

Replication reservations

Replication of previously reported studies is sometimes useful or even necessary. Drug companies often try to replicate published research before investing a great deal of money in drug development based on that research. Ordinary academic researchers often want to examine more closely some published result, so they may include a replication of that result in a larger design, or just try to see whether they can get the effect before they proceed to make modifications. Failures to replicate are often publishable (e.g., Gong, M., & Baron, J. (2011). The generality of the emotion effect on magnitude sensitivity. Journal of Economic Psychology, 32, 17–24), especially when several failures are included in a meta-analysis. Finally, people may try to replicate a study when they disagree with its conclusions, possibly because of other theoretical or empirical work they have done.

Researchers are now spending time trying to replicate research studies in the absence of such purposes.  In one project, some students are in the process of trying to replicate most of the papers published in Judgment and Decision Making, the journal I have edited since 2006.  Let me explain why this bothers me.

First, these projects take time and money that could be spent elsewhere. The alternatives might be more worthwhile, but of course this depends on what they are.

Second, if you want to question a study's conclusions, it is often easier to find a problem with the data analysis or method of the original study. A large proportion of papers published in psychology (varying from field to field) have flaws that can be discovered this way, and many of these flaws are well documented. It is possible to publish papers that do nothing but "take down" another published paper, especially if a correct re-analysis of the data yields a conclusion contradicting the original one.

Third, complete replication of a flawed study often succeeds quite well, because it replicates the flaws. A recent paper in the Journal of Personality and Social Psychology (Gawronski et al. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making, 113, 343–376) included replications of every study within the paper itself. The replications involved new subjects but not new stimuli, and the data analysis ignored variations among the stimuli in the size and direction of the effects of interest (among other methodological problems).

Fourth, what do we conclude when a study does not replicate? Fraud? Dishonesty in reporting? Selective reporting? Luck? Sometimes these explanations can be detected by looking at the data, but none of them can be inferred from a failure to replicate alone. So what is the point? Is it to scare journal editors into accepting papers only when they have very clear results that do not challenge existing theories or claims?

Blanket replication of every study is a costly way to provide incentives for editors. Perhaps these "replication factors" for journals are an antidote to the poison of "impact factors". Impact factors encourage publication of surprising results that will get news coverage, and will need to be cited, just because they are surprising. But the very fact that they are surprising increases the probability that something is wrong with them. A "replication index" will discourage publication of such papers. But it will also encourage publication of papers that go to excess to replicate studies within the paper, use large samples of subjects, and, in general, cost a lot of money. This will thus tend to drive out of the field those who are not on the big-grant gravy train (or who are not in schools that provide them with generous research funding). It is better for editors to ignore both concerns.

Fifth, I think that some good studies are unlikely to replicate. I try to publish them anyway. One general category consists of studies that pit two effects against each other, only one of which is interesting. An example is the "polarization effect" of Lord, Ross, and Lepper (1979): subjects who opposed or favored capital punishment were presented with two studies, one showing that it deterred serious crimes and the other showing that it did not deter; both groups became more convinced of their original position, because they found ways to dismiss the study that disagreed with it. This result has in fact been replicated, but other attempts to find polarization have failed. The opposite effect is that presenting people with conflicting evidence moves them toward a more moderate position. In order for the polarization effect to "win", it must be strong enough to overcome this rational tendency toward moderation. The conditions for this to happen are surely idiosyncratic. The interesting thing is that it happens at all. If the original study is honestly reported and shows a clear effect, then it does happen.

Another example is a study recently published in Judgment and Decision Making (Bruni & Tufano (2017). The value of vulnerability: The transformative capacity of risky trust, 12, 408–414). The finding of interest was that people who made themselves "vulnerable", by showing that they had trusted someone who had previously been untrustworthy, evoked more trustworthy behavior in trustees who knew of their vulnerability. Again, this result must be strong enough to counter an opposite effect: these vulnerable people could also be seen as suckers, ripe for exploitation. I suspect that this result will not replicate, but I also think it is real. (I examined the data quite carefully.) It may well depend on details of the sample of subjects, the language, and so on. This is not going to help the "replicability index" of the journal (or the impact factor, for that matter, as it is quite a complex study), but I don't care, and I shouldn't care.

Of course, other important studies simply cannot be replicated, because they involve samples of attitudes in a given time and place, e.g., studies of the determinants of political attitudes, the spread of an epidemic, or the structure of an earthquake. What often can be done instead is to look at the data.

In my view, the problem is not so much "replicability" but rather "credibility". Replications will be done when they are worth doing for other reasons. But for general credibility checking, it is probably more efficient to look at the data and the methods. To smooth the path for both replication and examination of data, journals should welcome replications (with either result when the original result is in doubt) and they should require publication of data whenever possible.

Tuesday, February 28, 2017

Explanations of deontological responses to moral dilemmas

Hundreds of experiments have now shown, in various ways, that responses to moral dilemmas often follow deontological rules rather than utilitarian theory. Deontological rules are rules that indicate whether some category of actions is required, permissible, or forbidden. Utilitarianism says that the best choice among those under consideration is the one that does the most expected good for all those affected. For example, utilitarianism implies that it is better to kill one person to save five others than not to kill (other things being equal), while some deontological rule may say that active killing is forbidden, whatever the consequences.

In many of these experiments, deontological responses (DRs) seem to be equivalent to responses that demonstrate cognitive biases in non-moral situations. For example, the omission bias favors harmful omissions over less harmful acts, in both moral and non-moral situations (Ritov & Baron, 1990). This similarity suggests that the DRs arise from some sort of error, or poor thinking. Much evidence indicates that the cognitive processes supporting moral and non-moral judgments are largely the same (e.g., Greene, 2007). If this is true, the question arises of what sort of thinking is involved, and when it occurs. Several (mutually consistent) possibilities have been suggested:

1. Dual-system theory in its simplest form ("default interventionist" or "sequential") says that DRs arise largely as an immediate intuitive response to a dilemma presented in an experiment, once the dilemma is understood. Then, sometimes, the subject may question the initial intuition and wind up giving the utilitarian response as a result of a second step of reflective thought. The same two-step sequence has been argued to account for many other errors in reasoning, including errors in arithmetic, problem solving, and logic. By this view, the cognitive problem that produces DRs is a failure to check, a failure to get to the second step before responding. This dual-system view has been popularized by Daniel Kahneman in his book "Thinking, fast and slow". I have provided evidence that it is largely incorrect (Baron & Gürçay, 2016).

2. Very similar to this sequential dual-system theory, but different, is the theory of actively open-minded thinking (AOT; Baron, 1995). AOT begins from a view of thinking as search and inference. We search for possible answers to the question at hand, arguments or evidence for or against one possible answer or another, and criteria or values to apply when we evaluate the relative strengths of the answers in view of the arguments at hand. AOT avoids errors in thinking by searching for alternative possibilities, and for arguments and goals that might lead to a higher evaluation of possible answers other than those that are already strong. By this view, the main source of errors is that thinking is insufficiently self-critical; the thinker looks for support for possibilities that are already strong and fails to look for support for alternatives. In the case of moral dilemmas, the DRs would be those that are already strong at the outset of thinking and would not be subject to sufficient questioning, even though additional thinking may proceed to bolster these responses. The main difference between this view and the sequential dual-system view is that AOT is concerned with the direction of thinking, not the extent of it, although of course there must be some minimal extent if self-criticism is to occur. AOT also defines direction as a continuous quantity, so it does not assume all-or-none "reflection or no reflection". By this account, utilitarian and deontological responses need not differ in the amount of time or effort required for them, and bolstering and questioning need not differ in their processing demands.

3. A developmental view extends the AOT view to what happens outside of the experiment (Baron, 2011). Moral principles develop over many years, and they may change as a result of questioning and external challenges. DRs may arise early in development, but that may also depend on the child's environment, how morality is taught. Reflection may lead to increasingly utilitarian views as people question the justification of DRs, especially in cases where following these DRs leads to obviously harmful outcomes. When subjects are faced with moral dilemmas in experiments, they largely apply the principles that they have previously developed, which may be utilitarian, deontological or (most often) both.

4. We can replace "development of the individual" with "social evolution of culture" (Baron, in press). Historically, morality may not have been distinguished from formal law until relatively recently. Law takes the form of DRs. Cultural views persist, historically, even when some people have replaced them with other ways of thinking. Kohlberg suggested that this sequence happens in development, where the distinction between morality and law is made fairly late. Thus, the course of individual development may to some extent recapitulate the history of cultures.

These alternatives have somewhat different implications for the question of how to make people more utilitarian, if that is what we want to do. (I do.) But the implications are not that different. A view that is consistent with all of them is to emphasize reflective moral education, presenting arguments for and against utilitarian solutions, and encouraging students to think of such arguments themselves (Baron, 1990).

Recently I and others have written several articles criticizing the sequential dual-system view of moral judgment and other tasks, such as problem solving in logic and mathematics (e.g., Baron & Gürçay, 2016; Pennycook et al., 2014). I think it is apparent that, at least in the moral domain, the role of different mechanisms is not a big deal. All these views are consistent with the more general claim that DRs can be understood as errors, and that they need not be seen as "hard wired", but, rather, malleable.


Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77–87.

Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221–235.

Baron, J. (2011). Where do non-utilitarian moral rules come from? In J. I. Krueger and E. T. Higgins (Eds.) Social judgment and decision making, pp. 261–278. New York: Psychology Press.

Baron, J. (in press). Utilitarian vs. deontological reasoning: method, results, and theory. In J.-F. Bonnefon & B. Trémolière (Eds.), Moral inferences. Hove, UK: Psychology Press.

Baron, J. & Gürçay, B. (2016). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory and Cognition. doi:10.3758/s13421-016-0686-8

Greene, J. D. (2007). The secret joke of Kant’s soul, in W. Sinnott-Armstrong, Ed., Moral psychology, Vol. 3: The neuroscience of morality: Emotion, disease, and development, pp. 36–79. MIT Press, Cambridge, MA.

Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014). Base-rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544–554.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.

Sunday, February 26, 2017

Two posts on climate and one on health insurance

The editors of RegBlog have accepted three of my recent posts. Rather than duplicate them here (which I am now allowed to do), I am instead making links to them:

How geographic boundaries determine the social cost of carbon;

The discount rate for the social cost of carbon;

Justifying health insurance.

All of these are philosophical comments about regulatory issues that are likely to be addressed by the Trump administration, the U.S. Congress, and possibly the courts. But the issues will persist.

Friday, December 23, 2016

More on AOT and the election (rejected op-ed)

The people who voted for Donald Trump were, like those who voted for Hillary Clinton, a varied lot. Some were long-standing Republicans who had doubts about Trump but even more doubts about the idea of another Democratic president. Others were upset about some particular policy that has, in fact, hurt them. But it seems that a great many were older white Christians who had no college education. When I read what some of these voters said about their choice, I found much of it, to oversimplify just a little, false. Something seemed odd about the way they came to hold beliefs with apparent high confidence when these beliefs fly in the face of both good evidence and expert opinion.

If my perception is correct, then I think that psychology can both explain part of what they are doing and provide an optimistic solution for the future, if only the long-term future. Specifically, they are not thinking very well, and the kind of thinking they are not doing is influenced by schooling and culture. I call it actively open-minded thinking, or AOT. The purpose of AOT is to avoid myside bias (also called confirmation bias). Myside bias is a tendency to think in a way that favors pet beliefs: beliefs based on intuition alone, beliefs that one wants to have, or beliefs that have resulted from indoctrination. AOT counteracts myside bias by being open to alternatives and fair in the use of evidence and arguments. Beyond openness and fairness, AOT looks actively for alternatives and reasons why pet beliefs might be wrong.

AOT is the only way to have justified confidence in our beliefs. If we do not check them in this way, we cannot distinguish beliefs that deserve our confidence from those that do not. If we impose such untested beliefs on others through our political behavior, we risk hurting them, in ways that deserve their censure. We are being poor citizens.

AOT is also the basis of expertise of the sort we can trust. Scientists deserve our attention because the very nature of science, as a group activity, is to look for alternatives and evidence. Scientists get credit for poking holes in conventional scientific wisdom, leading to gradual improvement (and sometimes not so gradual). Good journalists, by checking their facts before going public, do the same. That is the difference between, well, the New York Times and fake news. None of these institutions is perfect, but they are more likely to reach good conclusions than they would reach without critical reflection. If people understand AOT, they will understand how to evaluate sources. They do not have to think through everything by themselves, because they can trust others who know how to think well, and they can recognize when this is being done.  It is thus important for people to understand AOT and why it helps, as well as being able to do it themselves, when needed.

People differ enormously in their understanding of AOT and in their tendency to do it, as opposed to displaying myside bias. For example, one manifestation of myside bias is "belief overkill", the tendency to think that all arguments point in the same direction, for example, the belief that genetically modified food has no benefits, only risks, or that it has no risks, only benefits. I have found that some people display this bias at almost every opportunity, while others do not show the effect at all -- they can recognize the down side of their opinion while still believing that the up side predominates. Similar individual differences are found in other manifestations of AOT.

Myside bias is thus not a necessary part of the human condition. Some cultural practices, such as the discouragement of certain kinds of curiosity in children, may even lead people to oppose AOT when they would otherwise do it naturally. Education in the U.S., especially college education, encourages AOT. Students who write papers that neglect arguments on the other side of their conclusion do not usually get A's, even if they write well otherwise. AOT is part of what it means to be a responsible scholar and a good student. People who oppose AOT may also stay away from higher education because they do not want to think this way. The correlation between Trump support and lack of higher education may have more than one explanation.

AOT is malleable. People can be taught to engage in it. It is part of what is called cognitive style, not the result of how the brain is "hard wired". It is related to cognitive skills, but not dependent on them. Although AOT is emphasized in college, even elementary schools encourage it, if only indirectly by teaching tolerance and openness to others who are different. Several studies have shown that it can be taught by explicit instruction, especially when the idea is explained. Understanding is important, as well as some practice combined with self-evaluation. One of the most impressive of these studies was reported in 1986 by David Perkins and his collaborators. They found that myside bias was reduced by a few weeks of explicit instruction, but not by participation in other activities such as a high-school debating team. The debating team emphasized winning the argument, as opposed to discovering the best conclusion.

AOT is not a new idea. John Stuart Mill, in "On liberty", provided perhaps its most eloquent defense, using different terms. John Dewey argued that education should teach something like it. After World War II, psychologists tried to understand support for fascism by looking at the personality traits of its supporters. That research tradition reached similar conclusions about individual differences and the role of open-mindedness, although it largely neglected the potential role of education. It is on this latter point where the most recent research provides reason for optimism, even in the darkest times.

Sunday, November 13, 2016

The kind of thinking that led to Trump's election

Donald Trump's most fervent supporters consisted of elements of the Republican coalition as it has developed over the last several years. These elements are characterized by the form of their beliefs. They confidently hold beliefs without regard to the full set of relevant arguments and evidence.

Some of their beliefs are admittedly based on "faith", which means that they are held confidently despite the absence of reasons that would matter to others who do not share their faith. These include both moral and factual beliefs: homosexuality is sinful; humans did not evolve from other animals; abortion is equivalent to murder.

Other beliefs are accepted uncritically, without considering arguments on the other side, or, worse, attributing those arguments to a conspiracy. The belief that global warming is not caused by human activity is a prime example. Conspiracy theories about collusion by scientists guarantee that all arguments on the other side can be ignored. Thus, the holders of these beliefs have no way to discover that they might be incorrect, since all counter-arguments seem to come from a diabolical conspiracy. People hold these questionable beliefs with high confidence and are thus willing to impose them on others, through their political action.

These beliefs have been supported by talk radio and Fox news. In turn, these sources are supported and encouraged by more traditional Republicans. We saw this influence earlier in the "Tea Party" movement, which feeds back into the content of these sources.

More generally, these beliefs are the result of a pattern of thinking. People who hold them are lacking in what I have called actively open-minded thinking (AOT). AOT involves self-critical reflection. In order to be confident of a belief, we must look actively for reasons why it should be rejected, modified, or held with less confidence. If beliefs go unchallenged by this sort of reflection, we have no reason to know that they deserve high confidence. We have no way to distinguish those that would survive critical reflection from those that would not.

Note that AOT is not equivalent to reflection in general. Some reflection is nothing more than bolstering, looking for reasons why we were right all along, finding ways to make everything fit with our preferred conclusion. AOT concerns the direction of reflection, not just its amount. The ultimate aim is to be fair to counter-arguments.

Nor does AOT require that we never hold beliefs with high confidence. Often, an active search can find little to say for the other side, or can lead to a modified form of the original belief that is designed to deal with problems.

AOT is built into the scientific method. Science as an institution works to look for holes in current theories. Scientists get credit for finding such holes, and ways to plug them by modifying or discarding the theory in question. For these reasons, we should have more confidence in the conclusions of science than we have in the conclusions of ways of thinking that do not involve such self-criticism. More generally, our confidence in experts should depend on how much their expertise comes from AOT.

AOT is apparently affected by cultural support. Higher education tends to support it (imperfectly, to be sure). I have been using a short questionnaire (based on a longer version devised by Keith Stanovich and his collaborators) that measures people's support for AOT. It has questions such as: "Allowing oneself to be convinced by an opposing argument is a sign of good character"; "Changing your mind is a sign of weakness" (reversed); "People should search actively for reasons why their beliefs might be wrong"; and "It is important to persevere in your beliefs even when evidence is brought to bear against them" (reversed).
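To make the mechanics of such a questionnaire concrete, here is a minimal Python sketch of how a scale with reverse-keyed items is typically scored. The item identifiers and the 1-to-5 agreement format are hypothetical illustrations, not the actual instrument; the point is only that negatively worded items ("Changing your mind is a sign of weakness") are flipped before averaging, so that a high score always means stronger endorsement of AOT.

```python
# Illustrative scoring of an AOT-style scale with reverse-keyed items.
# Item names and the 1-5 response format are hypothetical, not the real scale.
def score_aot(responses, reversed_items, scale_max=5):
    """responses: dict mapping item id -> rating on 1..scale_max.
    Reverse-keyed items are flipped so that high always means pro-AOT.
    Returns the mean per-item score."""
    total = 0
    for item, rating in responses.items():
        if item in reversed_items:
            rating = (scale_max + 1) - rating  # e.g., 2 on a 1-5 scale becomes 4
        total += rating
    return total / len(responses)

# One hypothetical respondent who consistently endorses AOT:
answers = {
    "convinced_by_opposition": 5,    # agree: pro-AOT
    "changing_mind_weakness": 2,     # reverse-keyed: disagreement is pro-AOT
    "search_for_counterevidence": 4,
    "persevere_despite_evidence": 1, # reverse-keyed
}
print(score_aot(answers, {"changing_mind_weakness", "persevere_despite_evidence"}))
```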

My collaborators and I have found that people who score low on this scale (opposing AOT) tend to score high on Jared Piazza's measure of belief in "divine command theory", the idea that God's laws are beyond the capacity of humans to fully understand and thus must be obeyed without question. Such a view is consistent with the sort of faith that leads people to advance moral views through politics even when they cannot argue for those views in terms that make sense to those who do not share their particular faith.

Dan Kahan has collected data with our AOT questionnaire for a representative sample of the U.S. adult population. People low in AOT tend to rate themselves as more politically conservative and more strongly Republican. (The result is described at the end of a paper by Kahan.) This result is unusual because the same data set finds no correlation at all between conservatism and another measure of cognitive style, the Cognitive Reflection Test (CRT), a set of tricky arithmetic problems that is supposed to measure the tendency to reflect on initial conclusions. But I think the CRT is heavily influenced by mathematical knowledge, and it also tends to assess willingness to spend time on a problem rather than a self-questioning attitude as such. It does not specifically assess the willingness to look for reasons why a pet belief might be wrong. In sum, the results for the AOT show that, in current U.S. politics, conservatives really do think differently, in ways that help us understand their attachments to weakly supported conclusions about facts and morality.

"Conservatives" in the U.S. tend not to think that AOT is a good thing. Many think it is OK to defend their beliefs as if they were possessions. Kahan did not look at Trump supporters in particular. (The data were collected a few years ago.)  But it seems likely that this negative attitude toward AOT is more common in this group than in Republicans in general. Many people who identify themselves as conservative Republicans are very clear about their acceptance of AOT, although they tend to emphasize different issues when they discuss details. And, likewise, some liberals or progressives have the same negative attitude, although not as many of them, at the moment.

On the optimistic side, my student Emlen Metz has found remarkably high levels of respect and capacity for AOT in 8th grade students across the U.S. from a variety of schools. It seems that the ideology of AOT has become widespread in the culture of schools. This is good news for the future. We need to take a page from our youth, to look to their desire to learn from each other and their hope in the future. We need to try to get everyone, journalists, teachers, and ourselves, to understand AOT more deeply.

Even those who already favor AOT can benefit from understanding how it helps with communication, by giving us a better understanding of our discussion partners.

Wednesday, November 9, 2016

Global warming: understanding, and acceptance of expert authority

David Leonhardt, in a recent column in the New York Times (on-line Nov. 9, 2016), argues that the most important response to the Republican takeover of the U.S. is to convince Republicans that global warming is real and serious. Failure to address this problem now has very long-lasting effects.

Most of the effort to convince skeptics takes the form "95% of climate scientists agree ...". This sort of argument from authority is useful only if two conditions are met. First, people have to understand that the authorities in question derive their status from a superior method, namely, actively open-minded thinking. Science is inherently actively open-minded because scientists are constantly looking for reasons why conclusions might be wrong. Scientists (and many other scholars) get credit for poking holes in conclusions, even tentative conclusions of their own. As J.S. Mill and others have pointed out, such active openness to criticism is the only way to have confidence in any conclusions. Without the search for counter-arguments, you don't know if they exist.

Second, even with this understanding (which is not widespread), people may distrust a consensus view. Science works slowly. It took quite a while for people to accept Copernicus's alternative to Ptolemy's theory of the planets. Other false views have been the "conventional wisdom" in science for decades. People may suspect that the 95% of scientists are just engaging in herd behavior, not listening sufficiently to the 5% who disagree (if, indeed, they really exist).

Here I think we need to point out to skeptics that the argument from authority is not the only one. In particular, the existence of the (somewhat misnamed) greenhouse effect has been well known for over 100 years. It follows from a couple of basic principles of physics (or physical chemistry). Thus, in addition to acceptance of authority, there is also a role for understanding.

The two principles are roughly these: First, carbon dioxide (and other "greenhouse gases") absorbs heat from infrared radiation, while the more basic components of the atmosphere (oxygen and nitrogen) do not; this fact can be demonstrated in table-top experiments. Second, the earth's surface itself emits infrared radiation when it is warmed by the sun, which, in turn, warms the atmosphere, which then emits radiation as well, thus increasing the overall temperature of both air and land.

Some understanding of these principles leads to the conclusion that global temperature will increase as the amount of carbon dioxide increases, other things being equal. This fact was well understood by the end of the 19th century. Thus, if we understand these principles, we can see that the burden of proof now shifts to those who want to say that warming will not occur. Other things being equal, it must occur.
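The two principles can even be turned into a back-of-the-envelope calculation. The sketch below uses the textbook one-layer energy-balance idealization (not a climate model, and not anything from the column under discussion): with no infrared-absorbing layer, the earth's mean surface temperature comes out near 255 K, well below freezing; with a single fully absorbing layer, it rises to roughly 303 K. The observed mean (~288 K) lies in between, as expected for an atmosphere that absorbs only part of the outgoing infrared.

```python
# Minimal one-layer energy-balance sketch (standard textbook idealization).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at earth's orbit, W m^-2
ALBEDO = 0.3       # fraction of incoming sunlight reflected back to space

# Sunlight absorbed per square meter, averaged over the whole sphere:
absorbed = S * (1 - ALBEDO) / 4

# No greenhouse layer: the surface radiates the absorbed flux directly.
t_no_atmosphere = (absorbed / SIGMA) ** 0.25   # about 255 K

# One fully IR-absorbing layer: the surface receives back-radiation from
# the layer in addition to sunlight, so it must emit twice the absorbed flux.
t_one_layer = (2 * absorbed / SIGMA) ** 0.25   # about 303 K

print(round(t_no_atmosphere), round(t_one_layer))
```

The warming is a direct consequence of the two principles in the text: an absorbing layer re-emits infrared downward as well as upward, so the surface must warm until its emission balances the larger total input.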

The extensive efforts of climate scientists have thus been mainly about determining whether these other factors are in fact equal, or whether some of them might reduce -- or increase -- global warming. It turns out that some reduce it and some increase it, leading to uncertainty about how large the effect will be, quantitatively. (This uncertainty poses risks of its own, as the effect could be larger than expected as well as smaller.) The 5% and the 95% may disagree about the relative importance of these other factors, without denying the basic facts.

My conclusion is that, if people understood the basic science, they would have some additional reason to trust the expert consensus. It might help even if they just knew that it existed.  Some attempt to explain the basic science should be part of the public argument. It should not be limited to high-school science courses.

Likewise, actively open-minded thinking itself should be understood, not just accepted on the basis of authority.

Saturday, June 25, 2016

What's wrong with parochialism?

Recently, popular political movements have been anti-immigrant, anti-free-trade, and more generally anti-globalization. What these positions share is a lack of concern for outsiders. For example, U.S. discussions of the Trans Pacific Partnership (which has many advantages and disadvantages for everyone) tend to ignore completely the apparent large benefits for Vietnam.  The technical term for this lack of concern is parochialism. In part, parochialism is part of our political language. The use of "we" refers to fellow citizens, sometimes even excluding members of recently arriving ethnic groups. But some people, in their thinking if not in their speech, consider effects on outsiders, or even think of themselves as members of larger groups such as Europeans or citizens of the world. Once this kind of cosmopolitan thinking was even fashionable, as expressed, for example, in John Lennon's (1971) song "Imagine", and it seems to be coming back into fashion among some young people in Europe.

The simple argument against parochialism is that it is morally arbitrary, hence unjustified. The question of who should count in our moral judgments is a very basic one. The answer cannot be derived from competing philosophical approaches such as utilitarianism or deontology in general. So the usual attack on parochialism of any sort is to ask why a distinction should matter. This was the logical move made against slavery, racial discrimination, and discrimination against women. Of course, the defenders of these institutions sometimes tried to answer this attack by pointing to supposed empirical facts about, for example, how women's emotionality made them unsuitable as voters or office holders. But these arguments were ultimately recognized as post-hoc justifications, with little empirical basis. So the basic argument was, "If you care about what happens to X, why shouldn't you care equally about Y, even though Y is a different race, sex, or nationality?" This kind of logical argument is powerful, yet it is rarely made in public debates.

One counter-argument comes from a different analogy, loyalty to close kin. Equal treatment of everyone would imply that you should care about a stranger's child, spouse, or parent as much as you care about your own. If it is morally acceptable to give preference to loved ones, why not co-nationals too? This objection has several possible answers. One I like is that morality should concern itself with choices among options that are on the table. And the option of sacrificing one's own child for a greater good is not something that most of us would consider. We simply could not bring ourselves to do it. (More precisely, our willingness to sacrifice our own concerns and desires is limited, so we should make our decisions so as to do the most good overall within this limit.)

Assuming that this argument works for loved ones -- and I think it does -- then could it also work for co-nationals? Yes, it could, if we feel such strong loyalty to our co-nationals. But we can take a step back and ask where our loyalty comes from. In the case of children, it is biologically determined. However, in the case of co-nationals, it is the result of an acquired abstract category. Even if humans evolved to be loyal to those in their immediate group of non-kin, the extension of group membership to total strangers requires a learned categorization of certain strangers as members of this group. Such categorization cannot plausibly be the result of natural selection, as it is, once again, arbitrary. If we can define "our group" as "German citizens", we could just as easily define it as "European citizens". People who reflect on this arbitrariness may come to change their loyalties.

In sum, it may be too late for those who feel very strongly about their co-nationals. From their perspective, parochialism can be justified, assuming that they cannot modify their feelings by reflection. Yet we can still object to the cultural forces that lead people to think this way, including the assumptions of political discourse itself.

A second line of argument for parochialism concerns the definition of responsibility that comes from specific social roles. Social organization gives people decision-making authority in limited domains. When people violate these limits, they risk losing their authority, and they set a precedent for subverting a useful system. Police officers are not supposed to make decisions about punishment. That role is left to courts and judges, which are limited in yet other ways.

This is also a good argument, but is the role of a citizen just to advance what is best for their co-nationals? Many citizens do not limit their role in this way, and they are not even considered to be bad citizens as a result. Recent immigrants often think about others from their country of origin who might also want to immigrate. Some people take into account the effects of policy on other countries to which they have secondary loyalty. And still others think about issues that affect the whole world, such as climate change. We have no written rule against such a view of citizenship, nor any obvious social norm. The narrow definition of the citizen's role as serving only national interest is one that some people arrive at by themselves. It is not part of the social structure of roles, unlike the roles of police officers and judges.

Citizens do have a special responsibility toward their own nation, if only because they are in the best position to know what is good for it. They cannot rely on foreigners to decide on issues that have mostly local effects. But the exercise of this responsibility does not imply that outsiders should simply be neglected. It is a responsibility that applies much more to some issues than to others. As citizens, we have a special responsibility to inform ourselves about national and local issues that don't have much effect on outsiders, and there are many of these. But just as our concern about city and state issues does not justify neglect of national issues, so our concern with national issues does not justify neglect of the world outside.

In sum, the justification for parochialism of the sort we see in current politics seems weak. Would it be possible to confront people with arguments against this view in general? We don't know unless we try.