Thursday, May 10, 2018

Prediction, accommodation and pre-registration

Some scientists think that confirming a prediction made before data collection is more convincing than explaining the data after they arrive. I think that this belief is one source (among others) of support for the practice of pre-registration, in which authors publish a plan for an experiment they intend to do, explaining the hypothesis, methods, and data analysis in advance.

Paul Horwich ("Probability and evidence", 1982, pp. 108-117) 
has a complex discussion of this issue, under the heading "Prediction vs. accommodation", but I want to try to provide a more intuitive account of why I do not think that it matters.

Let me take an example based very loosely on my own research. Suppose I want to study evaluative judgment of acts vs. omissions. I got interested in this because of vaccine resistance in the real world. It seemed likely that this resistance resulted from a bias against potentially harmful acts (vaccination, with side effects) compared to potentially more harmful omissions (non-vaccination, with exposure to a natural disease). I thought that this bias was a sort of heuristic, in which people tend to think of action as inherently risky, regardless of what they are told about its risks.

I designed an experiment to examine judgments of acts and omissions, using a variety of scenarios. Some of the scenarios involved money rather than disease risk, so it was possible to include items involving monetary gains as well as losses. I expected that subjects would just be biased toward omission, period.

When I got the data, I noticed something unexpected. Yes, I found the expected bias for vaccination scenarios and for money scenarios that involved losses. In the money scenarios, subjects evaluated action leading to a loss as somewhat worse than omission (doing nothing) leading to the same loss with a higher probability. But when I looked at gains, the results were reversed. Subjects were biased toward action, not omission. They evaluated action leading to a chance of winning some money more highly than omission leading to a somewhat larger chance of winning the same amount.

It was easy to explain my results in hindsight. Action, as opposed to omission, simply amplified the effect on choice of whatever outcome it produced. Subjects were not generally biased against action: it depended on whether the outcomes of action were good or bad. The association with action served to focus attention on the effect of the action, so the action looked better if its outcome was good, and worse if its outcome was bad. (In real life, the amplification effect exists, along with the omission heuristic, but it was not unexpected, as I knew that Janet Landman had already reported it with a different dependent measure. And this "one experiment" is actually a conflation of several done in collaboration with Ilana Ritov.)

Suppose I had stopped there and reported what I expected and what I found. Compare this to a case in which I predicted (expected) the amplification effect. Should your belief in the existence of the amplification effect depend on whether I predicted it or not (all else being equal)?

Note that these two cases can be compared with a third case, in which I report the result for gains but falsely claim that I predicted it, when, actually, I expected a different result. This is the sort of thing that pre-registration of research hypotheses prevents. But, if the two cases just mentioned do not differ in the credibility of amplification as an explanation, such "cheating" would simply be a type of puffery, cheap talk designed to influence the reader without conveying any truly relevant information, like many other things that authors do to make their conclusions sound good, such as baldly asserting that the result is very important. The simple way for editors to avoid such puffery is to change all statements about prediction to statements of questions that the research might answer. Thus "I predicted a general bias toward omission" would be edited to say "I asked whether the bias toward omission was general, for gains as well as losses."

Let me now return to the original question of whether prior prediction matters. In both cases, we have two plausible explanations: a bias against omission, and an amplification effect. What determines the credibility of the amplification hypothesis? Of course, it fits the data, but that helps equally regardless of the prediction. Some doubt may remain, whether I predicted the result or not.

The other determinant of credibility of an explanation is its plausibility, compared to the plausibility of alternatives. In a well-written paper, I would give my reasons for thinking that various explanations are plausible. Readers would also have their own reasons. Above I suggested the reasons for each: a heuristic against action, and an attention effect that amplifies the outcomes of action. In a paper, I would try to spell these out "with four-part harmony" (to steal a line from Arlo Guthrie), with citations of related work (such as the "feature-positive effect"), and so on.
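
One way to make this concrete, in the spirit of Horwich's Bayesian treatment (my own restatement, not a quotation from him), is to write the credibility of the amplification hypothesis H, given the data D, as

\[
P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D)}.
\]

The first factor in the numerator is the fit with the data; the second is the prior plausibility. Nothing on the right-hand side refers to when H was first written down, so the timing of a prediction can matter only insofar as it changes one of these terms, for example by revealing evidence about plausibility that the paper has not already reported.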

Should it then add anything that I predicted the result that I got? The answer to this depends on whether my prediction provides additional evidence for this explanation, that is, additional reason to believe it, aside from everything you now know about its plausibility. But, if I have done my job as a writer, I have told you everything I know that is relevant to making my final explanation (amplification) plausible. The fact that I predicted it provides additional relevant information only if I am withholding something relevant that I know. I have no incentive to do that, unless it is somewhat embarrassing. Authors might be embarrassed to say that they made the prediction because God came to them in a dream and told them what was true. Or, more realistically, "I predicted this result intuitively, for reasons I cannot explain, but you should take this seriously because my intuition is very good." As a reader, you are not going to buy that.

In some research, the favored explanation seems pretty implausible, even though it is consistent with the data presented, despite the efforts of the author to convince us that it is plausible.  These cases include some of the "gee whiz" studies in social psychology that raise questions about replicability, but also some of the research in which a precise mathematical model fits the data surprisingly well but otherwise seems to come out of the blue sky.  These cases of low plausibility are the ones where claims that the results were predicted (e.g., in a pre-registration) are thought to be most relevant.

For example, suppose I found no significant "omission bias" overall but did find it when I restricted the sample to those who identified themselves as Protestant Christian. I supported this restriction with (highly selected) quotations from Protestant texts, thus explaining the result in terms of religious doctrine. You would rightly be suspicious. You would (rightly) suspect that you could find just as many quotations as I found to support the conclusion that Protestant doctrine emphasized sins of omission as well as sins of commission, and that other religions were no different. Would it help convince you of the reality of my effect if you knew that I predicted it but didn't tell you any more about why? You might just think that I was a little nuts, and, well, lucky.

Pre-registration thus does not solve the problem posed by implausible explanations. Of course such explanations might be true, despite being implausible, but that must be established later. What makes an explanation legitimately credible is, first, its fit with the data (compared to alternative explanations) and, second, its fit with other things that we know (again, compared to alternatives). The order in which a researcher thought of things, by itself, provides no additional relevant information.

Going beyond my restatement of Horwich's arguments, analogous reasoning applies to data analysis. One of the nasty things that researchers do is fiddle with their data until they get the result they want. For example, I might fail to find a significant difference in the mean ratings of acts and omissions, but I might find a difference using the maximum rating given by each subject to omissions and to actions, across several scenarios. Pre-registration avoids this fiddling, if researchers follow their pre-registered plan. Doing this, however, discourages the researcher from making reasonable accommodations to the data as they are, such as eliminating unanticipated but nonsensical responses, or transforming data that turn out to be highly skewed.
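
To see why this kind of flexibility matters, here is a minimal simulation sketch (the numbers of subjects and scenarios are made up, and this is not my actual data or analysis). Even when there is no real difference, an analyst who tests both the per-subject mean and the per-subject maximum, and reports whichever comes out significant, rejects the null more than 5% of the time:

    # Sketch: a flexible choice between two summaries inflates false positives.
    # Assumed setup: 50 subjects, 6 "act" and 6 "omission" ratings each,
    # drawn from the SAME distribution (so any "effect" is a false positive).
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    n_subjects, n_scenarios, n_sims, alpha = 50, 6, 5000, 0.05
    hits_mean_only, hits_either = 0, 0

    for _ in range(n_sims):
        acts = rng.normal(size=(n_subjects, n_scenarios))
        omissions = rng.normal(size=(n_subjects, n_scenarios))
        p_mean = ttest_rel(acts.mean(axis=1), omissions.mean(axis=1)).pvalue
        p_max = ttest_rel(acts.max(axis=1), omissions.max(axis=1)).pvalue
        hits_mean_only += p_mean < alpha
        hits_either += (p_mean < alpha) or (p_max < alpha)

    print("mean only:  ", hits_mean_only / n_sims)   # close to the nominal 0.05
    print("mean or max:", hits_either / n_sims)      # reliably above 0.05

The inflation here is modest because the two summaries are correlated; with more analytic options to choose among, it grows quickly.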

But note that many of the statistical options that are used for such p-hacking are ones that do not naturally fit the data very well. Again, it is possible to make up a story about why they do fit the data, but such stories tend to be unconvincing, just like the example of Protestantism described above. Thus, data analysis, like explanations, must be "plausible" in order to be convincing.

Monday, December 25, 2017

Replication reservations

Replication of previously reported studies is sometimes useful or even necessary. Drug companies often try to replicate published research before investing a great deal of money in drug development based on that research. Ordinary academic researchers often want to examine more closely some published result, so they include a replication of that result in a larger design, or just try to see whether they can get the effect before they proceed to make modifications. Failures to replicate are often publishable (e.g., Gong, M., & Baron, J. The generality of the emotion effect on magnitude sensitivity. Journal of Economic Psychology, 32, 17–24, 2011), especially when several failures are included in a meta-analysis (e.g., http://journal.sjdm.org/14/14321/jdm14321.html). Finally, people may try to replicate a study when they disagree with its conclusions, possibly because of other theoretical or empirical work they have done.

Researchers are now spending time trying to replicate research studies in the absence of such purposes.  In one project, some students are in the process of trying to replicate most of the papers published in Judgment and Decision Making, the journal I have edited since 2006 (https://osf.io/d7za8/). Let me explain why this bothers me.

First, these projects take time and money that could be spent elsewhere. The alternatives might be more worthwhile, but of course this depends on what they are.

Second, if you want to question a study's conclusions, it is often easier to find a problem with the data analysis or method of the original study. A large proportion of papers published in psychology (varying from field to field) have flaws that can be discovered this way. Many of these flaws are listed in http://journal.sjdm.org/stat.htm. It is possible to publish papers that do nothing but "take down" another published paper, especially if a correct re-analysis of the data yields a conclusion contradicting the original one.

Third, complete replication of a flawed study often succeeds quite well, because it replicates the flaws. A recent paper in the Journal of Personality and Social Psychology (Gawronski et al., 2017. Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making, 113: 343-376) replicated every study within the paper itself. The replications involved new subjects but not new stimuli, and the data analysis ignored variations among the stimuli in the size and direction of the effects of interest (among other methodological problems).

Fourth, what do we conclude when a study does not replicate? Fraud? Dishonesty in reporting? Selective reporting? Luck? Sometimes these explanations can be detected by looking at the data (e.g. http://retractionwatch.com/2013/09/10/real-problems-with-retracted-shame-and-money-paper-revealed/#more-15597). And none of them can be inferred from a failure to replicate. So what is the point? Is it to scare journal editors into accepting papers only when they have very clear results that do not challenge existing theories or claims?

Blanket replication of every study is a costly way to provide incentives for editors. Perhaps these "replication factors" for journals are an antidote to the poison of "impact factors". Impact factors encourage publication of surprising results that will get news coverage, and will need to be cited, just because they are surprising. But the very fact that they are surprising increases the probability that something is wrong with them. A "replication index" will discourage publication of such papers. But it will also encourage publication of papers that go to excess to replicate studies within the paper, use large samples of subjects, and, in general, cost a lot of money. This will thus tend to drive out of the field those who are not on the big-grant gravy train (or who are not in schools that provide them with generous research funding). It is better for editors to ignore both concerns.

Fifth, I think that some good studies are unlikely to replicate. I try to publish them anyway. One general category consists of studies that pit two effects against each other, only one of which is interesting. An example is the "polarization effect" of Lord, Ross and Lepper (1979): subjects who opposed or favored capital punishment were presented with two studies, one showing that it deterred serious crimes and the other showing that it did not deter; both groups became more convinced of their original position, because they found ways to dismiss the study that disagreed with it. This result has in fact been replicated, but other attempts to find polarization have failed. The opposite effect is that presenting people with conflicting evidence moves them toward a more moderate position. In order for the polarization effect to "win", it must be strong enough to overcome this rational tendency toward moderation. The conditions for this to happen are surely idiosyncratic. The interesting thing is that it happens at all. If the original study is honestly reported and shows a clear effect, then it does happen.

Another example is a study recently published in Judgment and Decision Making (Bruni and Tufano. The value of vulnerability: The transformative capacity of risky trust, 12, 408-414, 2017). The finding of interest was that people who made themselves "vulnerable", by showing that they had trusted someone who had previously been untrustworthy, evoked more trustworthy behavior in trustees who knew of their vulnerability. Again, this result must be strong enough to counter an opposite effect: these vulnerable people could also be seen as suckers, ripe for exploitation. I suspect that this result will not replicate, but I also think it is real. (I examined the data quite carefully.) It may well depend on details of the sample of subjects, the language, and so on. This is not going to help the "replicability index" of the journal (or the impact factor, for that matter, as it is quite a complex study), but I don't care, and I shouldn't care.

Of course, other important studies simply cannot be replicated, because they involve samples of attitudes in a given time and place, e.g., studies of the determinants of political attitudes, the spread of an epidemic, or the structure of an earthquake. What often can be done instead is to look at the data.

In my view, the problem is not so much "replicability" but rather "credibility". Replications will be done when they are worth doing for other reasons. But for general credibility checking, it is probably more efficient to look at the data and the methods. To smooth the path for both replication and examination of data, journals should welcome replications (with either result when the original result is in doubt) and they should require publication of data whenever possible.

Tuesday, February 28, 2017

Explanations of deontological responses to moral dilemmas

Hundreds of experiments have now shown, in various ways, that responses to moral dilemmas often follow deontological rules rather than utilitarian theory. Deontological rules are rules that indicate whether some category of actions is required, permissible, or forbidden. Utilitarianism says that the best choice among those under consideration is the one that does the most expected good for all those affected. For example, utilitarianism implies that it is better to kill one person to save five others than not to kill (other things being equal), while some deontological rule may say that active killing is forbidden, whatever the consequences.

In many of these experiments, deontological responses (DRs) seem to be equivalent to responses that demonstrate cognitive biases in non-moral situations. For example, the omission bias favors harmful omissions over less harmful acts, in both moral and non-moral situations (Ritov & Baron, 1990). This similarity suggests that the DRs arise from some sort of error, or poor thinking. Much evidence indicates that the cognitive processes supporting moral and non-moral judgments are largely the same (e.g., Greene, 2007). If this is true, the question arises of what sort of thinking is involved, and when it occurs. Several (mutually consistent) possibilities have been suggested:

1. Dual-system theory in its simplest form ("default interventionist" or "sequential") says that DRs arise largely as an immediate intuitive response to a dilemma presented in an experiment, once the dilemma is understood. Then, sometimes, the subject may question the initial intuition and wind up giving the utilitarian response as a result of a second step of reflective thought. The same two-step sequence has been argued to account for many other errors in reasoning, including errors in arithmetic, problem solving, and logic. By this view, the cognitive problem that produces DRs is a failure to check, a failure to get to the second step before responding. This dual-system view has been popularized by Daniel Kahneman in his book "Thinking, fast and slow". I have provided evidence that it is largely incorrect (Baron & Gürçay, 2016).

2. Very similar to this sequential dual-system theory, but different, is the theory of actively open-minded thinking (AOT; Baron, 1995). AOT begins from a view of thinking as search and inference. We search for possible answers to the question at hand, arguments or evidence for or against one possible answer or another, and criteria or values to apply when we evaluate the relative strengths of the answers in view of the arguments at hand. AOT avoids errors in thinking by searching for alternative possibilities, and for arguments and goals that might lead to a higher evaluation of possible answers other than those that are already strong. By this view, the main source of errors is that thinking is insufficiently self-critical; the thinker looks for support for possibilities that are already strong and fails to look for support for alternatives. In the case of moral dilemmas, the DRs would be those that are already strong at the outset of thinking and would not be subject to sufficient questioning, even though additional thinking may proceed to bolster these responses. The main difference between this view and the sequential dual-system view is that AOT is concerned with the direction of thinking, not the extent of it, although of course there must be some minimal extent if self-criticism is to occur. AOT also defines direction as a continuous quantity, so it does not assume all-or-none "reflection or no reflection". By this account, utilitarian and deontological responses need not differ in the amount of time or effort required for them. Bolstering and questioning need not differ, in either direction, in their processing demands.

3. A developmental view extends the AOT view to what happens outside of the experiment (Baron, 2011). Moral principles develop over many years, and they may change as a result of questioning and external challenges. DRs may arise early in development, but that may also depend on the child's environment, how morality is taught. Reflection may lead to increasingly utilitarian views as people question the justification of DRs, especially in cases where following these DRs leads to obviously harmful outcomes. When subjects are faced with moral dilemmas in experiments, they largely apply the principles that they have previously developed, which may be utilitarian, deontological or (most often) both.

4. We can replace "development of the individual" with "social evolution of culture" (Baron, in press). Historically, morality may not have been distinguished from formal law until relatively recently. Law takes the form of DRs. Cultural views persist, historically, even when some people have replaced them with other ways of thinking. Kohlberg suggested that this sequence happens in development, where the distinction between morality and law is made fairly late. Thus, the course of individual development may to some extent recapitulate the history of cultures.

These alternatives have somewhat different implications for the question of how to make people more utilitarian, if that is what we want to do. (I do.) But the implications are not that different. A view that is consistent with all of them is to emphasize reflective moral education, presenting arguments for and against utilitarian solutions, and encouraging students to think of such arguments themselves (Baron, 1990).

Recently I and others have written several articles criticizing the sequential dual-system view of moral judgment and other tasks, such as problem solving in logic and mathematics (e.g., Baron & Gürçay, 2016; Pennycook et al., 2014). I think it is apparent that, at least in the moral domain, the role of different mechanisms is not a big deal. All these views are consistent with the more general claim that DRs can be understood as errors, and that they need not be seen as "hard wired", but, rather, malleable.

References

Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77–87.

Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221–235.

Baron, J. (2011). Where do non-utilitarian moral rules come from? In J. I. Krueger and E. T. Higgins (Eds.) Social judgment and decision making, pp. 261–278. New York: Psychology Press.

Baron, J. (in press). Utilitarian vs. deontological reasoning: Method, results, and theory. In J.-F. Bonnefon & B. Trémolière (Eds.), Moral inferences. Hove, UK: Psychology Press.

Baron, J. & Gürçay, B. (2016). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory and Cognition. doi:10.3758/s13421-016-0686-8

Greene, J. D. (2007). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 3: The neuroscience of morality: Emotion, disease, and development, pp. 36–79. Cambridge, MA: MIT Press.

Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014). Base-rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544–554.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.

Sunday, February 26, 2017

Two posts on climate and one on health insurance

The editors of RegBlog have accepted three of my recent posts. Rather than duplicate them here (which I am now allowed to do), I am instead making links to them:

How geographic boundaries determine the social cost of carbon;

The discount rate for the social cost of carbon;

Justifying health insurance.

All of these are philosophical comments about regulatory issues that are likely to be addressed by the Trump administration, the U.S. Congress, and possibly the courts. But the issues will persist.

Friday, December 23, 2016

More on AOT and the election (rejected op-ed)

The people who voted for Donald Trump were, like those who voted for Hillary Clinton, a varied lot. Some were long-standing Republicans who had doubts about Trump but even more doubts about the idea of another Democratic president. Others were upset about some particular policy that had, in fact, hurt them. But it seems that a great many were older white Christians who had no college education. When I read what some of these voters said about their choice, I found much of it, to oversimplify just a little, false. Something seemed odd about the way they came to hold beliefs with apparent high confidence when these beliefs fly in the face of both good evidence and expert opinion.

If my perception is correct, then I think that psychology can both explain part of what they are doing and provide an optimistic solution for the future, if only the long-term future. Specifically, they are not thinking very well, and the kind of thinking they are not doing is influenced by schooling and culture. I call it actively open-minded thinking, or AOT. The purpose of AOT is to avoid myside bias (also called confirmation bias). Myside bias is a tendency to think in a way that favors pet beliefs: beliefs based on intuition alone, beliefs that one wants to have, or beliefs that have resulted from indoctrination. AOT counteracts myside bias by being open to alternatives and fair in the use of evidence and arguments. Beyond openness and fairness, AOT looks actively for alternatives and reasons why pet beliefs might be wrong.

AOT is the only way to have justified confidence in our beliefs. If we do not check them in this way, we cannot distinguish beliefs that deserve our confidence from those that do not. If we impose such untested beliefs on others through our political behavior, we risk hurting them, in ways that deserve their censure. We are being poor citizens.

AOT is also the basis of expertise of the sort we can trust. Scientists deserve our attention because the very nature of science, as a group activity, is to look for alternatives and evidence. Scientists get credit for poking holes in conventional scientific wisdom, leading to gradual improvement (and sometimes not so gradual). Good journalists, by checking their facts before going public, do the same. That is the difference between, well, the New York Times and fake news. None of these institutions is perfect, but they are more likely to reach good conclusions than they would reach without critical reflection. If people understand AOT, they will understand how to evaluate sources. They do not have to think through everything by themselves, because they can trust others who know how to think well, and they can recognize when this is being done.  It is thus important for people to understand AOT and why it helps, as well as being able to do it themselves, when needed.

People differ enormously in their understanding of AOT and in their tendency to do it, as opposed to displaying myside bias. For example, one manifestation of myside bias is "belief overkill", the tendency to think that all arguments point in the same direction, for example, the belief that genetically modified food has no benefits, only risks, or that it has no risks, only benefits. I have found that some people display this bias at almost every opportunity, while others do not show the effect at all -- they can recognize the down side of their opinion while still believing that the up side predominates. Similar individual differences are found in other manifestations of AOT.

Myside bias is thus not a necessary part of the human condition. Some cultural practices, such as the discouragement of certain kinds of curiosity in children, may even lead people to oppose AOT when they would otherwise do it naturally. Education in the U.S., especially college education, encourages AOT. Students who write papers that neglect arguments on the other side of their conclusion do not usually get A's, even if they write well otherwise. AOT is part of what it means to be a responsible scholar and a good student. People who oppose AOT may also stay away from higher education because they do not want to think this way. The correlation between Trump support and lack of higher education may have more than one explanation.

AOT is malleable. People can be taught to engage in it. It is part of what is called cognitive style, not the result of how the brain is "hard wired". It is related to cognitive skills, but not dependent on them. Although AOT is emphasized in college, even elementary schools encourage it, if only indirectly by teaching tolerance and openness to others who are different. Several studies have shown that it can be taught by explicit instruction, especially when the idea is explained. Understanding is important, as well as some practice combined with self-evaluation. One of the most impressive of these studies was reported in 1986 by David Perkins and his collaborators. They found that myside bias was reduced by a few weeks of explicit instruction, but not by participation in other activities such as a high-school debating team. The debating team emphasized winning the argument, as opposed to discovering the best conclusion.

AOT is not a new idea. John Stuart Mill, in "On liberty", provided perhaps its most eloquent defense, using different terms. John Dewey argued that education should teach something like it. After World War II, psychologists tried to understand support for fascism by looking at the personality traits of its supporters. This research tradition reached similar conclusions about individual differences and the role of open-mindedness, although it largely neglected the potential role of education. It is on this latter point that the most recent research provides reason for optimism, even in the darkest times.

Sunday, November 13, 2016

The kind of thinking that led to Trump's election

Donald Trump's most fervent supporters consisted of elements of the Republican coalition as it has developed over the last several years. These elements are characterized by the form of their beliefs. They confidently hold beliefs without regard to the full set of relevant arguments and evidence.

Some of their beliefs are admittedly based on "faith", which means that they are held confidently despite the absence of reasons that would matter to others who do not share their faith. These include both moral and factual beliefs: homosexuality is sinful; humans did not evolve from other animals; abortion is equivalent to murder.

Other beliefs are accepted uncritically, without considering arguments on the other side, or, worse, attributing those arguments to a conspiracy. The belief that global warming is not caused by human activity is a prime example. Conspiracy theories about collusion by scientists guarantee that all arguments on the other side can be ignored. Thus, the holders of these beliefs have no way to discover that they might be incorrect, since all counter-arguments seem to come from a diabolical conspiracy. People hold these questionable beliefs with high confidence and are thus willing to impose them on others, through their political action.

These beliefs have been supported by talk radio and Fox News. In turn, these sources are supported and encouraged by more traditional Republicans. We saw this influence earlier in the "Tea Party" movement, which feeds back into the content of these sources.

More generally, these beliefs are the result of a pattern of thinking. People who hold them are lacking in what I have called actively open-minded thinking (AOT). AOT involves self-critical reflection. In order to be confident of a belief, we must look actively for reasons why it should be rejected, modified, or held with less confidence. If beliefs go unchallenged by this sort of reflection, we have no reason to know that they deserve high confidence. We have no way to distinguish those that would survive critical reflection from those that would not.

Note that AOT is not equivalent to reflection in general. Some reflection is nothing more than bolstering, looking for reasons why we were right all along, finding ways to make everything fit with our preferred conclusion. AOT concerns the direction of reflection, not just its amount. The ultimate aim is to be fair to counter-arguments.

Nor does AOT require that we never hold beliefs with high confidence. Often, an active search can find little to say for the other side, or can lead to a modified form of the original belief that is designed to deal with problems.

AOT is built into the scientific method. Science as an institution works to look for holes in current theories. Scientists get credit for finding such holes, and ways to plug them by modifying or discarding the theory in question. For these reasons, we should have more confidence in the conclusions of science than we have in the conclusions of ways of thinking that do not involve such self-criticism. More generally, our confidence in experts should depend on how much their expertise comes from AOT.

AOT is apparently affected by cultural support. Higher education tends to support it (imperfectly, to be sure). I have been using a short questionnaire (based on a longer version devised by Keith Stanovich and his collaborators) that measures people's support for AOT. It has questions such as: "Allowing oneself to be convinced by an opposing argument is a sign of good character"; "Changing your mind is a sign of weakness" (reversed); "People should search actively for reasons why their beliefs might be wrong"; and "It is important to persevere in your beliefs even when evidence is brought to bear against them" (reversed).
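
For readers unfamiliar with how such scales work, here is a minimal sketch of the usual convention for scoring reverse-keyed items, using made-up responses on a 1-5 agreement scale (this is the standard convention, not the actual scoring procedure for this particular questionnaire):

    # Sketch: scoring a scale that contains reverse-keyed items.
    # The item names and responses below are hypothetical.
    items = {
        "convinced_by_opposing_argument_is_good": 4,   # keyed toward AOT
        "changing_mind_is_weakness": 2,                # reverse-keyed
        "search_for_reasons_beliefs_wrong": 5,         # keyed toward AOT
        "persevere_despite_contrary_evidence": 1,      # reverse-keyed
    }
    reverse_keyed = {"changing_mind_is_weakness",
                     "persevere_despite_contrary_evidence"}
    scale_min, scale_max = 1, 5

    # Flip the reverse-keyed responses, then average everything.
    scored = [(scale_min + scale_max - r) if name in reverse_keyed else r
              for name, r in items.items()]
    aot_score = sum(scored) / len(scored)   # higher = more support for AOT
    print(aot_score)                        # 4.5 for these made-up responses

Low scorers are those who, on average, endorse the reverse-keyed statements and reject the others.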

My collaborators and I have found that people who score low on this scale (opposing AOT) tend to score high on Jared Piazza's measure of belief in "divine command theory", the idea that God's laws are beyond the capacity of humans to fully understand and thus must be obeyed without question. Such a view is consistent with the sort of faith that leads people to advance moral views through politics even when they cannot argue for those views in terms that make sense to those who do not share their particular faith.

Dan Kahan has collected data with our AOT questionnaire for a representative sample of the U.S. adult population. People low in AOT tend to rate themselves as more politically conservative and more strongly Republican. (The result is described at the end of this paper.) This result is unusual because the same data set finds no correlation at all between conservatism and another measure of cognitive style, the Cognitive Reflection Test (CRT), a set of tricky arithmetic problems that is supposed to measure the tendency to reflect on initial conclusions. But I think the CRT is heavily influenced by mathematical knowledge, and it also tends to assess willingness to spend time on a problem rather than a self-questioning attitude as such. It does not specifically assess the willingness to look for reasons why a pet belief might be wrong. In sum, the results for the AOT show that, in current U.S. politics, conservatives really do think differently, in ways that help us understand their attachment to weakly supported conclusions about facts and morality.

"Conservatives" in the U.S. tend not to think that AOT is a good thing. Many think it is OK to defend their beliefs as if they were possessions. Kahan did not look at Trump supporters in particular. (The data were collected a few years ago.)  But it seems likely that this negative attitude toward AOT is more common in this group than in Republicans in general. Many people who identify themselves as conservative Republicans are very clear about their acceptance of AOT, although they tend to emphasize different issues when they discuss details. And, likewise, some liberals or progressives have the same negative attitude, although not as many of them, at the moment.

On the optimistic side, my student Emlen Metz has found remarkably high levels of respect and capacity for AOT in 8th grade students across the U.S. from a variety of schools. It seems that the ideology of AOT has become widespread in the culture of schools. This is good news for the future. We need to take a page from our youth, to look to their desire to learn from each other and their hope in the future. We need to try to get everyone, journalists, teachers, and ourselves, to understand AOT more deeply.

Even those who already favor AOT can benefit from understanding how it helps with communication, by giving us a better understanding of our discussion partners.

Wednesday, November 9, 2016

Global warming: understanding, and acceptance of expert authority

David Leonhardt, in a recent column in the New York Times (on-line Nov. 9, 2016), argues that the most important response to the Republican takeover of the U.S. is to convince Republicans that global warming is real and serious. Failure to address this problem now has very long-lasting effects.

Most of the effort to convince skeptics takes the form "95% of climate scientists agree ...". This sort of argument from authority is useful only if two conditions are met. First, people have to understand that the authorities in question derive their status from a superior method, namely, actively open-minded thinking. Science is inherently actively open-minded because scientists are constantly looking for reasons why conclusions might be wrong. Scientists (and many other scholars) get credit for poking holes in conclusions, even tentative conclusions of their own. As J.S. Mill and others have pointed out, such active openness to criticism is the only way to have confidence in any conclusions. Without the search for counter-arguments, you don't know if they exist.

Second, even with this understanding (which is not widespread), people may distrust a consensus view. Science works slowly. It took quite a while for people to accept Copernicus's alternative to Ptolemy's theory of the planets. Other false views have been the "conventional wisdom" in science for decades. People may suspect that the 95% of scientists are just engaging in herd behavior, not listening sufficiently to the 5% who disagree (if, indeed, they really exist).

Here I think we need to point out to skeptics that the argument from authority is not the only one. In particular, the existence of the (somewhat misnamed) greenhouse effect has been well known for over 100 years. It follows from a couple of basic principles of physics (or physical chemistry). Thus, in addition to acceptance of authority, there is also a role for understanding.

The two principles are roughly these: First, carbon dioxide (like other "greenhouse gases") absorbs infrared radiation, while the main components of the atmosphere (oxygen and nitrogen) do not. This fact can be demonstrated in table-top experiments, e.g. https://www.youtube.com/watch?v=kwtt51gvaJQ. Second, the earth's surface itself emits infrared radiation when it is warmed by the sun, which, in turn, warms the atmosphere, which then emits radiation as well (some of it back toward the surface), thus increasing the overall temperature of both air and land.

Some understanding of these principles leads to the conclusion that global temperature will increase as the amount of carbon dioxide increases, other things being equal. This fact was well understood by the end of the 19th century. Thus, if we understand these principles, we can see that the burden of proof now shifts to those who want to say that warming will not occur. Other things being equal, it must occur.
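
For readers who want to see the arithmetic behind "it must occur", here is the standard textbook energy-balance sketch (a toy calculation, not a climate model), using round values for the solar constant, S ≈ 1361 W/m², and the planetary albedo, α ≈ 0.3. Balancing absorbed sunlight against the surface's infrared emission, with no infrared-absorbing atmosphere, gives

\[
(1-\alpha)\,\frac{S}{4} = \sigma T_e^{4} \quad\Rightarrow\quad T_e \approx 255\ \mathrm{K},
\]

well below freezing. Adding a single layer that absorbs the surface's infrared and re-radiates it equally up and down raises the surface balance to

\[
\sigma T_s^{4} = 2\,\sigma T_e^{4} \quad\Rightarrow\quad T_s = 2^{1/4}\,T_e \approx 303\ \mathrm{K}.
\]

The observed average surface temperature, about 288 K, lies between these extremes, because the real atmosphere absorbs only part of the outgoing infrared; but the direction of the effect of adding more absorbing gas already follows from the two principles.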

The extensive efforts of climate scientists have thus been mainly about determining whether these other factors are in fact equal, or whether some of them might reduce -- or increase -- global warming. It turns out that some reduce it and some increase it, leading to uncertainty about how large the effect will be, quantitatively. (This uncertainty poses risks of its own, as the effect could be larger than expected as well as smaller.) The 5% and the 95% may disagree about the relative importance of these other factors, without denying the basic facts.

My conclusion is that, if people understood the basic science, they would have some additional reason to trust the expert consensus. It might help even if they just knew that it existed.  Some attempt to explain the basic science should be part of the public argument. It should not be limited to high-school science courses.

Likewise, actively open-minded thinking itself should be understood, not just accepted on the basis of authority.