tag:blogger.com,1999:blog-72193618529629744242024-03-09T18:48:06.043-08:00Judgment misguidedComments about the academic study of judgment and decision making and its application to public policy (sometimes more one, sometimes more the other).Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.comBlogger39125tag:blogger.com,1999:blog-7219361852962974424.post-32649861277250594152023-12-03T03:37:00.000-08:002023-12-03T03:37:39.488-08:00More on why I am not a fan of pre-registration<p>This is a draft follow-up to my earlier post on prediction, accommodation, and pre-registration: https://judgmentmisguided.blogspot.com/2018/05/prediction-accommodation-and-pre.html</p><p>I argued there that some of the appeal of pre-registration is the result of a philosophical mistake, the idea that prediction of a result is better than post-hoc accommodation of the result once it is found, holding constant the fit of the result to its explanation.</p><p>Here I comment on pre-registration from the perspective of editor of Judgment and Decision Making, a journal concerned largely with applied cognitive psychology. I try to answer some common points made by the defenders of pre-registration.</p><p>1. As editor, I find myself arguing with authors who pre-registered their data analysis, when I think that their pre-registration is just wrong. Typically, the pre-reg (pre-registration document) ignores our statistics guidelines at https://jbaron.org/journal/stat.htm. For example, it proposes some sort of statistical control, or a test for removable interactions. Although it is true that authors do not need to do just what they say in the pre-reg, they must still explain why they changed their plan, and some authors still want to fully report both their pre-registered analysis and what I think is the correct one.</p><p>I don't see why pre-registration matters here. For example, one common issue is what to exclude from the main analysis. 
Often the pre-reg specifies what will be excluded, such as the longest 10% of response times, but I often judge this idea to be seriously inferior to something else, such as using a log transform. (The longest times may even reflect the most serious responding, and their outsized influence on statistics can usually be largely eliminated by transformation.) The author might argue that both should be reported because the 10% idea was thought of beforehand. But does it matter when you think of it? If it is such an obvious alternative to using the log, then you could think of it after collecting the data. (This is related to my blog post mentioned earlier.) If the main analysis will now be based on logs, it doesn't even matter if the decision to use 10% was thought of after finding that it yielded clearer results (p-hacking).</p><p>2. It may be argued that pre-registration encourages researchers to think ahead. It might do that, but the effect would be subtle, since it mainly prompts thinking about issues that would be considered anyway.</p><p>The most common failure to think ahead is to neglect alternative explanations of an expected result. You can find that in pre-registrations as well as submitted papers. Maybe pre-registration helps a little, like a nudge. But the most common alternative explanations I see seem to be things like reversed causality (mediator vs. DV), or third-variable causality, in mediation analysis. Pre-regs sometimes propose mediation analyses without thinking of these potential problems. Another common alternative explanation is that interactions are due to scaling effects (hence "removable"). I have never seen anyone think of this in advance. Most people haven't heard of this problem (despite Loftus's 1978 paper in Memory and Cognition). Nor of the problem with statistical control (again pointed out decades ago, by Kahneman among many others), which they also put in pre-regs.</p><p>3. Does pre-registration protect against p-hacking anyway? 
Psychology papers are usually multi-study. You can pre-register one study at a time, and that is what I usually (always?) see. So you don't have to report the related studies you did that didn't work, even if you pre-registered each one, although honest reporting would do that anyway. This is a consequence of the more general problem that pre-registration does not require making the results public, whether the study works or not. Unlike some clinical trials, you can pre-register a study, do it, find that the result fails to support the hypothesis tested, and put the study in the file drawer. In principle, you can even pre-register two ways of doing the same study or analysis and then refer to the pre-registration that fits better when you write the paper. (I suspect that this has never happened. Unlike failing to report those studies that failed, this would probably be considered unethical. But, if a journal starts to REQUIRE pre-registration, the temptation might be much greater.)</p><p>4. What do you do to detect p-hacking, without a pre-reg? I ask whether the analysis done is reasonable or whether some alternative approach would be much better. If a reasonable analysis yields p=.035 for the main hypothesis test, this is a weak result anyway, and it doesn't matter whether it was chosen because some other reasonable analysis yielded p=.051. Weak results are often so strongly consistent with what we already know that they are still very likely to be real. If they are surprising, it is time to ask for another study. Rarely, I find that it is helpful to look at the data; this sometimes happens when the result is reasonable but the analysis looks contrived, so I wonder what is going on.</p><p>Pre-registration inhibits the very helpful process of looking at the data before deciding how to proceed with the analysis. This exploration is so much a part of my own approach to research that I could not possibly pre-register anything about data analysis. 
For example, in determining exclusions I often look at something like the distribution of (mean log) response times for the responses to individual items. I often find a cluster of very fast responders, separate from the rest. Sometimes the subjects in these clusters give the same response to every question, or their responses are insensitive to compelling variations that ought to affect everyone. I do this before looking at the effects of removing these subjects on the final results.</p><p>5. It seems arrogant to put your own judgment ahead of the authors'.</p><p>When it comes to judging other people's papers as editor, I think that the relationship between author and editor is not one of equality. I do not need to give equal weight to the author's judgment as reflected in the pre-reg, just as I do not need to give equal weight to the reviewers' opinions and my own.</p><p>When I handle a paper, I am the editor. It is my job to enforce my own standards, not to enforce some consensus in which everyone is equal. (I see no point in that when the apparent consensus produces so much junk. "Peer reviewers" are in such demand that any warm body will do. The situation is worst in grant-review panels, which usually don't have an "editor" in charge.) Some editors are better than others. There is nothing I can do about that. (Two of the best were Ray Nickerson and Frank Yates, both of whom died recently.) Journal editors are like judges in the legal system, CEOs of corporations, or deans of universities. They are given special status. We hope they live up to their status.</p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-87541472191950316702023-08-08T07:49:00.000-07:002023-08-08T07:49:04.048-07:00First impressions of ChatGPT<p>Here are two examples of my first interactions with ChatGPT or some similar system. In the first, I asked it to construct a new moral dilemma for an experiment. 
I had in mind the bicycle dilemma discussed by R. M. Hare in "Moral thinking" (1981, section 6.2) concerning whether the actor should move another person's bicycle (a minor inconvenience for the bicycle owner) in order to park a car (to avoid the greater inconvenience of finding another parking space). I may use both dilemmas now. This one has to be edited a bit, but it does the job, and I had never seen anything like it before. I was impressed.</p><p>=======================================================</p><p>User</p><p>I would like a moral dilemma in which the right answer is for me to be selfish because the benefit to me is greater than the harm to someone else.</p><p>ChatGPT</p><p>Alright, here's a hypothetical scenario that might suit your request:</p><p>The Dilemma of the Scarce Scholarship</p><p>You've been working exceptionally hard throughout high school, and it's finally paid off. You have the grades, the extracurricular activities, and the teacher recommendations to qualify for the most prestigious scholarship at your dream university. This scholarship covers full tuition, accommodation, and also provides a substantial monthly stipend. It's a once-in-a-lifetime opportunity that could completely change your future.</p><p>However, you're not the only one who qualifies for this scholarship. Your best friend, Alex, also qualifies. Alex comes from a more privileged background than you. Their family could afford to send Alex to the university, albeit with some financial strain. Without the scholarship, you would have to take out substantial student loans to attend, and you might even have to give up on your dream university for a less expensive option.</p><p>One day, you stumble upon an opportunity. You find out about an obscure academic competition that the scholarship board holds in high regard. 
Participation in this competition could significantly boost your chances of getting the scholarship, but very few people know about it.</p><p>The moral dilemma is this: Do you tell Alex about the competition or keep it to yourself?</p><p>If you tell Alex, you keep your integrity intact, but you significantly increase the competition for the scholarship. Alex, with their greater resources, might even have a better shot at winning the competition.</p><p>If you don't tell Alex, you maximize your chances of securing the scholarship, but at the cost of acting dishonestly toward your friend.</p><p>The self-interested option (not telling Alex about the competition) might be justifiable in this case. The potential benefits for you are life-changing. Alex, although potentially disadvantaged by not knowing about the competition, would still have the opportunity to attend university without the burden of crippling debt.</p><p>================================================================</p><p>The second set resulted from a letter I got from Maximus, a company I never heard of, which apparently contracts with Medicare (the U.S. health insurance system for old people). The letter said that Maximus had been the victim of a data breach, and my records were included. They included enough information for someone to steal my identity. So I set off to find out whether the records were accurate (hoping that they were not). Simply changing my Medicare number would itself involve a fair bit of work, since many people have it, so I spent some time on it. The Medicare web site would not let me see the information it had about me. I called a phone number at Medicare and got to a human, who informed me that the letter was sent to me by mistake and my records were not part of the breach. I was so happy that I thanked him and said good bye. Later I got more upset at Maximus, so I called the number they said would reach a professional familiar with the breach. 
The "professional" turned out to be an automated chat, and it was no help.</p><p>Next, I used the "chat" function of Medicare itself, which kept a record of our conversation (below). I wanted to say bad things about Maximus. Below is the transcript. It now appears that corporations (Maximus) and major government agencies (Medicare) are using ChatGPT-type systems to replace humans on phone and help lines. I'm sure these things will get better, but this is not ready for prime time.</p><p>------------------- Medicare.gov Live Chat Transcript ---------------------</p><p>Live chat Date:8/5/2023</p><p>Start Time:11:14:13 am</p><p>End Time:11:28:06 am</p><p>-----------------------------------------------------------------------------</p><p>[11:14:13 am]: Thank you for contacting Medicare.gov Live Chat. </p><p>[11:14:16 am]: Jonathan I want to report what I think is a seriously misleading and deceptive letter that I received from Maximus about my information being accessed. I would prefer to write an email, but I can't find anywhere to send it. I think that whoever wrote and approved this letter should be disciplined in some way. Should I proceed? Or can you give me an email address? </p><p>[11:14:16 am]: Please wait while you are connected to an agent. </p><p>[11:14:19 am]: You are now connected with Medicare.gov Live Chat.</p><p>Thank you for contacting Medicare.gov Live Chat. My name is ulie. Can you please verify your first and last name? </p><p>[11:14:42 am]: Jonathan Jonathan Baron </p><p>[11:14:59 am]: Julie Hello, I will be happy to assist you with this today. </p><p>[11:15:30 am]: Julie Who send you the letter? </p><p>[11:16:32 am]: Jonathan At the top it said Maximus, and (on the right) CMS. The return address was in Claysburg, PA. </p><p>[11:16:58 am]: Julie </p><p>Thank you.</p><p>Let me look into this for you. </p><p>[11:17:11 am]: Jonathan 'https://www.cms.gov/newsroom/press-releases/cms-responding-data-breach-contractor' is the letter I received. 
</p><p>[11:19:05 am]: Julie I appreciate your patience.</p><p>You should have received a letter notifying you of a data breach. We have determined that your Medicare number was not compromised, and you don't need to request a new one. The letter provides more information about the incident and how it is being addressed. Please note that this does not have any impact on your current Medicare benefits or coverage.</p><p>[11:19:12 am]: Julie Maximus, a contractor for the Centers for Medicare &amp; Medicaid Services, recently learned that their file transfer software, called MOVEit, was compromised. They are offering 24 months of credit monitoring and other services from Experian at no cost. You can call Experian to learn more; their phone number is included in your letter. This is not available through email. </p><p>[11:23:50 am]: Jonathan This is just what the letter said. Yes, indeed, it was incorrect, and caused me to waste a lot of time. But there were other problems. Do you want to hear about those? One problem is that the further questions suggested calling a number 'staffed with professionals familiar with this incident'. In fact, the 'professional' turned out to be (pretty cleary) an AI program like ChatGPT. I treated it that way. (ChatGPT has a thick skin. It doesn't complain if you criticize it harshly.) It could not answer my questions. I hope you are not the same robot! </p><p>[11:25:59 am]: Julie Do you have additional questions regarding the letter you received? </p><p>[11:27:25 am]: Jonathan No. So clearly I am wasting my time once again. You have missed the point, but I'm not going to try to get you back on track. I wanted an email address. You are clearly not going to give me one. </p><p>[11:27:54 am]: Julie Have a nice day. </p><p>[11:28:06 am]: The chat session has ended. Thank you for contacting Medicare.gov Live Chat. Have a nice day. 
</p><div><br /></div>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-11280521967973634272023-07-14T12:28:00.009-07:002023-07-16T03:27:28.995-07:00Diversity, noise, and merit in college admission<p>I have not studied the recent US Supreme Court decision that ended affirmative action in college admissions, but I have followed the issue in news reports and have other relevant experiences.</p><p>For a few years including 1990-92 (details lost), I was head of the committee that supervised undergraduate admissions to the School of Arts and Sciences at Penn. My main goal was to study the "predictive index", a formula used by the admissions office to predict academic achievement after admission. The index was an equally-weighted sum of the SAT aptitude test, the mean of three achievement tests (typically including math and English), and high-school class rank. Working with Frank Norman, a colleague, I discovered that the aptitude test was essentially useless once we had the other two predictors, and we tried to get the admissions office to drop it, as described in <a href="https://www.sas.upenn.edu/~baron/sat.htm">https://www.sas.upenn.edu/~baron/sat.htm</a>.</p><p>Around the same time, I attended a meeting of Ivy League admissions people at Harvard. This was after the "Ivy overlap" meeting was abolished by overzealous anti-trust action, as described in <a href="https://news.mit.edu/1992/history-0903">https://news.mit.edu/1992/history-0903</a>. The overlap meeting was a discussion of applicants for financial aid at more than one college in the elite group. It was designed to ensure that colleges were not competing for applicants on the basis of financial aid, hence ensuring that the "colluding" colleges would "admit students solely on the basis of merit and distribute their scholarship money solely on the basis of need." 
When this policy was in effect, Penn had a hard time funding need-blind admissions, so it limited this policy to Americans, although Harvard did not. The meeting was to discuss the situation.</p><p>There I had occasion to discuss our SAT report with the Harvard dean of admissions, William Fitzsimmons. In passing, he pointed out something that helps to explain some of the apparent irrationality of admission systems in general. He said, roughly, "We could fill the entire freshman class with students who got straight 800s [perfect scores] on all the tests." (He might have added "from India".) But, he might have gone on to say, we don't want a class that is full of academic achievers. We want variety. We want a lot of students who are satisfied with passing grades. These students may not become scholars, but they may benefit from an education that will help them and others in many different ways. And they will dilute the competitive atmosphere that would result from a focus on achievement alone. (Is that what "merit" means when people argue that merit should be the sole basis for admission?) So we must use other criteria.</p><p>The usual way of getting the desired variety/diversity is for admissions staff to read applications and make judgments about "character" or something like that. The psychological literature on selection is fairly clear that such judgments are very poor at predicting anything, in contrast to measures like grades and test scores (like our predictive index, suitably revised), which are pretty good at predicting other grades and test scores. The main result of these "personal" judgments is to add noise, random error, to the process (unfortunately at considerable cost, since the admissions staff are well paid, and this is a substantial part of their jobs).</p><p>In sum, current admissions policy at Harvard, Penn, and similar colleges, is not based on "merit" alone, if that is taken to mean prediction of academic achievement. 
Instead, "pretty good" applicants who rank high (but not at the very top) on merit are rejected so that applicants lower in merit can be admitted for the sake of diversity. These decisions about acceptance and rejection are made in ways that are noisy, similar to the results of a lottery.</p><p>The same sort of noise was introduced by the 1978 Bakke decision of the Supreme Court, which prohibited colleges from doing affirmative action by exactly the method that would have been best according to the psychology literature, which is to rank all applicants by objective criteria and then, if you want more Blacks, use the same ranking but a lower cut-off. This method would optimize the academic performance of both groups. And, in particular, it would minimize the number of affirmative-action admits who were not ready for academic work of the sort expected at the elite colleges. These students exist, and many of them suffer from failure that would not have occurred had they gone to a less demanding college. But the court demanded that students be evaluated "holistically", which served to increase noise and not do much else.</p><p>Now it seems that many colleges intend to do more holistic admissions in hopes that they can find minority students who deserve admission. Note that this policy is fully consistent with Fitzsimmons' desire for diversity. A big problem is that, by definition, elite colleges are those that teach courses at a fairly high level, for example, those that use my textbook "Thinking and Deciding" in undergraduate courses. Students with weak numeracy skills will have trouble. Thus, it is still important to select students who can do the work. 
Failures in courses and in graduation itself are bad outcomes and should not be seen as a "cost we must pay" for diversity.</p><p>Thus the big trouble with holistic judgment as a way of promoting diversity is that it will lead to admission of too many students who are predictably not ready for academic work at the level required. These students are "cannon fodder" for the supposedly enlightened policy that admits them.</p><p>Ideally, colleges that use any sort of holistic criteria should also use the best statistical predictors to eliminate applicants who are at high risk for failure, thus setting a lower bound to be applied to the students already selected by holistic criteria. This won't be perfect, but it seems worth a try. The Supreme Court, loose cannon that it is, might find that it is unconstitutional, just as it prohibited the optimal use of predictive indices for affirmative action.</p><p>When I was a student at Harvard 1962-66, it was just beginning a sort of affirmative action. My first-year roommate was Black. Charlie was a friend from Andover, an elite prep school that we both attended. We both would have been admitted if admission was based fully on merit, but Harvard often rejected pretty-good students like Charlie, or me, just to make room for those less qualified. But he was Black and clearly capable of doing the work. So they took him. (And so did Harvard Law School, 4 years later.)</p><p>And I was a legacy. My father, class of '37, was admitted at a time when Harvard was trying not to admit too many Jews (a situation that seems analogous to its attitude toward Asians today). My son also went there, and was thus 3rd generation and from a minority that was once discriminated against. Another roommate of mine after the first year was 7th generation, and at least one of his daughters also went to Harvard. 
Sure, this is a way of preserving privilege, and it has the effect of admitting not-so-great students, thus increasing diversity of achievement, but probably without as much risk of failure as other ways of doing this, such as sports admissions, and with less than average need for financial assistance. Family traditions are also important to some families. Legacy preference for students who need it would increase diversity simply because some of these students will not be competitive high achievers. Many other legacy students would probably be admitted anyway.</p><p>My father's situation seems similar to the situation of many "Asian" students today. Harvard in particular seems not to want "too many Asians," and that outcome would result from selection by merit alone. Yet, since merit is still part of the story, Asians who are admitted have higher test scores and high-school grades, which means that they are predicted to get better grades in college. That happens. We do not need some psychological explanation of why Asian students are relatively high achievers in college. The same used to be true of Jews, but now I think that colleges are no longer biased against them. (A small literature on "calibration" and "predictive bias" looks at the possibility of determining systemic or intentional bias on the basis of performance after selection. I think that it would show bias against Asians in some colleges today.)</p><p>In 1962, women were admitted to Radcliffe College, which was much smaller than Harvard College. As a result, the cut-off for admission of women was higher, and "Cliffies" got better grades and often dominated class discussions. Women are not generally that much smarter than men. We had a biased sample.</p><p>On the other side, affirmative action for under-represented minorities means that they are admitted with lower test scores and high-school grades. 
This fact can (fully or partially) explain why they get lower grades once admitted.</p><p>In sum, merit still matters, and it still predicts achievement. When students are more selected, they will achieve more on the average. Students selected with less attention to scores and grades will have lower achievement in college, although some ways of doing this seem better than others.</p><p>Some affirmative action for under-represented minorities seems reasonable (to me). Cultural diversity, which in the U.S. is correlated with "racial" diversity, is important for the education of all students. The increase in diversity is targeted and predictable, not simply random. But this is now off the table.</p><p>One response to the recent court ruling is to aim at diversity in ability to pay, by applying affirmative action to poor students (coupled with attempts to recruit them), those students who would ordinarily require maximum financial aid. Such a policy would deplete funds for financial aid, possibly increasing the difficulty of maintaining need-blind admissions for other students who require moderate amounts of aid. However, a policy favoring legacies would reduce the need for financial aid. Keeping legacy admission might help pay for more poor students.</p><p>Affirmative action for under-represented minorities, including the poor, must be done in combination with a strong effort to avoid predictable failure after admission. There is some optimum amount of affirmative action, and some colleges may have gone beyond it for minorities, although not for the poor.</p><p>More generally, admission to most selective colleges has never been done on the basis of merit alone. 
If other criteria are used, it is not unreasonable to ask what they are, rather than relying on noise alone.</p><p><br /></p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-65180936760051507172023-07-12T06:00:00.000-07:002023-07-12T06:00:56.610-07:00Cluster munitions for Ukraine<p> The New York Times editorial of July 10, "The flawed moral logic of sending cluster munitions to Ukraine," opposed the U.S. decision to do just that. It tried to rebut some of the arguments made in favor of the plan, but it missed at least one, the fact that most of the area involved would already be littered with mines and unexploded cluster munitions used (extensively) by Russia, so the additional care required to try to avoid them later would already be required.</p><p>The editorial, the statements of governments opposed to the plan, and some of the published letters to the Times, seemed to follow the principle that these munitions are morally wrong, whatever the consequences. Such absolute principles are, in the sense I have used (e.g., Baron and Spranca, 1997), protected values. Ideological adherence to such values surely has considerable political influence. These commitments may be held unreflectively. When people are forced to confront specific situations where the principle conflicts with some other principle, such as avoiding terrible consequences, they often admit that their principle is not absolute after all (Baron and Leshner, 2000).</p><p>In "Rules of War and Moral Reasoning" (1972, http://www.jstor.org/stable/2264969), R. M. Hare criticizes the "absolutist" deontological views of Thomas Nagel, who advocated strict adherence to accepted rules, such as those prohibiting the use of poison gas, or attacks on the Red Cross.</p><p>"The defect in most deontological theories ... 
is that they have no coherent rational account to give to any level of moral thought above that of the man who knows some good moral principles and sticks to them. He is a very admirable person, and to question his principles ... is indeed to 'show a corrupt mind'." However, to achieve such an account, "we have to adopt a 'two-level' approach, ... to recognize that the simple principles of the deontologist, important as they are, have their place at the level of character formation." Although we should be careful about violating principles that have been drilled into us (including those inculcated by military training), we need to be willing to override them on the basis of a higher level of analysis, that is, to make exceptions when they clearly lead to worse consequences than some alternative, even if our attachment to the broken rules leads us to feel guilty (just as the failure to prevent terrible consequences can also lead to guilt feelings).</p><p>Absolute rules represent a hardening of moral intuitions that are usually sufficient but should sometimes be overridden by more reflective reasoning, as suggested by Greene ("Moral tribes") and others. The opponents of cluster munitions seem to illustrate these hardened intuitions, which are protected values. Once having decided that cluster munitions are morally wrong, whatever the consequences, some opponents then engage in belief overkill, finding ways to ignore relevant facts on the other side, or to exaggerate the probability of harmful consequences resulting from action.</p><p><br /></p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-10133554375481904462023-05-03T16:11:00.001-07:002023-05-03T16:41:55.604-07:00An example of actively open-minded thinking <p> Tucker Carlson January 7, 2021 — 04:18:04 PM UTC</p><p>"A couple of weeks ago, I was watching video of people fighting on the street in Washington. 
A group of Trump guys surrounded an Antifa kid and started pounding the living shit out of him. It was three against one, at least. Jumping a guy like that is dishonorable obviously. It’s not how white men fight. Yet suddenly I found myself rooting for the mob against the man, hoping they’d hit him harder, kill him. I really wanted them to hurt the kid. I could taste it. Then somewhere deep in my brain, an alarm went off: this isn’t good for me. I’m becoming something I don’t want to be. The Antifa creep is a human being. Much as I despise what he says and does, much as I’m sure I’d hate him personally if I knew him, I shouldn’t gloat over his suffering. I should be bothered by it. I should remember that somewhere somebody probably loves this kid, and would be crushed if he was killed. If I don’t care about those things, if I reduce people to their politics, how am I better than he is?"</p><p>Quoted in New York Times, May 2, 2023 (and elsewhere)</p><p>https://www.nytimes.com/2023/05/02/business/media/tucker-carlson-text-message-white-men.html</p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-14948674073878907062023-04-08T09:18:00.000-07:002023-04-08T09:18:26.068-07:00Consensus clouds<p> I keep wanting to talk about a concept that doesn't seem to have a name. I propose "consensus cloud", but maybe someone can tell me that a name already exists.</p><p>The idea is that a group of people have an apparent consensus about some belief or set of beliefs. These belief sets often have their own names, such as "woke" or "MAGA". They include various forms of nationalism, such as the belief in the existence of a "Russian people". 
They include religious ideologies, such as the ultra-orthodox Jews in Israel, pro-Hindu politics in India, Islamist politics, and the "religious right" in the U.S.</p><p>These belief sets differ in that some of them are limited to the groups that hold the beliefs. The ultra-orthodox do not seem to care whether the rest of the world adopts their religion; they just want state support for their own communities. Other belief sets apply to everyone, such as woke ideology or, by definition, various forms of evangelical Christianity.</p><p>All these belief sets are maintained as within-group social norms. Group members want others to agree and are willing to take action to promote agreement.</p><p>All these cases also benefit from "pluralistic ignorance". Believers think that the number of co-believers is higher than it really is, in part because doubters do not make themselves known, thus avoiding the censure that would result from enforcement of a social norm.</p><p>Pluralistic ignorance is itself abetted by control of an "information space", such as a government that systematically discourages or punishes dissent, as is happening now in Russia and China. Thus a kind of stability is achieved. Many Russians seem to believe that Ukraine is part of some sort of Russian essence and thus should be a part of Russia.</p><p>Belief sets often include beliefs about why outsiders do not agree. For example, some Russians believe that resistance to their claim to Ukraine is the result of western nations' efforts to isolate Russia and hamper its development, and of the associated propaganda. Control of the information space is not necessary if the group ideology can define outsiders as part of a conspiracy against it (as Russia does). MAGA believers see such things as public health mandates and climate protection as the result of infiltration of academia and government (the "deep state") by leftist ideologues who produce pseudo-science to support their politics. 
The same sort of rationale often allows people with delusions to dismiss counter-arguments as coming from part of the conspiracy against them.</p><p>The beliefs of interest seem to descend on people like a cloud that becomes a fog. People cannot see outside of it.</p><p>Of course, some people do consider alternatives and question the relevant beliefs. We might expect individual differences in adherence to any sort of consensus cloud. Those who are prone toward myside bias as a trait are more likely to join a consensus, and, conversely, those who endorse the standards of actively open-minded thinking are more resistant. Similarly, those who are prone to accept conspiracy theories are more accepting, especially when opponents are seen as part of a conspiracy.</p><p>Possibly the trait of "intellectual humility" can work both ways, as it could make people less accepting of their own conclusions and more willing to listen to others. Its effect thus depends on which conclusions and which other people are affected. Intellectual humility is not the same as actively open-minded thinking, which implies that humility is needed only when a conclusion is the result of little thinking by anyone, or the result of poor thinking.</p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-72983799061555910822021-09-05T15:57:00.001-07:002022-02-28T12:37:34.816-08:00Vaccines for children<p>The U.S. is looking for trouble. Schools are opening and no children below 12 are vaccinated (except perhaps the few in clinical trials, which I hope are happening). These kids could spread COVID-19 fairly quickly. So here is an idea.</p><p>Approve emergency use for 11-year-olds. Meanwhile monitor ages 11 and 12 for medical conditions that could be side effects. Continue to do this. 
Even an increase in some condition in age 11 after approval might indicate a special problem.</p><p>Then move to age 10 in a couple of weeks. And so on. This is something like a regression discontinuity design, but some of it could be based solely on comparisons after approval.<br /></p><p>Apparently the FDA sees its duty as only toward the individual, not toward others that the individual might infect. But data from a procedure like this might give it enough information to act, in successive steps. Something is better than nothing.</p><p>Added 2/28/2022: Vaccines for children are still a problem in the U.S. Apparently the doses were too low, and the effects were either very small (ages 3 and 4, not approved yet) or wearing off quickly (ages 5-11, approved). So the same idea could be done with increased dosage. Now it seems we will have to wait more months for more trials.</p><p>At this point, it seems that most people want to get back to "normal", despite the existence of continued spread that could be significantly slowed by greater and more effective vaccination. A return to normal would be more reasonable if we didn't have to worry about long covid, which is apparently a debilitating chronic condition, something like that caused by Lyme disease. As others remove their masks and continue to resist vaccination, some of us are just going to get better masks and continue to avoid indoor restaurants and any kind of public gathering, even though we, too, were hoping for some sort of return to normal.</p><p><br /></p>Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-75549631378097051192018-05-10T13:17:00.000-07:002018-05-10T13:17:13.154-07:00Prediction, accommodation and pre-registration<div zoompage-fontsize="20">
Some scientists think that confirming a prediction made before data collection is more convincing than explaining the data after they arrive. I think that this belief is one source (among others) of support for the practice of pre-registration, in which authors publish a plan for an experiment they intend to do, explaining the hypothesis, methods, and data analysis in advance.<br /><br />Paul Horwich ("Probability and evidence", 1982, pp. 108-117) </div>
<div zoompage-fontsize="20">
has a <a href="http://finzi.psych.upenn.edu/~baron/misc/horwich.pdf" zoompage-fontsize="20">complex discussion </a>of this issue, under the heading "Prediction vs. accommodation", but I want to try to provide a more intuitive account of why I do not think that it matters.<br /><br />Let me take an example based very loosely on my own research. Suppose I want to study evaluative judgment of acts vs. omissions. I got interested in this because of vaccine resistance in the real world. It seemed likely that this resistance resulted from a bias against potentially harmful acts (vaccination, with side effects) compared to potentially more harmful omissions (non-vaccination, with exposure to a natural disease). I thought that this bias was a sort of heuristic, in which people tend to think of action as inherently risky, regardless of what they are told about its risks.<br /><br />I designed an experiment to examine judgments of acts and omissions, using a variety of scenarios. Some of the scenarios involved money rather than disease risk, so it was possible to include items involving monetary gains as well as losses. I expected that subjects would just be biased toward omissions, period.<br /><br />When I got the data, I noticed something unexpected. Yes, I found the expected bias for vaccination scenarios and for money scenarios that involved losses. In the money scenarios, subjects evaluated action leading to a loss as somewhat worse than omission (doing nothing) leading to the same loss with a higher probability. But when I looked at gains, the results were reversed. Subjects were biased toward action, not omission. They evaluated action leading to a chance of winning some money more highly than omission leading to a somewhat larger chance of winning the same amount.<br /><br />It was easy to explain my results in hindsight. Action, as opposed to omission, simply amplified the effect on choice of whatever outcome it produced. 
Subjects were not generally biased against action: it depended on whether the outcomes of action were good or bad. The association with action served to focus attention on the effect of the action, so the action looked better if its outcome was good, and worse if its outcome was bad. (In real life, the amplification effect exists, along with the omission heuristic, but it was not unexpected, as I knew that Janet Landman had already reported it with a different dependent measure. And this "one experiment" is actually a conflation of several done in collaboration with Ilana Ritov.)<br /><br />Suppose I had stopped there and reported what I expected and what I found. Compare this to a case in which I predicted (expected) the amplification effect. Should your belief in the existence of the amplification effect depend on whether I predicted it or not (all else being equal)?<br /><br />Note that these two cases can be compared with a third case, in which I report the result for gains but falsely claim that I predicted it, when, actually, I expected a different result. This is the sort of thing that pre-registration of research hypotheses prevents. But, if the two cases just mentioned do not differ in the credibility of amplification as an explanation, such "cheating" would simply be a type of puffery, cheap talk designed to influence the reader without conveying any truly relevant information, like many other things that authors do to make their conclusions sound good, such as baldly asserting that the result is very important. The simple way for editors to avoid such puffery is to change all statements about prediction to statements of questions that the research might answer. Thus "I predicted a general bias toward omission" would be edited to say "I asked whether the bias toward omission was general, for gains as well as losses."<br /><br />Let me now return to the original question of whether prior prediction matters. 
In both cases, we have two plausible explanations: a bias against omission, and an amplification effect. What determines the credibility of the amplification hypothesis? Of course, it fits the data, but that helps equally regardless of the prediction. Some doubt may remain, whether I predicted the result or not.<br /><br />The other determinant of credibility of an explanation is its plausibility, compared to the plausibility of alternatives. In a well written paper, I would give my reasons for thinking that various explanations are plausible. Readers would also have their own reasons. Above I suggested the reasons for each: a heuristic against action; and an attention effect favoring actions. In a paper, I would try to spell these out "with four-part harmony" (to steal a line from Arlo Guthrie), with citations of related work (such as the "feature-positive effect"), and so on.<br /><br />Should it then add anything that I predicted the result that I got? The answer to this depends on whether my prediction provides additional evidence for this explanation, that is, additional reason to believe it, aside from everything you now know about its plausibility. But, if I have done my job as a writer, I have told you everything I know that is relevant to making my final explanation (amplification) plausible. The fact that I predicted it provides additional relevant information only if I am withholding something relevant that I know. I have no incentive to do that, unless it is somewhat embarrassing. Authors might be embarrassed to say that they made the prediction because God came to them in a dream and told them what was true. Or, more realistically, "I predicted this result intuitively, for reasons I cannot explain, but you should take this seriously because my intuition is very good." 
As a reader, you are not going to buy that.<br /><br />In some research, the favored explanation seems pretty implausible, even though it is consistent with the data presented, despite the efforts of the author to convince us that it is plausible. These cases include some of the "gee whiz" studies in social psychology that raise questions about replicability, but also some of the research in which a precise mathematical model fits the data surprisingly well but otherwise seems to come out of the blue sky. These cases of low plausibility are the ones where claims that the results were predicted (e.g., in a pre-registration) are thought to be most relevant.<br /><br />For example, suppose I found no significant "omission bias" overall but did find it if I restrict the sample to those who identify themselves as Protestant Christian. I supported this restriction with (highly selected) quotations from Protestant texts, thus explaining the result in terms of religious doctrine. You would rightly be suspicious. You would (rightly) suspect that you could find just as many quotations as I found to support the conclusion that Protestant doctrine emphasized sins of omission as well as sins of commission, and that other religions were no different. Would it help convince you of the reality of my effect if you knew that I predicted it but didn't tell you any more about why? You might just think that I was a little nuts, and, well, lucky.<br /><br />Pre-registration thus does not solve the problem posed by implausible explanations. Of course they might be true, despite being implausible, but that must be established later. What matters in making an explanation legitimately credible are, first, its fit with the data (compared to alternative explanations) and, second, its fit with other things that we know (again, compared to alternatives). 
The order in which a researcher thought of things, by itself, provides no additional relevant information.<br /><br />Going beyond my restatement of Horwich's arguments, analogous reasoning applies to data analysis. One of the nasty things that researchers do is fiddle with their data until they get the result they want. For example, I might fail to find a significant difference in the mean ratings of acts and omissions, but I might find a difference using the maximum rating given by each subject to omissions and to actions, across several scenarios. Pre-registration avoids this fiddling, if researchers follow their pre-registered plan. Doing this, however, discourages the researcher from making reasonable accommodation to the data as they are, such as eliminating unanticipated but nonsensical responses, or transforming data that turn out to be highly skewed.<br /><br />But note that many of the statistical options that are used for such p-hacking are ones that do not naturally fit the data very well. Again, it is possible to make up a story about why they do fit the data, but such stories usually tend to be unconvincing, just like the example of Protestantism described above. Thus, data analysis, like explanations, must be "plausible" in order to be convincing.<br /><br /></div>
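The cost of this kind of fiddling can be made concrete with a small simulation (an illustrative sketch of my own, not taken from any of the studies discussed): if a researcher computes both the mean difference and the difference in maxima, and reports whichever one "works", the false-positive rate exceeds the nominal 5% even when there is no true effect at all.

```python
import random

random.seed(1)

def experiment(n=30):
    # Two groups drawn from the SAME distribution: the null hypothesis is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two candidate test statistics for the same question.
    mean_diff = abs(sum(a) / n - sum(b) / n)
    max_diff = abs(max(a) - max(b))
    return mean_diff, max_diff

sims = [experiment() for _ in range(4000)]

# Calibrate each statistic's own 5% critical value from the null simulations,
# so that each test used ALONE rejects about 5% of the time, as it should.
crit_mean = sorted(s[0] for s in sims)[int(0.95 * len(sims))]
crit_max = sorted(s[1] for s in sims)[int(0.95 * len(sims))]

# "Report whichever analysis worked": reject if EITHER statistic is significant.
either = sum(m > crit_mean or x > crit_max for m, x in sims) / len(sims)
print(f"False-positive rate when reporting whichever test 'works': {either:.3f}")
```

Because the two statistics are correlated but not identical, the combined rate lands noticeably above 5% (though below the near-10% it would reach for two unrelated tests). That is the free "significance" that flexibility in analysis buys, and it is what pre-registration is meant to prevent.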
Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com1tag:blogger.com,1999:blog-7219361852962974424.post-35819597080677213352017-12-25T06:53:00.002-08:002017-12-25T06:57:08.751-08:00Replication reservationsReplication of previously reported studies is sometimes useful or even necessary. Drug companies often try to replicate published research before investing a great deal of money in drug development based on that research. Ordinary academic researchers often want to examine more closely some published result, so they often include a replication of that result in a larger design, or just try to see if they can get the effect before they proceed to make modifications. Failures to replicate are often publishable (e.g., Gong, M., & Baron, J. The generality of the emotion effect on magnitude sensitivity. Journal of Economic Psychology, 32, 17–24, 2011), especially when several failures are included in a meta-analysis (e.g.,<a href="http://journal.sjdm.org/14/14321/jdm14321.html"> http://journal.sjdm.org/14/14321/jdm14321.html</a>). Finally, people may try to replicate a study when they disagree with its conclusions, possibly because of other theoretical or empirical work they have done.<br />
<br />
Researchers are now spending time trying to replicate research studies in the absence of such purposes. In one project, some students are in the process of trying to replicate most of the papers published in Judgment and Decision Making, the journal I have edited since 2006 (<a href="https://osf.io/d7za8/">https://osf.io/d7za8/</a>). Let me explain why this bothers me.<br />
<br />
First, these projects take time and money that could be spent elsewhere. The alternatives might be more worthwhile, but of course this depends on what they are.<br />
<br />
Second, if you want to question a study's conclusions, it is often easier to find a problem with the data analysis or method of the original study. A large proportion of papers published in psychology (varying from field to field) have flaws that can be discovered this way. Many of these flaws are listed in <a href="http://journal.sjdm.org/stat.htm">http://journal.sjdm.org/stat.htm</a>. It is possible to publish papers that do nothing but "take down" another published paper, especially if a correct re-analysis of the data yields a conclusion contradicting the original one.<br />
<br />
Third, complete replication of a flawed study often succeeds quite well, because it replicates the flaws. A recent paper in the Journal of Personality and Social Psychology (Gawronski et al., 2017. Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making, 113: 343-376) replicated every study in the paper itself. The replication involved new subjects but not new stimuli, and the data analysis ignored variations among the stimuli in the size and direction of the effects of interest (among other methodological problems).<br />
<br />
Fourth, what do we conclude when a study does not replicate? Fraud? Dishonesty in reporting? Selective reporting? Luck? Sometimes these explanations can be detected by looking at the data (e.g. <a href="http://retractionwatch.com/2013/09/10/real-problems-with-retracted-shame-and-money-paper-revealed/#more-15597">http://retractionwatch.com/2013/09/10/real-problems-with-retracted-shame-and-money-paper-revealed/#more-15597</a>). And none of them can be inferred from a failure to replicate. So what is the point? Is it to scare journal editors into accepting papers only when they have very clear results that do not challenge existing theories or claims?<br />
<br />
Blanket replication of every study is a costly way to provide incentives for editors. Perhaps these "replication factors" for journals are an antidote to the poison of "impact factors". Impact factors encourage publication of surprising results that will get news coverage, and will need to be cited, just because they are surprising. But the very fact that they are surprising increases the probability that something is wrong with them. A "replication index" will discourage publication of such papers. But it will also encourage publication of papers that go to excess to replicate studies within the paper, use large samples of subjects, and, in general, cost a lot of money. This will thus tend to drive out of the field those who are not on the big-grant gravy train (or who are not in schools that provide them with generous research funding). It is better for editors to ignore both concerns.<br />
<br />
Fifth, I think that some good studies are unlikely to replicate. I try to publish them anyway. One general category consists of studies that pit two effects against each other, only one of which is interesting. An example is the "polarization effect" of Lord, Ross and Lepper (1979): subjects who opposed or favored capital punishment were presented with two studies, one showing that it deterred serious crimes and the other showing that it did not deter; both groups became more convinced of their original position, because they found ways to dismiss the study that disagreed with it. This result has in fact been replicated, but other attempts to find polarization have failed. The opposite effect is that presenting people with conflicting evidence moves them toward a more moderate position. In order for the polarization effect to "win", it must be strong enough to overcome this rational tendency toward moderation. The conditions for this to happen are surely idiosyncratic. The interesting thing is that it happens at all. If the original study is honestly reported and shows a clear effect, then it does happen.<br />
<br />
Another example is a study recently published in Judgment and Decision Making (Bruni and Tufano. The value of vulnerability: The transformative capacity of risky trust, 12, 408-414, 2017). The finding of interest was that people who made themselves "vulnerable", by showing that they had trusted someone who had previously been untrustworthy, evoked more trustworthy behavior in trustees who knew of their vulnerability. Again, this result must be strong enough to counter an opposite effect: these vulnerable people could also be seen as suckers, ripe for exploitation. I suspect that this result will not replicate, but I also think it is real. (I examined the data quite carefully.) It may well depend on details of the sample of subjects, the language, and so on. This is not going to help the "replicability index" of the journal (or the impact factor, for that matter, as it is quite a complex study), but I don't care, and I shouldn't care.<br />
<br />
Of course, other important studies simply cannot be replicated, because they involve samples of attitudes in a given time and place, e.g., studies of the determinants of political attitudes, the spread of an epidemic, or the structure of an earthquake. What often can be done instead is to look at the data.<br />
<br />
In my view, the problem is not so much "replicability" but rather "credibility". Replications will be done when they are worth doing for other reasons. But for general credibility checking, it is probably more efficient to look at the data and the methods. To smooth the path for both replication and examination of data, journals should welcome replications (with either result when the original result is in doubt) and they should require publication of data whenever possible.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-11947574402016281382017-02-28T09:42:00.000-08:002017-02-28T09:42:28.374-08:00Explanations of deontological responses to moral dilemmasHundreds of experiments have now shown, in various ways, that responses to moral dilemmas often follow deontological rules rather than utilitarian theory. Deontological rules are rules that indicate whether some category of actions is required, permissible, or forbidden. Utilitarianism says that the best choice among those under consideration is that one that does the most expected good for all those affected. For example, utilitarianism implies that it is better to kill one person to save five others than not to kill (other things being equal), while some deontological rule may say that active killing is forbidden, whatever the consequences.<br /><br />In many of these experiments, deontological responses (DRs) seem to be equivalent to responses that demonstrate cognitive biases in non-moral situations. For example, the omission bias favors harms of omission over less harmful harms caused by acts, in both moral and non-moral situations (Ritov & Baron, 1990). This similarity suggests that the DRs arise from some sort of error, or poor thinking. Much evidence indicates that the cognitive processes supporting moral and non-moral judgments are largely the same (e.g., Greene, 2007). If this is true, the question arises of what sort of thinking is involved, and when it occurs. Several (mutually consistent) possibilities have been suggested:<br /><br />1. Dual-system theory in its simplest form ("default interventionist" or "sequential") says that DRs arise largely as an immediate intuitive response to a dilemma presented in an experiment, once the dilemma is understood. 
Then, sometimes, the subject may question the initial intuition and wind up giving the utilitarian response as a result of a second step of reflective thought. The same two-step sequence has been argued to account for many other errors in reasoning, including errors in arithmetic, problem solving, and logic. By this view, the cognitive problem that produces DRs is a failure to check, a failure to get to the second step before responding. This dual-system view has been popularized by Daniel Kahneman in his book "Thinking, fast and slow". I have provided evidence that it is largely incorrect (Baron & Gürçay, 2016).<br /><br />2. Very similar to this sequential dual-system theory, but different, is the theory of actively open-minded thinking (AOT; Baron, 1995). AOT begins from a view of thinking as search and inference. We search for possible answers to the question at hand, arguments or evidence for or against one possible answer or another, and criteria or values to apply when we evaluate the relative strengths of the answers in view of the arguments at hand. AOT avoids errors in thinking by searching for alternative possibilities, and for arguments and goals that might lead to a higher evaluation of possible answers other than those that are already strong. By this view, the main source of errors is that thinking is insufficiently self-critical; the thinker looks for support for possibilities that are already strong and fails to look for support for alternatives. In the case of moral dilemmas, the DRs would be those that are already strong at the outset of thinking and would not be subject to sufficient questioning, even though additional thinking may proceed to bolster these responses. The main difference between this view and the sequential dual-system view is that AOT is concerned with the direction of thinking, not the extent of it, although of course there must be some minimal extent if self-criticism is to occur. 
AOT also defines direction as a continuous quantity, so it does not assume all-or-none "reflection or no reflection". By this account, utilitarian and deontological responses need not differ in the amount of time or effort required for them. Bolstering and questioning need not differ in either direction, in their processing demands.<br /><br />3. A developmental view extends the AOT view to what happens outside of the experiment (Baron, 2011). Moral principles develop over many years, and they may change as a result of questioning and external challenges. DRs may arise early in development, but that may also depend on the child's environment, how morality is taught. Reflection may lead to increasingly utilitarian views as people question the justification of DRs, especially in cases where following these DRs leads to obviously harmful outcomes. When subjects are faced with moral dilemmas in experiments, they largely apply the principles that they have previously developed, which may be utilitarian, deontological or (most often) both.<br /><br />4. We can replace "development of the individual" with "social evolution of culture" (Baron, in press). Historically, morality may not have been distinguished from formal law until relatively recently. Law takes the form of DRs. Cultural views persist, historically, even when some people have replaced them with other ways of thinking. Kohlberg has suggested that this sequence happens in development, where the distinction between morality and law is made fairly late. Thus, the course of individual development may to some extent recapitulate the history of cultures.<br /><br />These alternatives have somewhat different implications for the question of how to make people more utilitarian, if that is what we want to do. (I do.) But the implications are not that different. 
A view that is consistent with all of them is to emphasize reflective moral education, presenting arguments for and against utilitarian solutions, and encouraging students to think of such arguments themselves (Baron, 1990).<br /><br />Recently I and others have written several articles criticizing the sequential dual-system view of moral judgment and other tasks, such as problem solving in logic and mathematics (e.g., Baron & Gürçay, 2016; Pennycook et al., 2014). I think it is apparent that, at least in the moral domain, the role of different mechanisms is not a big deal. All these views are consistent with the more general claim that DRs can be understood as errors, and that they need not be seen as "hard wired", but, rather, malleable.<br /><br />References<br /><br />Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77–87.<br /><br />Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221–235.<br /><br />Baron, J. (2011). Where do non-utilitarian moral rules come from? In J. I. Krueger and E. T. Higgins (Eds.) Social judgment and decision making, pp. 261–278. New York: Psychology Press.<br /><br />Baron, J. (in press). Utilitarian vs. deontological reasoning: method, results, and theory. In J.-F. Bonnefon & B. Trémolière (forthcoming). Moral inferences. Hove, UK: Psychology Press. <br /><br />Baron, J. & Gürçay, B. (2016). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory and Cognition. doi:10.3758/s13421-016-0686-8<br /><br />Greene, J. D. (2007). The secret joke of Kant’s soul, in W. Sinnott-Armstrong, Ed., Moral psychology, Vol. 3: The neuroscience of morality: Emotion, disease, and development, pp. 36–79. MIT Press, Cambridge, MA.<br /><br />Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014). Base-rates: Both neglected and intuitive. 
Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544--554.<br /><br />Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.<br /><br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-25864652854325583652017-02-26T07:29:00.002-08:002017-02-26T07:29:34.052-08:00Two posts on climate and one on health insuranceThe editors of <a href="http://www.regblog.org/">RegBlog</a> have accepted three of my recent posts. Rather than duplicate them here (which I am now allowed to do), I am instead making links to them:<br />
<br />
<a href="http://www.regblog.org/2017/01/17/baron-geographic-boundaries-social-cost-carbon/">How geographic boundaries determine the social cost of carbon;</a><br />
<br />
<a href="http://www.regblog.org/2017/01/18/baron-discount-rate-social-cost-carbon/">The discount rate for the social cost of carbon</a>;<br />
<br />
<a href="http://www.regblog.org/2017/02/23/baron-justifying-health-insurance/">Justifying health insurance</a>.<br />
<br />
All of these are philosophical comments about regulatory issues that are likely to be addressed by the Trump administration, the U.S. Congress, and possibly the courts. But the issues will persist.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-37637798121646010842016-12-23T12:39:00.003-08:002016-12-23T12:39:56.641-08:00More on AOT and the election (rejected op-ed)The people who voted for Donald Trump were, like those who voted for Hillary Clinton, a varied lot. Some were long-standing Republicans who had doubts about Trump but even more doubts about the idea of another Democratic president. Others were upset about some particular policy that has, in fact, hurt them. But it seems that a great many were older white Christians who had no college education. When I read what some of these voters said about their choice, I find much of it, to oversimplify just a little, false. Something seemed odd about the way they came to hold beliefs with apparent high confidence when these beliefs fly in the face of both good evidence and expert opinion.<br /><br />If my perception is correct, then I think that psychology can both explain part of what they are doing and provide an optimistic solution for the future, if only the long-term future. Specifically, they are not thinking very well, and the kind of thinking they are not doing is influenced by schooling and culture. I call it actively open-minded thinking, or AOT. The purpose of AOT is to avoid myside bias (also called confirmation bias). Myside bias is a tendency to think in a way that favors pet beliefs: beliefs based on intuition alone, beliefs that one wants to have, or beliefs that have resulted from indoctrination. AOT counteracts myside bias by being open to alternatives and fair in the use of evidence and arguments. Beyond openness and fairness, AOT looks actively for alternatives and reasons why pet beliefs might be wrong.<br /><br />AOT is the only way to have justified confidence in our beliefs. If we do not check them in this way, we cannot distinguish beliefs that deserve our confidence from those that do not. 
If we impose such untested beliefs on others through our political behavior, we risk hurting them, in ways that deserve their censure. We are being poor citizens.<br /><br />AOT is also the basis of expertise of the sort we can trust. Scientists deserve our attention because the very nature of science, as a group activity, is to look for alternatives and evidence. Scientists get credit for poking holes in conventional scientific wisdom, leading to gradual improvement (and sometimes not so gradual). Good journalists, by checking their facts before going public, do the same. That is the difference between, well, the New York Times and fake news. None of these institutions is perfect, but they are more likely to reach good conclusions than they would reach without critical reflection. If people understand AOT, they will understand how to evaluate sources. They do not have to think through everything by themselves, because they can trust others who know how to think well, and they can recognize when this is being done. It is thus important for people to understand AOT and why it helps, as well as being able to do it themselves, when needed.<br /><br />People differ enormously in their understanding of AOT and in their tendency to do it, as opposed to displaying myside bias. For example, one manifestation of myside bias is "belief overkill", the tendency to think that all arguments point in the same direction, for example, the belief that genetically modified food has no benefits, only risks, or that it has no risks, only benefits. I have found that some people display this bias at almost every opportunity, while others do not show the effect at all -- they can recognize the down side of their opinion while still believing that the up side predominates. Similar individual differences are found in other manifestations of AOT.<br /><br />Myside bias is thus not a necessary part of the human condition. 
Some cultural practices, such as the discouragement of certain kinds of curiosity in children, may even lead people to oppose AOT when they would otherwise do it naturally. Education in the U.S., especially college education, encourages AOT. Students who write papers that neglect arguments on the other side of their conclusion do not usually get A's, even if they write well otherwise. AOT is part of what it means to be a responsible scholar and a good student. People who oppose AOT may also stay away from higher education because they do not want to think this way. The correlation between Trump support and lack of higher education may have more than one explanation.<br /><br />AOT is malleable. People can be taught to engage in it. It is part of what is called cognitive style, not the result of how the brain is "hard wired". It is related to cognitive skills, but not dependent on them. Although AOT is emphasized in college, even elementary schools encourage it, if only indirectly by teaching tolerance and openness to others who are different. Several studies have shown that it can be taught by explicit instruction, especially when the idea is explained. Understanding is important, as well as some practice combined with self-evaluation. One of the most impressive of these studies was reported in 1986 by David Perkins and his collaborators. He found that myside bias was reduced by a few weeks of explicit instruction, but not by participation in other activities such as a high-school debating team. The debating team emphasized winning the argument, as opposed to discovering the best conclusion.<br /><br />AOT is not a new idea. John Stuart Mill, in "On liberty", provided perhaps its most eloquent defense, using different terms. John Dewey argued that education should teach something like it. After World War II, psychologists tried to understand support for fascism by looking at the personality traits of its supporters.
This research reached similar conclusions about individual differences and the role of open-mindedness, although this tradition largely neglected the potential role of education. It is on this latter point that the most recent research provides reason for optimism, even in the darkest times.<br /><br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-53800109755615110092016-11-13T16:08:00.002-08:002016-11-13T16:49:34.816-08:00The kind of thinking that led to Trump's electionDonald Trump's most fervent supporters consisted of elements of the Republican coalition as it has developed over the last several years. These elements are characterized by the form of their beliefs. They confidently hold beliefs without regard to the full set of relevant arguments and evidence.<br />
<br />
Some of their beliefs are admittedly based on "faith", which means that they are held confidently despite the absence of reasons that would matter to others who do not share their faith. These include both moral and factual beliefs: homosexuality is sinful; humans did not evolve from other animals; abortion is equivalent to murder.<br />
<br />
Other beliefs are accepted uncritically, without considering arguments on the other side, or, worse, attributing those arguments to a conspiracy. The belief that global warming is not caused by human activity is a prime example. Conspiracy theories about collusion by scientists guarantee that all arguments on the other side can be ignored. Thus, the holders of these beliefs have no way to discover that they might be incorrect, since all counter-arguments seem to come from a diabolical conspiracy. People hold these questionable beliefs with high confidence and are thus willing to impose them on others, through their political action.<br />
<br />
These beliefs have been supported by talk radio and Fox news. In turn, these sources are supported and encouraged by more traditional Republicans. We saw this influence earlier in the "Tea Party" movement, which feeds back into the content of these sources.<br />
<br />
More generally, these beliefs are the result of a pattern of thinking. People who hold them are lacking in what I have called actively open-minded thinking (AOT). AOT involves self-critical reflection. In order to be confident of a belief, we must look actively for reasons why it should be rejected, modified, or held with less confidence. If beliefs go unchallenged by this sort of reflection, we have no reason to know that they deserve high confidence. We have no way to distinguish those that would survive critical reflection from those that would not.<br />
<br />
Note that AOT is not equivalent to reflection in general. Some reflection is nothing more than bolstering, looking for reasons why we were right all along, finding ways to make everything fit with our preferred conclusion. AOT concerns the direction of reflection, not just its amount. The ultimate aim is to be fair to counter-arguments.<br />
<br />
Nor does AOT require that we never hold beliefs with high confidence. Often, an active search can find little to say for the other side, or can lead to a modified form of the original belief that is designed to deal with problems.<br />
<br />
AOT is built in to the scientific method. Science as an institution works to look for holes in current theories. Scientists get credit for finding such holes, and ways to plug them by modifying or discarding the theory in question. For these reasons, we should have more confidence in the conclusions of science than we have in the conclusions of ways of thinking that do not involve such self-criticism. More generally, our confidence in experts should depend on how much their expertise comes from AOT.<br />
<br />
AOT is apparently affected by cultural support. Higher education tends to support it (imperfectly, to be sure). I have been using a short questionnaire (based on a longer version devised by Keith Stanovich and his collaborators) that measures people's support for AOT. It has questions such as: "Allowing oneself to be convinced by an opposing argument is a sign of good character"; "Changing your mind is a sign of weakness" (reversed); "People should search actively for reasons why their beliefs might be wrong"; and "It is important to persevere in your beliefs even when evidence is brought to bear against them" (reversed).<br />
<br />
My collaborators and I <a href="http://dx.doi.org/10.1016/j.jarmac.2014.09.003">have found</a> that people who score low on this scale (opposing AOT) tend to score high on Jared Piazza's measure of belief in "divine command theory", the idea that God's laws are beyond the capacity of humans to fully understand and thus must be obeyed without question. Such a view is consistent with the sort of faith that leads people to advance moral views through politics even when they cannot argue for those views in terms that make sense to those who do not share their particular faith.<br />
<br />
Dan Kahan has collected data with our AOT questionnaire for a representative sample of the U.S. adult population. People low in AOT tend to rate themselves as more politically conservative and more strongly Republican. (The result is described at the end of <a href="http://finzi.psych.upenn.edu/~baron/ms/kahancmt.pdf">this paper</a>.) This result is unusual because the same data set finds no correlation at all between conservatism and another measure of cognitive style, the Cognitive Reflection Test (CRT), a set of tricky arithmetic problems that is supposed to measure the tendency to reflect on initial conclusions. But I think the CRT is heavily influenced by mathematical knowledge, and it also tends to assess willingness to spend time on a problem rather than a self-questioning attitude as such. It does not specifically assess the willingness to look for reasons why a pet belief might be wrong. In sum, the results for the AOT show that, in current U.S. politics, conservatives really do think differently, in ways that help us understand their attachments to weakly supported conclusions about facts and morality.<br />
<br />
"Conservatives" in the U.S. tend not to think that AOT is a good thing. Many think it is OK to defend their beliefs as if they were possessions. Kahan did not look at Trump supporters in particular. (The data were collected a few years ago.) But it seems likely that this negative attitude toward AOT is more common in this group than in Republicans in general. Many people who identify themselves as conservative Republicans are very clear about their acceptance of AOT, although they tend to emphasize different issues when they discuss details. And, likewise, some liberals or progressives have the same negative attitude, although not as many of them, at the moment.<br />
<br />
On the optimistic side, my student Emlen Metz has found remarkably high levels of respect and capacity for AOT in 8th grade students across the U.S. from a variety of schools. It seems that the ideology of AOT has become widespread in the culture of schools. This is good news for the future. We need to take a page from our youth, to look to their desire to learn from each other and their hope in the future. We need to try to get everyone, journalists, teachers, and ourselves, to understand AOT more deeply.<br />
<br />
Even those who already favor AOT can benefit from understanding how it helps with communication, by giving us a better understanding of our discussion partners.Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com2tag:blogger.com,1999:blog-7219361852962974424.post-35296708242458397872016-11-09T08:57:00.003-08:002016-11-10T08:50:53.210-08:00Global warming: understanding, and acceptance of expert authorityDavid Leonhardt, in a recent column in the New York Times (on-line Nov. 9, 2016), argues that the most important response to the Republican takeover of the U.S. is to convince Republicans that global warming is real and serious. Failure to address this problem now has very long lasting effects.<br />
<br />
Most of the effort to convince skeptics takes the form "95% of climate scientists agree ...". This sort of argument from authority is useful only if two conditions are met. First, people have to understand that the authorities in question derive their status from a superior method, namely, actively open-minded thinking. Science is inherently actively open-minded because scientists are constantly looking for reasons why conclusions might be wrong. Scientists (and many other scholars) get credit for poking holes in conclusions, even tentative conclusions of their own. As J.S. Mill and others have pointed out, such active openness to criticism is the only way to have confidence in any conclusions. Without the search for counter-arguments, you don't know if they exist.<br />
<br />
Second, even with this understanding (which is not widespread), people may distrust a consensus view. Science works slowly. It took quite a while for people to accept Copernicus's alternative to Ptolemy's theory of the planets. Other false views have been the "conventional wisdom" in science for decades. People may suspect that the 95% of scientists are just engaging in herd behavior, not listening sufficiently to the 5% who disagree (if, indeed, they really exist).<br />
<br />
Here I think we need to point out to skeptics that the argument from authority is not the only one. In particular, the existence of the (somewhat misnamed) greenhouse effect has been well known for over 100 years. It follows from a couple of basic principles of physics (or physical chemistry). Thus, in addition to acceptance of authority, there is also a role for understanding.<br />
<br />
The two principles are roughly these: First, carbon dioxide (and other "greenhouse gases") absorbs heat from infrared radiation, while the more basic components of the atmosphere (oxygen and nitrogen) do not. This fact can be demonstrated in table-top experiments, e.g. <a href="https://www.youtube.com/watch?v=kwtt51gvaJQ">https://www.youtube.com/watch?v=kwtt51gvaJQ</a>. Second, the earth's surface itself emits infrared radiation when it is warmed by the sun, which, in turn, warms the atmosphere, which then emits radiation as well, thus increasing the overall temperature of both air and land.<br />
<br />
Some understanding of these principles leads to the conclusion that global temperature will increase as the amount of carbon dioxide increases, other things being equal. This fact was well understood by the end of the 19th century. Thus, if we understand these principles, we can see that the burden of proof now shifts to those who want to say that warming will not occur. Other things being equal, it must occur.<br />
<br />
The extensive efforts of climate scientists have thus been mainly about determining whether these other factors are in fact equal, or whether some of them might reduce -- or increase -- global warming. It turns out that some reduce it and some increase it, leading to uncertainty about how large the effect will be, quantitatively. (This uncertainty poses risks of its own, as the effect could be larger than expected as well as smaller.) The 5% and the 95% may disagree about the relative importance of these other factors, without denying the basic facts.<br />
<br />
My conclusion is that, if people understood the basic science, they would have some additional reason to trust the expert consensus. It might help even if they just knew that it existed. Some attempt to explain the basic science should be part of the public argument. It should not be limited to high-school science courses.<br />
<br />
Likewise, actively open-minded thinking itself should be understood, not just accepted on the basis of authority.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-24912129668814662962016-06-25T10:39:00.000-07:002016-06-25T10:39:37.976-07:00What's wrong with parochialism?Recently, popular political movements have been anti-immigrant, anti-free-trade, and more generally anti-globalization. What these positions share is a lack of concern for outsiders. For example, U.S. discussions of the Trans-Pacific Partnership (which has many advantages and disadvantages for everyone) tend to ignore completely the apparent large benefits for Vietnam. The technical term for this lack of concern is parochialism. Parochialism is partly built into our political language. The use of "we" refers to fellow citizens, sometimes even excluding members of recently arriving ethnic groups. But some people, in their thinking if not in their speech, consider effects on outsiders, or even think of themselves as members of larger groups such as Europeans or citizens of the world. This kind of cosmopolitan thinking was once even fashionable, as expressed, for example, in John Lennon's (1971) song "Imagine", and it seems to be coming back into fashion among some young people in Europe.<br /><br />The simple argument against parochialism is that it is morally arbitrary, hence unjustified. The question of who should count in our moral judgments is a very basic one. The answer cannot be derived from competing philosophical approaches such as utilitarianism or deontology in general. So the usual attack on parochialism of any sort is to ask why a distinction should matter. This was the logical move made against slavery, racial discrimination, and discrimination against women. Of course, the defenders of these institutions sometimes tried to answer this attack by pointing to supposed empirical facts about, for example, how women's emotionality made them unsuitable as voters or office holders. 
But these arguments were ultimately recognized as post-hoc justifications, with little empirical basis. So the basic argument was, "If you care about what happens to X, why shouldn't you care equally about Y, even though Y is a different race, sex, or nationality?" This kind of logical argument is powerful, yet it is rarely made in public debates.<br /><br />One counter-argument comes from a different analogy, loyalty to close kin. Equal treatment of everyone would imply that you should care about a stranger's child, spouse, or parent as much as you care about your own. If it is morally acceptable to give preference to loved ones, why not co-nationals too? This objection has several possible answers. One I like is that morality should concern itself with choices among options that are on the table. And the option of sacrificing one's own child for a greater good is not something that most of us would consider. We just could not bring ourselves to do it. (More precisely, our willingness to sacrifice our own concerns and desires is limited, so we should make our decisions so as to do the most good overall within this limit.)<br /><br />Assuming that this argument works for loved ones -- and I think it does -- then could it also work for co-nationals? Yes, it could, if we feel such strong loyalty to our co-nationals. But we can take a step back and ask where our loyalty comes from. In the case of children, it is biologically determined. However, in the case of co-nationals, it is the result of an acquired abstract category. Even if humans evolved to be loyal to those in their immediate group of non-kin, the extension of group membership to total strangers requires a learned categorization of certain strangers as members of this group. Such categorization cannot plausibly be the result of natural selection, as it is, once again, arbitrary. If we can define "our group" as "German citizens", we could just as easily define it as "European citizens". 
People who reflect on this arbitrariness may come to change their loyalties.<br /><br />In sum, it may be too late for those who feel very strongly about their co-nationals. From their perspective, parochialism can be justified, assuming that they cannot modify their feelings by reflection. Yet we can still object to the cultural forces that lead people to think this way, including the assumptions of political discourse itself.<br /><br />A second line of argument for parochialism concerns the definition of responsibility that comes from specific social roles. Social organization gives people decision-making authority in limited domains. When people violate these limits, they risk losing their authority, and they set a precedent for subverting a useful system. Police officers are not supposed to make decisions about punishment. That role is left for courts and judges, which are limited in yet other ways.<br /><br />This is also a good argument, but is the role of a citizen just to advance what is best for their co-nationals? Many citizens do not limit their role in this way, and they are not even considered to be bad citizens as a result. Recent immigrants often think about others from their country of origin who might also want to immigrate. Some people take into account the effects of policy on other countries to which they have secondary loyalty. And still others think about issues that affect the whole world, such as climate change. We have no written rule against such a view of citizenship, nor any obvious social norms. The narrow definition of the citizen's role as serving only national interest is one that some people arrive at by themselves. It is not part of the social structure of roles, unlike the roles of police officers and judges.<br /><br />Citizens do have a special responsibility toward their own nation, if only because they are in the best position to know what is good for it. 
They cannot rely on foreigners to decide on issues that have mostly local effects. But the exercise of this responsibility does not imply that outsiders should simply be neglected. It is a responsibility that applies much more to some issues than to others. As citizens, we have a special responsibility to inform ourselves about national and local issues that don't have much effect on outsiders, and there are many of these. But just as our concern about city and state issues does not justify neglect of national issues, so our concern with national issues does not justify neglect of the world outside.<br /><br />In sum, the justification for parochialism of the sort we see in current politics seems weak. Would it be possible to confront people with arguments against this view in general? We don't know unless we try.<br /><br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-59912365851440417112016-06-18T12:16:00.002-07:002016-06-18T12:16:54.056-07:00Learning social rulesI just read a forthcoming paper (in Mind and Language) by Shaun Nichols and several others, which argues that it is rational to develop moral rules that distinguish (for example) acts and omissions. The relevant idea of "rational" is from rational concept formation.<br /><br />When you learn a new concept, it is best not to generalize it too much. In some experiments, subjects were given examples of rule violations, for learning, and tested with other examples. When the learning examples were of the form "X did an action A that caused outcome C to happen", subjects generalized this to similar test examples with other examples of A and C. But they did not consider examples of the form "X failed to do B, which would have prevented C from happening" to be violations of the rule. 
In order to teach subjects that the rule applied to omissions as well as acts, the training had to include omission cases as examples of rule violations.<br /><br />This behavior of subjects makes perfect sense in the case of arbitrary rules, and even legal rules. But I was bothered because I don't think moral rules should be arbitrary in this way.<br /><br />One possible explanation of the difference is that sophisticated moral rules arise from reflection on the social rules that we have learned. Specifically, we reflect by asking questions about purposes (which I call "search for goals" in some places). When we see an example of a rule and ask about its purpose, we might discover what general purpose it serves. We can then think about how to generalize it so that it serves that purpose. If it does not serve the purpose in some cases, or if it could serve the same purpose better by a modification, then we can think about improving it.<br /><br />The same process is part of what it means to understand a design such as a mathematical formula, according to my interpretation of David Perkins' book "Knowledge as Design". For example, we understand the formula for the area of a parallelogram (and its associated arguments) by finding that the argument for this rule serves the purpose of converting the parallelogram to a rectangle, and we already know how to find the area of a rectangle. Once we discover this connection, we can apply the same principle elsewhere, as Max Wertheimer shows in the first chapter of "Productive thinking". We can transfer the principle to cases where it applies while avoiding transfer to other cases.<br /><br />Similarly, a law with a "loophole" is an example of a rule that is crafted in a way that fails to serve its purpose. 
We can fix laws by removing loopholes.<br /><br />A law that gives rights, such as the right to vote, drive, or own property, to men but not to women does not seem to serve reasonable accounts of the purposes of such rights-granting laws. We have trouble coming up with a purpose that applies to men but not women. Any such purpose seems arbitrary; it could just as well distinguish people with odd and even birthdays. Such a search for purposes is, I think, the sort of reflection that Peter Singer discussed in "The expanding circle".<br /><br />Thus it is one thing to learn a rule, but it is another to understand the rule in a way that allows us to ask whether it serves its purpose as well as it could, and, if not, what could replace it. It may be rational, from the perspective of learning, to learn whatever we are taught about what to do and not do, but, if this is all we did, we would cut off the possibility of improving these rules.<br /><br />Could most deontological rules survive this kind of questioning?<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-25872630819715539522016-06-01T15:11:00.002-07:002016-06-01T15:13:20.303-07:00Alternatives to mediation (in data analysis)The following is not vetted. These are some thoughts inspired by several papers I have dealt with recently. It is also about statistics, a new topic for this blog, but one I will probably write more about.<br />
<br />
In some studies we measure several variables, and we are primarily interested in the correlation between two of them, e.g., cognitive style and political ideology. When this correlation is found, we are also interested in what the other variables can tell us about why this happens. For example, we might be interested in things like religiosity or education. Cognitive style might, for example, affect religiosity, and religion, in turn, could affect social conservatism, which is (in some countries) related to religious teaching. Let's call the target variable (political ideology) Y, the main predictor X, and the other variable M (for "mediator" or "middle"). Assume that X and Y are correlated, and this correlation is what we are trying to explain.<br />
<br />
The logic of classical mediation is best understood in terms of simple and partial correlations. For this purpose, partial correlations are equivalent to regressions, since the significance test is the same. If the dependent variable is Y, the predictor is X, and the mediator is M, we need to show that r(X,M) and r(Y,M|X) are both significant. The second is a partial correlation (or regression coefficient, or semi-partial). The first is consistent with the claim that X affects M. The second is consistent with the claim that M affects Y and that this effect is not the result of the correlation of M with X.<br />
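As a concrete sketch of this logic (the data, effect sizes, and variable names are invented for illustration, not taken from any study discussed here), the following simulates a true X → M → Y chain and computes the simple correlation r(X,M) and the partial correlation r(Y,M|X) by residualizing on X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical mediation chain: X affects M, and M affects Y.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)   # X -> M
Y = 0.5 * M + rng.normal(size=n)   # M -> Y

def partial_corr(a, b, control):
    """Correlation of a and b after regressing out `control` from each."""
    def resid(v):
        beta = np.cov(v, control, ddof=0)[0, 1] / np.var(control)
        return v - beta * control
    return np.corrcoef(resid(a), resid(b))[0, 1]

r_xm = np.corrcoef(X, M)[0, 1]        # clearly positive: X and M covary
r_ym_given_x = partial_corr(Y, M, X)  # positive: M predicts Y beyond X
print(round(r_xm, 2), round(r_ym_given_x, 2))
```

With a genuine causal chain, both quantities come out substantially positive, which is the pattern the classical test treats as evidence of mediation.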
<br />
Mediation tests are most useful when X is an experimental manipulation. Even then we worry about r(Y,M|X) being an artifact. It could be that the causality is an effect of Y on M rather than an effect of M on Y. Or, Y and M could both be affected by some unmeasured fourth variable. We could avoid these problems by experimentally manipulating both X and M. Even then one might argue that the experimental manipulation of M is affecting something different from the M that varies spontaneously in the population or the M that is affected by Y.<br />
<br />
More generally, in many tests of mediation, almost anything could cause anything else. Moreover, if X, M, and Y are all influenced by roughly the same causal factors, then M will "mediate" the "effect" of X on Y, or the "effect" of Y on X, if M is just the variable that is most highly correlated with these underlying causes. I have never seen a mediation analysis that attempts to correct for the extent to which the different variables correlate with the causal factors that each is supposed to be sensitive to. This sort of validity coefficient surely affects what counts as a significant mediator and what does not.<br />
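This worry can be made concrete with a small simulation (all numbers hypothetical). Below, X, M, and Y share one underlying cause Z and have no causal links among themselves, yet M passes both classical mediation criteria simply because it is the best-measured proxy for the common cause:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# One latent cause Z drives X, M, and Y; there is NO X -> M -> Y chain.
Z = rng.normal(size=n)
X = 0.7 * Z + rng.normal(size=n)
M = 0.9 * Z + 0.5 * rng.normal(size=n)   # M is the cleanest proxy for Z
Y = 0.7 * Z + rng.normal(size=n)

def partial_corr(a, b, control):
    """Correlation of a and b after regressing out `control` from each."""
    def resid(v):
        beta = np.cov(v, control, ddof=0)[0, 1] / np.var(control)
        return v - beta * control
    return np.corrcoef(resid(a), resid(b))[0, 1]

r_xm = np.corrcoef(X, M)[0, 1]        # "X affects M" criterion: positive
r_ym_given_x = partial_corr(Y, M, X)  # "M affects Y beyond X": also positive
print(round(r_xm, 2), round(r_ym_given_x, 2))
```

Both criteria are met even though M mediates nothing, which is the point about common causes: the test cannot distinguish a causal chain from a shared source of variation.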
<br />
Note also that any test of mediation is about variation. It is possible that M does affect Y but that the variation in M is mostly error by the time you remove the common variance between M and X (by partialing).<br />
<br />
So what should we do instead? One thing is to look at the simple correlations between X and M and between M and Y. If both are large enough (with "significant" being one criterion of that, although significance depends on sample size, which is irrelevant here), then we would conclude that variation in M is a possible explanation of the correlation between X and Y. It correlates with both of them. If it does not correlate with one of them, and if we have no reason to expect any additional variables that affect M and X or Y in opposite directions (thus obscuring a real correlation), then M could not explain the X-Y correlation. (For example, X correlates with sex, but sex does not correlate with some other measure Y. Hence sex is not a possible explanation.)<br />
<br />
Here "explaining the correlation" simply means that some source of variation exists that affects X, Y, and M. The fact that M is part of this list tells us something about what that source of variation might be.<br />
<br />
Can we say more than this? Consider a stricter criterion. Suppose we regress M on X and Y, and we require that both regression coefficients are present (high enough by some standard). Such a result would seem to rule out the possibility that r(X,M) or r(Y,M) are high simply because X and Y are correlated. Suppose, for example, r(X,M|Y), the regression coefficient or partial correlation, is zero even though r(X,M) is positive. This would suggest that X does not really have any common source of variation with M.<br />
<br />
This does not quite follow. For example, it could be that X and M are affected equally by some set of variables Z, but X is affected by some additional variables that also affect Y. Thus, X and M are redundant measures of some of the factors that affect X, M, and Y. Similarly, it could happen that X and M are affected by exactly the same set of variables, but X is a more reliable measure than M. This could reduce the role of M to zero in a regression model.<br />
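The reliability point can also be simulated (again with invented numbers). Below, X and M measure the same latent factor, but X measures it more reliably; M's coefficient in a regression of Y on X and M collapses toward zero even though its simple correlation with Y remains substantial:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# X and M measure the SAME latent factor Z; X is the more reliable measure.
Z = rng.normal(size=n)
X = Z + 0.3 * rng.normal(size=n)   # little measurement error
M = Z + 1.5 * rng.normal(size=n)   # much measurement error
Y = Z + rng.normal(size=n)         # outcome driven by the same factor

# OLS of Y on an intercept, X, and M.
A = np.column_stack([np.ones(n), X, M])
coefs, *_ = np.linalg.lstsq(A, Y, rcond=None)
b_x, b_m = coefs[1], coefs[2]

r_my = np.corrcoef(M, Y)[0, 1]  # simple correlation of M with Y

# b_m is near zero, not because M is irrelevant, but because X
# measures the common factor better; r_my stays clearly positive.
print(round(b_x, 2), round(b_m, 2), round(r_my, 2))
```

So a near-zero regression coefficient for M does not show that M shares no source of variation with Y; it may only show that another predictor measures that source more reliably.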
<br />
However, if we regress M on X and Y and find that both coefficients are high enough, then it is more plausible that M is indeed capturing some of the common variance affecting X and Y (compared to the simple correlations of M with X and M with Y).<br />
<br />
In general, I do not think we can learn much from anything other than the simple correlations r(X,M) and r(M,Y). If both of these are positive, then whatever M "measures" is a possible source of variation that accounts for the correlation between X and Y. But regression of M on X and Y could also be useful, if both coefficients are positive.<br />
<br />
Mediation tests do have some uses. They can be useful as a manipulation check, a way of testing whether an experimental variable did what it was supposed to do. And, if its effect varies across subjects, we can ask whether that variation helps to explain the variation in outcomes. For example, cognitive therapy for depression (manipulated) changes how people think about the causes of bad events (measured by the Attributional Style Questionnaire), which, in turn, affects their depressive symptoms. The therapy is focused on the thinking, not the symptoms, so this is a manipulation check.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-14302760403888115642016-05-08T16:17:00.001-07:002016-05-08T16:17:37.460-07:00AutonomyMany principles of "ethics" are designed to protect autonomy. Regulations concerning human subjects in research take a broad view of what counts as "coercion", to the point of including excessive rewards that make it difficult for potential subjects to refuse. Programs to reduce excessive birth rates were criticized as violations of autonomy, even if they consisted of advertising and TV soap operas (as I discussed in my book "Judgment misguided"). Now, some of the opposition to "nudges" is that they, too, violate autonomy, because they influence people, which is, of course, their purpose.<br /><br />This concern has always struck me as a little strange, because it seemed to me that well-intended attempts at influence were a drop in the bucket of all the things that influence decisions, including much more blatant and less justifiable attempts by others. Nudges from government are minor influences compared to marketing strategies by corporations, or expectations of families. Philosophically, one might even ask what is left if all these influences are removed. We are all influenced by our culture.<br /><br />But the drop-in-the-bucket argument is weak at best. If autonomy violations are wrong, then the existence of large amounts of wrongness in the world is no justification for adding more.<br /><br />A more interesting property of ethical concerns about autonomy violations is that all (or almost all) of the alleged violations are actions. The operating moral principle seems to be that we should not take action to influence the decisions of others, but we need take no action to prevent autonomy violations that already exist.
If women in poor countries are being pressured by their husbands and in-laws to have more children than they want, those concerned about excessive undesired births are under no obligation to try to prevent such pressure. Similarly, if advertisers induce people to eat lots of unhealthy food, the operating principle does not oblige us to try to stop them, although it prevents us from trying too hard to nudge them in the other direction.<br /><br />Thus, the problem with all the existing influences is not that they are a large bucket. Rather, it is that the concern with autonomy arises from a moral principle that distinguishes acts and omissions and is thus not primarily concerned with consequences. That is, it is a deontological rule. If people follow this rule, as compared to one that brings about the best consequences, then it makes things worse.<br /><br />The stereotype of the Jewish mother is someone who does not hesitate to butt into her children's affairs, or those of other people, for their own good. Perhaps we need more Jewish mothers. They won't always be correct about what is good for other people, but trying to act in this way might have a higher success rate than doing nothing in the name of autonomy, and thus ceding control to those who base their efforts to influence others on their own self-interest or narrow ideology.<br /><br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-51478516222708469742016-02-27T07:10:00.003-08:002016-02-27T07:10:56.169-08:00Preliminary thoughts on a utilitarian decision analysis of parochial votingSuppose that you can vote in a referendum on a proposal. You believe that the proposal is good for the whole world, but bad for your particular group. For example, it might be a proposal to accept a trade agreement that would be beneficial on the whole but would hurt your particular business.
Or it might reduce global warming but hurt the coal industry, and you live in a coal-producing region. How should you vote? The options are to vote for the proposal, vote against it, or not vote.<br />
<br />
Assume that voting has a small cost. You must take the time to do it, and get to the polling place. Let us first consider only this cost and the effects of the proposal itself (regardless of whether you vote for it or not). The probability that your vote will affect these consequences is very small, but not zero. The expected benefit of a vote for one side is the probability times the benefit if your vote matters. It is well known that the expected benefit to you personally is usually not enough to justify the cost of voting, even though the cost is small. If this benefit and the cost of voting are all you care about, then, in terms of these values, you should not vote.<br />
<br />
Now add the assumption that you care about other people, not as much as you care about yourself, but some. Clearly, if the proposal can affect enough other people whom you care about, it becomes worth the small cost of voting, in terms of these values. If you care equally about all other people, then (by assumption) you should vote for the proposal, because it is best for the world.<br />
<br />
If, on the other hand, you care only about other people in your group, it could be worth your time to vote against the proposal. This conclusion is less likely for two reasons. First, the group is smaller. Second, many members of your group may care about outsiders, enough so that they would prefer the proposal to pass. Your caring about them should consider their altruistic values as well as their self-interested values, and this inclusion would reduce further the value to you of your vote against the proposal. Given these considerations, it is less likely that voting against the proposal would be worth your time, compared to the case in which you care about everyone equally. (You may also care some about the world but more about the members of your group. This would be in-between.)<br />
<br />
More generally, though, this analysis suggests that people who care about the world are the ones who are most likely to find voting worthwhile, and these people should vote for the proposal. Those who care only about their group are less likely to find voting worthwhile, but, if they find it worthwhile, they should probably vote against the proposal (unless they think that most members of their group prefer the proposal to pass).<br />
<br />
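For concreteness, the expected-utility comparison above can be sketched with made-up numbers (every quantity below is an illustrative assumption, not an estimate):

```python
# Illustrative parameters, all invented for the sketch
p = 1e-6                # probability that one vote changes the outcome
cost = 5.0              # cost of voting (time, travel), in dollars
harm_member = 100.0     # harm of the proposal to each group member
gain_outsider = 1.0     # benefit of the proposal to each outsider
n_group = 100_000       # size of your group
n_world = 300_000_000   # everyone affected by the proposal
altruism = 0.1          # weight placed on each other person's welfare

# Purely selfish voter: voting against saves you harm_member with
# probability p, which cannot come close to covering the cost.
ev_selfish = p * harm_member - cost

# Parochial voter: also weighs the harm to the other group members,
# but the pool of beneficiaries is small.
ev_parochial = p * (harm_member
                    + altruism * harm_member * (n_group - 1)) - cost

# Universal altruist: weighs everyone, so votes FOR the proposal.
net_world = gain_outsider * (n_world - n_group) - harm_member * n_group
ev_universal = p * (altruism * net_world - harm_member) - cost
```

Under these numbers only the voter who cares about everyone finds voting worth its cost; the parochial voter's pool of beneficiaries is too small to overcome the tiny probability of being decisive, matching the argument in the text.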
Note that voting against the proposal can be seen as a vote to harm the outsiders. (And not voting at all is a lesser harm, because it has half the effect on the outcome, compared to voting for the proposal.) From most moral perspectives, this harm is difficult to justify. Thus, we could argue that the sort of parochial values that lead to this vote are ones to be discouraged. They are, in a sense, "evil desires".<br />
<br />
Now consider the possibility that the act of voting has positive benefit to the self, as well as a small cost. This is, again, about the behavior of voting and not about the consequences of the adoption or rejection of the proposal. Suppose you get some pleasure out of cheering for your team, or from cheering for humanity.<br />
<br />
Note that the pleasure of cheering could possibly be obtained without actually voting. You could, for example, go to political rallies and participate with the crowd in cheering one side or booing the other side. If you did this on the side against the proposal, you could get this pleasure without hurting any outsiders. However, it is possible that the pleasure is greater when you actually vote. This is part of the ritual. In this case, those who are motivated this way are selfish in a way much like those with parochial values. They take pleasure in behaviors that hurt other people, without (by assumption) compensating benefit for anyone else. This too is the sort of desire we have reason to discourage.<br />
<br />
Moreover, it is possible (even likely) that the pleasure of acting on behalf of your group is based on a failure to understand or appreciate the more basic argument given above. People may think that supporting their group is the morally correct thing to do, in terms of the consequences of the proposal being adopted or not. If people were convinced that voting for the proposal is better in terms of its consequences, they would take less pleasure in voting against it.<br />
<br />
Likewise, if people understood the low expected utility of voting against the proposal for others, even those in their group, they might not get enough pleasure from voting against the proposal to make it worthwhile for them to vote at all.Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-81177637124484577792015-01-17T07:39:00.000-08:002015-01-17T07:39:50.340-08:00What citizens may need to understand about democracyI have been applying for grants that, so far, I have not gotten, but I have summarized the idea that I want to test, so I am putting the gist of it here, extracted from the last proposal submitted (in collaboration with David Rand at Yale).<br /><br />Democracy is a human invention, a "design" that serves certain functions. But citizens do not understand it very well, and, as a result, they often fail collectively to take full advantage of what it can do. Here is what many people don't understand.<br /><br />Political participation is cost-ineffective in advancing self-interest or the parochial interest of a small group, but is cost-effective in advancing the common good. The small influence that each citizen has over policy outcomes renders political participation nearly useless as a way of advancing self-interest (Downs, 1957). But this small influence is more than compensated by the large number of people potentially affected by these outcomes, if the citizen has sufficient concern for these others (Edlin, Gelman & Kaplan, 2007, 2008). In terms of self-interest, the expected benefit of a vote is proportional to 1/N, where N is the number of voters. But in a utilitarian calculation, which assumes some altruism (concern for others), the benefit of voting must be multiplied by N, assuming that only voters are affected by the outcome. This multiplication cancels the dilution caused by being one voter among N.
And the total benefit can be orders of magnitude larger if we think in terms of effects on those who do not vote, such as children, foreigners, and future people who do not yet exist. The larger the circle we consider, the more the effect of a vote is magnified, and, therefore, the more worthwhile it is to use one's vote (or other forms of participation) to advance our concerns for the good of others.<br /><br />Imagine you are faced with a conflict. You believe that a proposal would be good for the world but bad for you. Should you vote for it or against it? Of course it depends on how good it is for the world and how bad it is for you. But even if it is very bad for you it is unlikely to be worth your time to vote against it, because the probability is very low that your vote would matter. On the other hand, if it is reasonably good for the world, it would affect many people, and that could make it worthwhile to vote for it, if you care somewhat about all those people.<br /><br />The same sort of conflict arises between self and nation, or nation and world, or present and future. "Nation" is an in-between case. But a proposal that harms your nation a little but helps the world a lot would still be worth voting for, again because the numbers matter. In many cases what is good for your nation is also good for the world. Nationalism, in which people promote the good of their nation regardless of the harm done to outsiders, is a form of parochialism.<br /><br />Moreover, because government can coerce people to cooperate (behave in ways that benefit everyone rather than the self alone), the cost of supporting government in doing this, by voting, is much smaller than the cost of cooperating voluntarily. Because of this principle, people often vote to give government the power to make them and others do what they would not be willing to do if left to their own devices.
For example, voters sometimes vote for tax increases but almost never donate money to the government.<br /><br />If citizens around the world understood these arguments, they would still disagree on what constitutes the common good, but surely some of the current malfunctions of democracy could be mitigated.<br /><br />The idea that government is a design to provide for the common good by solving social dilemmas (or providing public goods) is not widely understood; some people seem to think that government sanctions are never justified (except perhaps for military defense and property protection), or that government is a tool to be used for competition between parochial groups.<br /><br />The idea that democratic government is a design to serve the common good should not be difficult to understand. Thus, failures of understanding can be attributed to failures of education.<br /><br />It is apparent to anyone who follows the news that the idea of democracy is not on a smooth path to universal approval and success in bringing about the benefits of good government. The economic successes of some authoritarian countries (such as China) have given many people the idea that democracy is unnecessary, when, in fact, these are probably isolated lucky cases, contradicting the general result. A better understanding of how democracy works could lead at least to a little healthy skepticism about whether such authoritarian governments can continue to produce good results, or, alternatively, whether they will increasingly attempt simply to maintain power, whatever the costs.<br /><br />Many countries seem unable to switch from an authoritarian to a democratic system, and this problem is often attributed in part to the necessity of cultural support for the idea of democracy itself.
Part of this support may consist of the kind of understanding described here, in particular the idea that the obligation of a citizen is to help advance national interest rather than tribal interest or self-interest. (The potential conflict between national interest and world interest is not at issue here, since the formation of stable democracy in a nation is almost always in the world interest as well.)<br /><br />Even long-standing and stable democracies seem unable to make democracy work as well as it should, as a result of such problems as polarization and "populism". Part of the substance of polarized conflicts involves understanding of what government can accomplish and why it is needed, and understanding of the limits of parochialism. And populism often involves a different sort of narrowness, namely, excessive attention to immediate effects at the expense of long-term effects.<br /><br />Increased understanding of democracy could be the low-hanging fruit in attempts to improve the human condition by improving the functioning of government. If it results in changes in civics education, these are relatively inexpensive. Possibly the failures of democracy are the result of misunderstandings of how democracy should work, and correcting these misunderstandings even in a minority of citizens can have substantial effects on outcomes. Note that civics education does not happen only in schools but also in efforts at "nation building" through instruction and outreach, and through political communication more generally.<br /><br />References<br /><br />Downs, A. (1957). An economic theory of democracy. New York: Harper and Row.<br /><br />Edlin, A., Gelman, A., & Kaplan, N. (2007). Voting as a rational choice: Why and how people vote to improve the well-being of others. Rationality and Society, 19, 293–314.<br /><br />Edlin, A., Gelman, A., & Kaplan, N. (2008). Vote for charity's sake.
The Economists’ Voice, 5(6), article 6.<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-9342674729897780192014-12-11T09:00:00.002-08:002014-12-11T09:00:24.839-08:00The report on torture used by the CIA<br />
<br />
<a href="http://www.npr.org/2014/12/11/370022528/talking-point-or-proof-of-value-cia-tactics-and-the-second-wave-plot">An NPR interview</a> this morning made two interesting points about the recent report on torture by the CIA, the first of which I had thought of myself. This was that it is possible that some authors of the report, or perhaps just those who summarize it, are engaging in belief overkill. They are saying that the extreme methods of interrogation <i>never</i> did any good. They don't need to say this. They could just say, more cautiously, that the amount of good done was so small as not to justify the use of these methods. Belief overkill is the distortion of beliefs so that all conflict disappears and no arguments oppose the favored conclusion, even if the conflict would be easy to resolve.<br />
<br />
The second point, made in passing, was that it may in fact be impossible to find a case in which the extreme methods led the CIA from ignorance to certainty all at once. But intelligence does not work like this. It is all about probabilities. (The interview did not use this term, but it was implied.) A confession extracted under extreme pressure could change the probability of some conclusion, such as an imminent attack or the presence of a terrorist at some location, and the probability might change enough to warrant a change in plans. Such changes could happen if a confession agreed, or disagreed, with other evidence already available.<br />
<br />
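The probabilistic point can be illustrated with a toy Bayesian update (all numbers here are invented for the sake of the example):

```python
# Toy Bayesian update: a single piece of evidence shifts a probability
# without producing anything like certainty. All numbers are invented.
prior = 0.10            # P(hypothesis) before the new evidence
lr = 4.0                # evidence assumed 4x likelier if hypothesis is true
post_odds = (prior / (1 - prior)) * lr
posterior = post_odds / (1 + post_odds)   # roughly 0.31
```

A shift from 0.10 to about 0.31 is nowhere near proof, but it could easily be enough to change a plan, which is all the probabilistic view of intelligence requires.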
Given the possibility that probabilities were affected, the difficulty of claiming that these methods "never mattered at all" seems to increase. Again, this claim is not needed. The methods are bad in themselves and in the precedents they set for others.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-21966377796571111472014-09-16T12:48:00.000-07:002014-09-16T12:48:38.710-07:00Foreign contributions to U.S. think tanksThe following (slightly revised) was a letter to the Public Editor of the New York Times, written on 9/15/2014.<br />
<br />
I was upset at the Times's coverage of foreign contributions to U.S. think tanks such as Brookings. I was reminded of this by the recent edition of The Economist, which points out how attacking these institutions for accepting contributions could make it seem more acceptable when authoritarian regimes elsewhere try to block contributions to NGOs that might threaten them (p. 20, Sept 13-19 edition).<br />
<br />
As one letter pointed out, most of these think tanks do not hide the sources of their funds. It would be bad if they did that, but they don't. Beyond this, I do not see what is the least bit wrong with accepting foreign contributions.<br />
<br />
1. U.S. policy has major effects on foreigners. Why should they not be able to try to make sure that we at least know about such effects? We ought to consider them more than we do. If the foreign contributions do succeed in bringing different points of view to the attention of U.S. citizens and government, just what is so bad about that? Are we supposed to set policy without considering such effects? As a citizen and voter, I certainly do not think this way, and I am not in a tiny minority on this point.<br />
<br />
2. It would be at least a little hypocritical for the U.S. to get up on a high horse about government funding of private institutions in other countries, including NGOs. The U.S. government has done this for decades, and, I believe, still does it. (And I don't see anything wrong with what we do either.)<br />
<br />
3. The Times articles hinted that the conclusions of research organizations are influenced by their funding. I have seen several denials of this, and I believe that the culture of places like Brookings is against such influence. But, as a professor in a major research university with much the same culture, I can see that such influence happens, in subtle ways. And it is not the least bit limited to foreign sources. (I have received grants from Israel, and I do not feel any pressure at all to conform to policies of the Israeli government, although perhaps the probability that I would participate in an academic boycott of Israel has gone from .01 to .001.) The U.S. government itself has distorted research in major ways. Right now it is engaged in a massive campaign to boost biological research at the expense of traditional psychology, which I think is a mistake. In my own field, partly as a result of this shift, most research is now supported by the Templeton Foundation, which has as its ultimate purpose the rapprochement of religion and science. The topics it supports are limited to those that can be shoe-horned into this framework, so I and some of my colleagues have learned to speak a certain way, in order to get money from them. (Of course, they too have been flexible in choosing topics that would not offend atheists too much, lest they offend 90% of the research audience they are trying to reach.) I could go on with story after story about how research has been distorted by influences of this sort, including large amounts of money from corporations such as Exxon. But is it even distortion? Or is this the way science is supposed to work, with inputs from consumers as well as producers? In sum, I don't see why foreign influences are being singled out as distortions of research, when they are in essence a drop in the bucket.<br />
<br />
4. Many of the "foreign interests" mentioned struck me as extremely odd. Apparently Norway gives money to Brookings, and this is supposed to be in support of Norway's government policies. I do not know which policies are at issue here, but Norway is a tiny country with probably the most enlightened policies in the world concerning the alleviation of world poverty and protection of the environment. Yes, they have oil, but they know it is running out. I find it difficult to believe that the money they give to Brookings is in hopes that Brookings will release a report favoring off-shore drilling off the Norwegian coast. The drilling is a done deal. More likely, Norway contributes to Brookings in hopes that it will help solve some environmental problem.<br />
<br />Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-21377877250827792552011-06-09T15:55:00.000-07:002011-06-09T15:57:41.949-07:00Isolation effect in world problems<a href="http://www.nytimes.com/2011/06/08/opinion/08friedman.html?hp">This column</a> by Thomas Friedman reminded me of an issue that has been on my mind for a while, the inter-relatedness of world problems. I discussed it in the book for which this blog is named, especially the chapter on population.<br /><br />The world today has a long list of inter-related problems: food, fresh water, energy production, biodiversity, rising oceans from higher temperatures (resulting in shrinking coastal lands where many people live), health, unemployment of the young, catastrophic risks from technology, armed conflict, poverty, over-population, and antagonism against migrants. These problems compete. Efforts to reduce global warming with clean energy require land for wind farms and solar. Biofuels compete with agriculture. Rising oceans lead to migration.<br /><br />Many people concerned with these problems view them in isolation. Some groups are concerned with food. Others with biodiversity. Others with energy. <a href="http://arstechnica.com/science/news/2011/03/how-to-feed-9-billion-people-the-future-of-food-and-farming.ars">Here</a> is an example.<br /><br />This isolation of problems closes off the possibility of solving some problems as a side-effect of solving others. It may be more efficient to solve the problem of immigration than to prevent the rise of the oceans. It may be easier to reduce the growth of population than to exploit more energy, after some point in doing the latter. 
The idea that each problem requires its own solution is much like the <a href="http://www.leighthompson.com/media/negotiating_daily_life.htm">fixed-pie bias</a> in negotiation, where people tend to negotiate one issue at a time rather than look for trade-offs among issues.<br /><br />When problems are isolated, the focus on one makes all the others seem uncontrollable, out of the picture. Discussions of food and water always begin with the point that population growth is going to cause problems, but then they view population growth as something that is uncontrollable.<br /><br />In fact, it would be relatively cheap to simply meet women's unmet demand for birth control, as a start. Other steps, only slightly more costly, such as improving the education of girls, would reduce desired family size. Part of the problem is religion. It is opposition from the religious right that prevents the U.S. government from spending more on international family planning, for example. The idea of promoting birth control has become so politically incorrect that even the organizations that do it, such as <a href="http://www.engenderhealth.org">Engender health</a>, do not emphasize it in public.<br /><br />This is just one example. When problems are interrelated, the solution to any one of them helps all the others, and the search for efficient solutions should take this into account.Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com0tag:blogger.com,1999:blog-7219361852962974424.post-40261651213121220702011-04-08T04:53:00.001-07:002011-04-08T04:57:06.367-07:00AbortionPolitical disputes about abortion usually involve repetitions of bad arguments, empty slogans, and upsetting images. The assumption seems to be that reason is irrelevant and the important thing is to motivate those who are already convinced.
It is as if everyone has accepted the theory that moral reasoning is post-hoc rationalization and that moral disputes are intellectually no different from sports events in which fans cheer for one side or the other.<br /><br />Here is a very brief summary of the argument for abortion, drawing heavily on the work of Peter Singer (especially in "Practical ethics"). The point is that reasoning is relevant. (I discuss relevant issues at greater length elsewhere, particularly <a href="http://www.sas.upenn.edu/~baron/papers.htm/normu.html">here</a>.)<br /><br />Abortion is indeed killing, but that does not settle the issue. Nor does it settle the issue to say that the fetus is "human", since this still begs the question of when and why it is wrong to kill a human. The following reasons come to mind:<br /><br />First, killing a fetus is a harm to the parents if the fetus is wanted. This is an issue only if the abortion is disputed by (for example) the father. Usually this is not an issue, and is surely not the issue that riles the anti-abortion movement. When relevant, it is a family dispute.<br /><br />Second, the means of death can be painful. This is possibly an issue for late-term abortions, when the pain system is developed. The solution would be anesthesia for the fetus. The same argument would apply to the killing of animals. This too seems beside the point most of the time.<br /><br />Third, abortion prevents a stream of future experiences for the person who would be born. On balance these experiences will probably be positive, relative to not having them at all. This argument applies to animals as well as people. It applies to any choice that prevents a person (or animal) from existing, not just abortion but also birth control and abstinence from sex.
Carried to the limit, it would amount to a command to "be fruitful and multiply", until we reach the point where the world is so crowded that the totality of negative experiences resulting from an additional person is as great as the totality of positive ones. (Derek Parfit discusses this issue at length in "Reasons and persons".)<br /><br />Although this argument is interesting, I do not see why we should accept it. If we go one step back and ask why experiences are valued, we find that they are valued because people want them. That is, people have goals or wants for having good experiences (and, presumably, so do animals). So, when we create a person, we are in essence creating goals (or wants) and then satisfying them. But, if the goals do not yet exist, why is it a requirement of morality or rationality to create them just so that they could be satisfied?<br /><br />For example, why are you obliged to inculcate in me a taste for contemporary pop music? Even if it is true that, once I had the goal of listening to pop music, I would get positive experiences from listening to such music, it is possible that I do not want that goal. It might be inconsistent with my other goals. The Golden Rule thus implies that you have no particular obligation to create goals in me, because I may not want those goals. Nor is it necessarily rational for me to create such a goal or taste in myself. Whether I should do this depends on my other goals.<br /><br />We have no moral obligation to create beings just so that we can satisfy the goals that come into existence. The Golden Rule does not apply here because the "others" in "do unto others" do not exist. It is the goals of those who exist that determine whether it is rational or moral to create new goals.<br /><br />A final issue is potential. It is true that a fetus is a potential adult human. So is every sperm-egg pair, and it is hard to see why their physical joining together is relevant to the argument concerning potential.
It is just a salient step in the pathway. But the argument from potential raises the same questions as the argument from experience. It is not clear why it is moral or rational to create new people and new goals, if doing so is inconsistent with our current goals.<br /><br />Some of our current goals, in fact, may imply that limiting births is a good thing. We want humans to have good lives once they come to exist. They have goals and wants then. (Of course, we also want enough of them to ensure the long-term survival of humanity, but, arguably, long-term survival is more likely if the rate of population growth is slower than it is now.) We want particular children to have good lives. If we are going to limit family size, then we want to time the bearing of children so that they will be maximally wanted when they arrive, and maximally likely to develop well.<br /><br />If this sounds like an argument for "abortion as a method of birth control", it is. But it does not imply that abortion is just as good as any other method of birth control. Clearly abortion has many disadvantages, including emotional effects. But these do not make it worse than no birth control at all. And often, as in the case of fetuses with serious genetic impairments, pregnancy complications, or failure of other methods, abortion is not the method of choice, but a fall-back.Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com1tag:blogger.com,1999:blog-7219361852962974424.post-41069180758601256552011-03-22T07:39:00.000-07:002013-09-17T13:52:39.641-07:00Is the act-omission distinction strategic? Comment on DeScioli et al. (2011).In the current (March 22?) issue of Psychological Science, DeScioli et al.
<a href="http://pss.sagepub.com/content/early/2011/03/01/0956797611400616.full"> report</a> a nice demonstration of how people take punishment into account in choosing how they will go about hurting someone else.*<br /><br />In the main study, subjects could divide a dollar as (90,10), (10,90), or (85,0). The first number represents the divider's outcome, and the second represents another person's outcome, in cents. The (85,0) condition was the result of letting a timer run out, hence doing nothing. The (90,10) and (10,90) conditions were explicit choices.<br /><br />The main result was that subjects often chose (90,10) when they did not expect to be punished by anyone, but they chose (85,0) much more when they knew that they could be punished by a third party.<br /><br />The authors explain this result in terms of the difference between acts and omissions. They consider the (85,0) response to be an omission. Punishment for (85,0) was (demonstrably) less than for (90,10), and third-party punishers did think that the (90,10) option was worse. (Interestingly, the victims did not make a distinction, but the paper correctly points out that they may have been focusing more on outcomes than on the morality of the choice; such a focus would be consistent with the findings of <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0006699"> Cushman et al. (2009)</a>.)<br /><br />They go on to conclude that the bias toward harmful omissions over equally (or less) harmful acts could result from anticipation of punishment and thus be, in some sense, rational.<br /><br />I have several problems with the conclusions. First, the (85,0) option probably differs from the (90,10) option not just in being an omission but also in being seen as less intentional. Letting a timer time out could result from inattention, confusion, or difficulty in deciding what to do. 
To the extent that a harmful choice is unintentional, it should not be punished as severely (by any account, including the utilitarian accounts that DeScioli et al. dispute -- it is, after all, less deterrable if it is an accident). Thus, the choice of letting the timer time out could be understood as obfuscating the intent to gain at someone else's expense, rather than as an omission. The experiments had no manipulation check for equality of intention. By contrast, <a href="http://www.sas.upenn.edu/~baron/papers.htm/oc.html">Spranca et al. (1991)</a>, which DeScioli et al. cite as an example of what they are going against, took great pains to show that intention was equated between acts and omissions, so that they could truly conclude that the perceived moral difference between acts and omissions could not be explained in terms of a normatively correct distinction based on intention. Thus, the experiments may have nothing to do with the act/omission distinction in its pure form, with intention equated. When I have made normative arguments about the moral relevance of the act-omission distinction, the pure form is the only relevant one. We know nothing, yet, about whether the distinction that people make in this form is influenced by anticipation of punishment.<br /><br />On the other hand, the claim that omission bias in general could be explained by anticipation of punishment is roundly refuted by many examples, including those cited by DeScioli et al., yet ignored in their discussion. For example, several of those papers show omission bias (my word for the distinction) in vaccination decisions, where, in fact, punishment is more likely for the omission, not vaccinating, than for the act. In other cases, it is clear that punishment is out of the question because the intent to do harm would not be detected, and the possibility of punishment is not mentioned (just as in the control condition of DeScioli et al.).<br /><br />Note that DeScioli et al. 
cannot test for a bias in the absence of punishment, because their "omission" condition (85,0) is actually worse for both people affected, thus failing to equate the two conditions.<br /><br />The penultimate paragraph says: "Our experiments are relevant to a broader issue about how traditional normative theories are used in psychology. Previous work labeled the omission effect as a bias because people’s decisions violated normative theories. Although normative theories can be useful for applications such as policy making, the present work illustrates an important limitation. By measuring performance against normative theories, researchers misleadingly label strategic decision making—choosing in a way that takes into account how other people will respond—as error (Cosmides & Tooby, 1994; DeScioli & Kurzban, in press). This mischaracterization can preclude deeper investigation into the highly organized mental processes that regulate decisions in strategic environments."<br /><br />Now let us suppose (putting aside the above problems) that the paper had supported its empirical claim that the distinction between acts and omissions in choice was entirely the result of differences in anticipated punishment. So there is no bias in choice. But there is still a bias in third-party judgments. Can those too be explained as strategic? I don't see how, without assuming some bias somewhere in the system.<br /><br />*I am commenting here because Psychological Science does not publish critiques of this sort.Jon Baronhttp://www.blogger.com/profile/18323381384184068068noreply@blogger.com1