Tuesday, November 12, 2024

The dark triad of citizenship: parochialism, moralism, and closed-mindedness.

I have seen many arguments that political polarization is the result of intolerance on both sides, and that those of us opposed to MAGA supporters (those who accepted the ideology, most of whom voted for Trump) should try to understand them and look for common ground.

My view is that voters are thinking people, and many are thinking very badly, to the point of being immoral. Personality psychologists have a concept of the "dark triad" of personality traits: Machiavellianism, narcissism, and psychopathy. What holds these together is that each explains behavior that is fundamentally "malevolent", i.e., immoral. The study of the dark triad is thus the study of immorality.

I take the same attitude toward MAGA supporters. I have no interest in finding common ground with them any more than with wife beaters. I do want to understand them better, from a scientific point of view, in hopes of reducing their numbers in the future.

Most MAGA supporters were not good citizens. In several papers (summarized in "Social norms for citizenship", https://muse.jhu.edu/article/692751/pdf), I have outlined what I take to be the norms required for citizens to make democracy work for the greater good. These include cosmopolitanism, anti-moralism, and actively open-minded thinking (AOT). The absence of these constitutes a dark triad for citizenship: parochialism, moralism, and closed-mindedness.

On parochialism, I have argued that voting is not worthwhile if all you care about is self-interest, or even narrow national interest, although it is worthwhile if you care enough about everyone affected in the world, now and in the future. My latest attempt to publish a paper clarifying this argument was rejected by a philosophy journal roughly on the grounds that it was nothing new. Other philosophers write papers on the question of whether nationalism is as bad as racism. So far as I can see, they have not come up with a good argument that nationalism (properly defined and distinguished from various benign forms of national pride) is less bad. Yet the question seems not to occur to many voters.

At a minimum, people outside of a sharply defined circle of "Americans" should get some consideration. This includes potential immigrants and those they would leave behind. It strikes me as strange that much of the support for MAGA ideology comes from self-professed Christians, who seem to define their faith in terms of opposition to abortion -- only ambiguously supported in the Bible -- while ignoring the very explicit demand to help those who suffer, worth quoting from Matthew 25 (NIV):

[The King will say] "For I was hungry and you gave me something to eat, I was thirsty and you gave me something to drink, I was a stranger and you invited me in, I needed clothes and you clothed me, I was sick and you looked after me, I was in prison and you came to visit me.

"Then the righteous will answer him, 'Lord, when did we see you hungry and feed you, or thirsty and give you something to drink? When did we see you a stranger and invite you in, or needing clothes and clothe you? When did we see you sick or in prison and go to visit you?'

"The King will reply, 'I tell you the truth, whatever you did for one of the least of these brothers of mine, you did for me.'"

On AOT: before one acts in a way that affects others, it seems reasonable to think just a little about whether you are choosing the best option. Thinking in this case involves looking for reasons why you might be wrong. Sometimes this requires looking at what others have to say. It seems that MAGA voters upset about the price of food and housing reasoned simply that these problems arose under Biden, so Trump would solve them. They did not ask how he would do that, or what Biden did to create the problem. If they had asked, and had looked at how Trump's proposals were covered in news sources, they would have found good arguments that these proposals would likely make the situation worse.

Moralism, as I have used the term, is the attempt to impose questionable moral principles on others who question or oppose those principles on rational grounds. The principles are often justified by appeal to religious authority that must be assumed, on faith, to be correct. The principles may concern both behavior and motivation. Opposition to homosexuality, for example, often takes the form of regarding both homosexual behavior and homosexual desires as immoral. Such principles must be followed even when the consequences of following them -- the frustration of harmless desires -- are clearly worse. Moralistic principles are often at the center of political movements toward theocracy. Islamism in Iran is the clearest example, but the same tendency is found in Indian Hindutva and, notably, in American Christian nationalism, which is part of the MAGA coalition.

Just as the dark triad traits overlap and reinforce each other, the three dispositions I have listed do the same, as do their positive counterparts. Education for AOT could lead people to question the moral relevance of nationality, or at least make them more receptive to questions from others. And it could reduce their confidence in moralistic doctrines, given their weak foundation. Questioning of parochialism and moralism as principles could, if properly presented, serve as an example of reflective thought in general and thus encourage AOT.

In sum, I think that a lot of Trump's support came from thinking that is wrong and immoral. This sort of immorality does not call for punishment but rather prevention.

 

Saturday, October 5, 2024

Why Democrats are generally more consistent with utilitarianism

Recently I was asked about a study of politically tilted publications in the social sciences, including psychology. I started to look at my own publications. I realized that most of them are not political in the partisan sense, but many have concerned public policy issues. If they have had a tilt, it is strongly toward utilitarianism, but that is not a political party.

Given my current obsession with the looming U.S. election, I would like to say why I would support Democratic candidates at the national level even if Donald Trump were not a serious danger to the world. I think this view follows from utilitarianism.

The essence of the modern Democratic Party is still the views of F. D. Roosevelt. He thought it was the responsibility of government to improve the welfare of "the people", and these people even included foreigners. This view has been at the core of Democratic politics all along.

The role of government is justified in utilitarianism in several ways. First, government can solve social dilemmas by penalizing defectors. Much of the law is about these penalties. For example, the government maintains the institution of private property by punishing those defectors who try to steal or destroy it for personal benefit. But governments also establish environmental regulations, safety regulations, laws about disclosure, and so on. These regulations are sometimes controversial, and Democrats usually favor them. (Regulations can be excessive, and Republicans in the past had a useful function of trying to fix them.)
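As a made-up illustration of the point about defectors (the payoffs, penalty, and moves here are all hypothetical, a minimal sketch rather than anything from the political-science literature): in a two-person social dilemma, defection is the best reply to anything the other player does, until a sufficient fine reverses this.

    # A minimal sketch, assuming hypothetical payoffs: a fine on defection
    # turns a social dilemma into a game where cooperation wins.
    PENALTY = 3  # assumed size of the government's fine for defecting

    # payoff[my_move][other_move] = my payoff (made-up numbers)
    payoff = {
        "cooperate": {"cooperate": 3, "defect": 0},
        "defect":    {"cooperate": 5, "defect": 1},
    }

    def best_move(other_move, penalty=0.0):
        # Pick the move with the higher payoff, net of any fine on defection.
        return max(payoff, key=lambda m: payoff[m][other_move]
                   - (penalty if m == "defect" else 0))

    print(best_move("cooperate"), best_move("defect"))                    # defect defect
    print(best_move("cooperate", PENALTY), best_move("defect", PENALTY))  # cooperate cooperate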

Governments also redistribute money from rich to poor (to varying extents). This improves total utility, up to a point, because money has more utility for the poor than for the rich. Once people spend about $100,000 per person on the basic necessities, additional spending tends to go toward luxuries that provide less utility per dollar.
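A toy calculation of this point, assuming (as a simplification that I do not defend here) that utility is the logarithm of money; the incomes and transfer amount are made up:

    # A minimal sketch, assuming logarithmic utility of money. Moving
    # $1,000 from a rich person to a poor person raises total utility,
    # because the poor person's gain in utility exceeds the rich one's loss.
    import math

    rich, poor, transfer = 500_000, 20_000, 1_000  # made-up incomes

    total_before = math.log(rich) + math.log(poor)
    total_after = math.log(rich - transfer) + math.log(poor + transfer)
    print(total_after > total_before)  # True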

Redistribution may be accomplished in many ways, including progressive taxation, direct handouts to some poor people (negative taxes), and direct provision of services such as health care, education and housing that provide the means for the poor to earn money themselves. Democrats favor these efforts.

Another redistributive function of government is more subtle, perhaps more a function of social norms than of laws or regulations. It concerns the uses of labor. When the distribution of spending power is extremely unequal, the "rich" (those with lots of it) are free to spend money on goods and services that provide very little utility, since they have excess money once they have acquired the basic things that everyone would want. With more redistribution, labor would be more efficient in terms of utility production, rather than production of "economic value." The latter is distorted. A $10,000 Rolex watch has nearly 300 times the economic value of my $35 Timex, and maybe more like 1.25 times the utility, if that. But that Rolex requires lots of labor, not just in the production facilities in Switzerland but also in the mining and selection of materials. We thus have hundreds of people, some with considerable technical skills, working to produce very little utility. The same may be said of tax lawyers who help rich people minimize their tax bills. Probably some of these people could do a lot more good as high-school teachers, or as civil servants who craft the laws and regulations that the lawyers have to work around. With fewer rich people, and with a social norm of doing your share without shirking, and of not being too much of a "pig" about the way you spend money, total utility would increase. Democrats tend to support this social norm, while Republicans tend to favor a norm of flaunting wealth.
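To put numbers on the distortion, using my own guesses above: the Rolex costs $10,000/$35, or roughly 286 times as much as the Timex, but provides at most 1.25 times the utility. Per dollar, then, it yields about 1.25/286, or less than half of one percent, of the utility the Timex provides.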

The current Democratic party is less isolationist than the current Republican party. This was not always true. But isolationism in general means giving little moral weight to foreigners. Utilitarians are not the only ones who think that all people deserve something approaching equal consideration. The two big issues in the current election are global.

First is climate change. The U.S. cannot solve the problem on its own, but we can at least do our share and set an example for others. We are doing that, more or less. But Trump would remove us from all international agreements and repeal many of the laws and regulations designed to speed the energy transition away from fossil fuels.

Second is Putin. Trump shows every sign of allowing Putin to win enough in Ukraine to convince the Russian and Chinese people that further efforts to build empires through military force are likely to succeed at bearable cost.

It is disturbing that these two issues play such a small role in the campaign. Voters say they are concerned about inflation, so opinion polls ask about inflation but not about climate or Ukraine, and news reports don't mention these as issues. And neither do the Democratic candidates. People get the idea that we are supposed to be concerned about local issues. But perhaps some of those undecided voters just haven't thought about the big world issues. It may seem difficult to decide who will do better at bringing down prices. But it is not difficult at all to decide how to vote if you care just a little about the rest of the world.


Thursday, October 3, 2024

How voters think about voting

In several papers, including one recent paper that has been rejected by several philosophy journals, I have analyzed people's approach to voting in terms of whether they consider the good of "the world" (cosmopolitan voting), their nation, or themselves. It seems that all three ways of thinking exist, perhaps even co-exist within the same person. I have argued that, for most people, voting out of self-interest alone is irrational, but cosmopolitan voting is usually rational if the voter cares somewhat about the good of other people. This argument is apparently not a great insight for philosophers, and that is why the paper keeps getting rejected. However, the rationality of cosmopolitan voting, and the irrationality of self-interest voting, are apparently not ideas that most voters, politicians and journalists have considered.

In looking at what people say about voting, especially "undecided" voters in the upcoming U.S. election, I see another classification of how people think about voting for candidates, cutting across the one just described. This distinction is based on what people attend to.

The stated or likely policies of the candidate are one focus of attention. Candidates state policies to appeal to voters who think this way. The stated policies are usually selected for their appeal to voters rather than to experts. Policies may still be valuable indications of a candidate's general approach to issues. This focus can be harmful when voters make some single issue, such as inflation or support for Israel, the basis of their decisions.

Another focus is the character of the candidate, especially the traits that would make a good office holder, according to the voter. These are not necessarily the same traits that would make a good co-worker or spouse. These voters might feel that policy statements are not very predictive of what will actually happen, and that it is better to choose the sort of person who can deal with unforeseen problems. For example, some Trump supporters think he will be "tough" with other countries (ignoring prior harmful effects of such toughness, such as its effect on Iran's nuclear ambitions).

A third focus is blind empiricism. Voters look at how things were (for themselves, for their nation, or for the world, but mostly for themselves) under the candidate's previous administration, or that of the candidate's party. ("Things were better for me when Trump was in power.") In the long run, this strategy might be slightly better than chance at picking good candidates by any criterion. But I think it actually represents a kind of laziness, an unwillingness to consider either policies or character.

More generally, people don't seem to have given much thought to the question of how they should approach citizenship. This question is not part of the civics curriculum. The right to vote, which comes with citizenship, implies a duty to vote thoughtfully, and, more generally, to take responsibility for the actions of one's government. (The utilitarian justification for this principle of duty as a general social norm is clear.) For national elections, these actions affect residents of the nation, foreigners, and people not yet born.


Tuesday, June 18, 2024

"Children of a modest star"

I do not have time to read many books these days. But I managed to finish "Children of a modest star" (Jonathan S. Blake and Nils Gilman), which was highly recommended by a review in Science: https://www.science.org/doi/10.1126/science.ado2345. (The title is from a poem by W.H. Auden.)

I would say that it is about political theory, or political philosophy, but the authors draw on their extensive knowledge of intellectual history. They argue that the idea of the nation state as the main container of sovereignty is a fairly recent one that is already under attack. This idea appealed to me because I have written a fair bit about the vices of nationalism and the virtues of cosmopolitanism. But the book goes beyond this polarity by arguing for a principle of "subsidiarity", which holds that problems should be handled by the lowest political subdivision capable of handling them, which is often sub-national. It still argues for a "planetary" level of government, with enforcement power, as necessary for such issues as climate change, preparedness for pandemics, and biodiversity. It is not, and does not claim to be, a fully worked-out plan for how things would work in the future it proposes. It presents a rough vision of where we should be headed, and even of how we might get from here to there.

Within the community of card-carrying utilitarians, I have been suspicious of "longtermism" as recommended by William MacAskill and others. It is too easy to come up with some fantasy about the long-term future, like that of one person I knew who argued that practically all of our extra resources should be spent trying to find ways to stop a large asteroid from hitting the earth, since, if we don't solve this problem, it is inevitable that this will happen eventually. (I think that we humans now have the capacity to deal with this problem, although I don't know how large an asteroid we could deflect.) Thus, I have thought that the most sensible utilitarian approach to government is to look for incremental improvements in the situation, without worrying too much about their long-term effects, which are difficult to predict. It makes sense to reduce CO2 emissions even if it turns out that, in a few decades, we will use fusion power from a single site to pull CO2 right out of the atmosphere.

Still, I found the book an answer to the question: if you want to consider the long-term future, what aspect of it is most relevant? The answer is to look at effects on governance.

The prose is incredibly good. Almost every page has something you could put on a T-shirt. Although you can read through the 215 pages of text as if this were simply a political manifesto without much anchoring in prior literature, it has 69 pages of footnotes at the end. All the ideas are credited to the writings that inspired or preceded them.


Tuesday, April 16, 2024

Existential risks from AI?

A recent Policy Forum article in Science argues for banning certain uses of artificial intelligence (AI) (Michael K. Cohen et al., Regulating advanced artificial agents. Science 384, 36-38 (2024). DOI: 10.1126/science.adl0625). The authors particularly worry about agents that use reinforcement learning (RL).

RL agents "receive perceptual inputs and take actions, and certain inputs are typically designated as 'rewards.' An RL agent then aims to select actions that it expects will lead to higher rewards. For example, by designating money as a reward, one could train an RL agent to maximize profit on an online retail platform." The authors worry that "a sufficiently capable RL agent could take control of its rewards, which would give it the incentive to secure maximal reward single-mindedly" by manipulating its environment. For example, "One path to maximizing long-term reward involves an RL agent acquiring extensive resources and taking control over all human infrastructure, which would allow it to manipulate its own reward free from human interference."

I may be missing something here, but it seems to me that the authors mischaracterize RL. In psychology, reinforcement learning does not require that the organism (or machine) place any value on reinforcement. The process would work just as well if a reinforcement ("reward") were simply an increase in the probability of the response that led to it, and a "punishment" were simply a decrease. The organism does not "try" to seek rewards or avoid punishments in general. It just responds to stimuli (situations) from a menu of possible responses, each with some response strength. The strength of a response, relative to alternative responses, determines its probability of being emitted. "Reward" and "punishment" are terms that result from excessive anthropomorphization.
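To make the distinction concrete, here is a minimal sketch of the kind of learner I have in mind (my own illustration, with made-up responses and parameters, not a model from the Science article). Nothing in it represents "reward" as something to be sought; reinforcement is just whatever increases the strength of the response that preceded it.

    # A "law of effect" learner: each response has a strength, and
    # reinforcement simply increments the strength of the response that
    # was just emitted. The learner never computes or seeks expected
    # reward. (Responses and parameters are made up.)
    import random

    strengths = {"press_lever": 1.0, "pull_chain": 1.0, "peck_key": 1.0}
    RATE = 0.2  # assumed learning rate

    def emit_response():
        # Probability of each response is proportional to its strength.
        r = random.uniform(0, sum(strengths.values()))
        for response, s in strengths.items():
            r -= s
            if r <= 0:
                break
        return response

    def adjust(response, outcome):
        # outcome is +1 (reinforcement) or -1 (punishment); it changes
        # response strength but is not a quantity the learner values.
        strengths[response] = max(0.01, strengths[response] + RATE * outcome)

    # The environment reinforces only lever presses; over trials that
    # response comes to dominate, with no goal-seeking anywhere.
    for _ in range(200):
        resp = emit_response()
        adjust(resp, +1 if resp == "press_lever" else -1)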

It would of course be possible to build an AI system with a sense of self-interest, in which positive reinforcements were valued and purposefully sought, independently of their role in shaping behavior. But this system would not do any better at the task it is given. It might do worse, because it could be distracted by searches for other sources of "reward", as Cohen et al. suggest.

If, for some reason, AI engineers thought that a sense of self-interest would be useful, they could design a system with such a sense. It would need to represent, for each possible outcome, the outcome's overall consistency with long-term goals (including the goal of having good experiences). And it would have to represent those goals, along with processes for changing them and their relative strengths.

Engineers could also build in a sense of morality, so that a decision-making AI system would, like most real people, consider effects on others as well as on the self. In general, options would be favored more when they had better (or less bad) outcomes for others, and when they had better (or less bad) outcomes for the self.  Effects on others would be estimated in the same way as effects on the self, in terms of the consistency of outcomes with long-term goals.  Such a sense of morality could even work more reliably than it does in humans. The functional form of the self/others trade-off could be set in advance, so that psychopathy, which gives too little relative weight to effects on others, would be avoided.
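A minimal sketch of what I mean, with made-up options, utilities, and weights (my illustration, not a proposal from Cohen et al.): the weight on others' outcomes is fixed in advance, so the trade-off cannot drift toward psychopathy.

    # Options are evaluated by a weighted sum of utility for the self and
    # utility for others, where "utility" means consistency of the outcome
    # with long-term goals. The weight on others is set in advance.
    W_OTHERS = 1.0  # assumed: others' outcomes count equally with one's own

    # option -> (utility for self, utility for others); numbers are made up
    options = {
        "keep_resource":  (5.0, -3.0),
        "share_resource": (2.0,  4.0),
    }

    def choose(options):
        return max(options, key=lambda o: options[o][0] + W_OTHERS * options[o][1])

    print(choose(options))  # share_resource: 2 + 4 beats 5 - 3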

If self-interest is to be included, then morality should be included too. It is difficult to see why an engineer would intentionally build a system with self-interest unchecked by morality. That seems to be the sort of system that Cohen et al. imagine.


Algorithm aversion and AI

Recently many people have expressed concerns, some to the point of near panic, about recent advances in artificial intelligence (AI). They think AI can now do great harm, even to the point of ending civilization as we know it. Some of these harms are obvious and also difficult to prevent. Autocrats and other bad actors -- such as people who now create phishing sites or ransomware -- will use AI software to do their jobs better, just as governments, scientists, law enforcers, and businesses of all sorts will do the same for their respective jobs. Identification of individuals, for purposes of harassing them, will become easier, just as the Internet itself made this, and much else, good and bad, easier. Other technologies, such as the telephone, postal system, and telegraph, have also been used for nefarious purposes (as in "wire fraud" and "mail fraud"). The white hats will continue to fight the black hats, often with the same weapons.

Of special concern is the use of AI to make decisions about people, such as whether to give them loans, hire them for jobs, admit them to educational institutions, incarcerate them, treat them for illness, or cover the cost of such treatment. The concerns seem to involve two separate problems: one is that AI systems make errors; the other is that they could be biased against groups that already suffer from the effects of other biases, such as Blacks in the U.S.

The problem of errors in AI is part of another problem that has a large literature in psychology, beginning with Paul Meehl's "Clinical versus statistical prediction" (1954) and followed up by Robyn Dawes, Hal Arkes, Ken Hammond, Jason Dana, and many others. A general conclusion from that literature is that simple statistical models, such as multiple linear regression, are often more accurate at various classifications, such as diagnosing psychological disorders, than humans who are trained to make just such classifications and who make them repeatedly. This can be true even when the human has more information, such as a personal interview of a candidate for admission.

A second conclusion from the literature is that most people, including the judges and those who are affected, seem to prefer human judgments to statistical models. Students applying to selective colleges or graduate programs, for example, want someone to consider them as a whole person, without relying on statistical predictors. The same attitudes come up in medical diagnosis and treatment, although the antipathy to statistical models seems weaker in that area. Note that most of these statistical models are so simple that they could be applied with pencil and paper by someone who remembers how to do arithmetic that way. Recent improvements in AI have resulted from the enhanced capacities of modern computers, which allow them to learn, from huge numbers of examples, how to make classifications correctly with much more complex formulas, so complex that the designers of the programs do not know what the formulas are. These models are better than those that can be applied on a small piece of paper, but the issues are much the same. If anything, the issues are more acute exactly because the models are better. If the older, simpler models were better than humans, then these new ones are better still.
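For concreteness, here is the kind of pencil-and-paper model at issue, in the spirit of Dawes's "improper linear models" (the predictors, applicants, and unit weights are all made up for illustration):

    # Rank applicants by a unit-weighted sum of standardized predictors,
    # the sort of simple model the clinical-vs-statistical literature
    # studied. All data here are fabricated.
    def standardize(xs):
        mean = sum(xs) / len(xs)
        sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
        return [(x - mean) / sd for x in xs]

    gpa = [3.1, 3.9, 2.8, 3.5]    # hypothetical applicants
    test = [150, 165, 148, 158]   # hypothetical test scores

    # Unit weights: just add the standardized predictors and rank.
    score = [g + t for g, t in zip(standardize(gpa), standardize(test))]
    ranking = sorted(range(len(score)), key=lambda i: -score[i])
    print(ranking)  # applicant indices, best first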

Note that, although some studies fail to find a preference for humans over computers on average, such results do not mean that all the subjects are indifferent between humans and computers. Rather, they reflect differences among the subjects. The average result can favor computers over humans even if 40% of the subjects are opposed to computers. The existence of large minorities who oppose the use of AI can make adoption of AI models nearly as difficult as it would be if a majority were opposed, especially when the minority is vocal and organized.
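A made-up numerical example: if 60% of subjects rate the computer +1 (preferred to the human) and the other 40% rate it -1, the mean rating is (0.6)(+1) + (0.4)(-1) = +0.2. The average favors the computer even though two in five subjects are firmly opposed.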

AI models make errors. Before we reject or delay their use, we need to ask the fundamental question of all decision making: compared to what?  We often need to "accept error to make less error" (as Hillel Einhorn put it).

The same question is relevant for the bias problem. I put aside questions about how bias should be measured, and whether some apparent biases could result, fully or partially, from real differences in the most relevant populations. When AI tools seem to be biased, would the same be true when AI is not used? The bias might be larger still when decisions are made by individual human judges, or by some simpler formula.


Sunday, December 3, 2023

More on why I am not a fan of pre-registration

This is a draft follow-up to my earlier post on prediction, accommodation, and pre-registration: https://judgmentmisguided.blogspot.com/2018/05/prediction-accommodation-and-pre.html

I argued there that some of the appeal of pre-registration is the result of a philosophical mistake: the idea that prediction of a result is better than post-hoc accommodation of the result once it is found, holding constant the fit of the result to its explanation.

Here I comment on pre-registration from the perspective of editor of Judgment and Decision Making, a journal concerned largely with applied cognitive psychology. I try to answer some common points made by the defenders of pre-registration.

1. As editor, I find myself arguing with authors who pre-registered their data analysis, when I think that their pre-registration is just wrong. Typically, the pre-reg (pre-registration document) ignores our statistics guidelines at https://jbaron.org/journal/stat.htm. For example, it proposes some sort of statistical control, or a test for removable interactions. Although it is true that authors do not need to do just what they say in the pre-reg, they must still explain why they deviated, and some authors still want to fully report both their pre-registered analysis and what I think is the correct one.

I don't see why pre-registration matters here. For example, one common issue is what to exclude from the main analysis. Often the pre-reg specifies what will be excluded, such as the longest 10% of response times, but I often judge this idea to be seriously inferior to something else, such as using a log transform. (The longest times may even reflect the most serious responding, and their outsized influence on statistics can usually be largely eliminated by transformation.) The author might argue that both should be reported because the 10% idea was thought of beforehand. But does it matter when you think of it? If it is such an obvious alternative to using the log, then you could think of it after collecting the data. (This is related to my blog post mentioned earlier.) If the main analysis will now be based on logs, it doesn't even matter if the decision to use 10% was thought of after finding that it yielded clearer results (p-hacking).
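A small sketch of the contrast, with simulated response times (the data and the 10% rule are made up to match the example above):

    # Compare two treatments of long response times: dropping the longest
    # 10% versus keeping everything and taking logs. (Simulated data.)
    import math, random

    random.seed(1)
    rts = [random.lognormvariate(0, 0.5) for _ in range(100)]  # skewed RTs

    # Pre-registered rule: exclude the longest 10% of response times.
    trimmed = sorted(rts)[:90]
    mean_trimmed = sum(trimmed) / len(trimmed)

    # Alternative: log-transform, which shrinks the influence of the long
    # tail without discarding the slow (possibly most serious) responders.
    mean_log = sum(math.log(t) for t in rts) / len(rts)

    print(mean_trimmed, mean_log)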

2. It may be argued that pre-registration encourages researchers to think ahead. It might do that, but the effect would be subtle, since it mostly prompts thinking about issues that would be considered anyway.

The most common failure to think ahead is to neglect alternative explanations of an expected result. You can find that in pre-registrations as well as in submitted papers. Maybe pre-registration helps a little, like a nudge. But the most common alternative explanations I see are things like reversed causality (mediator vs. DV), or third-variable causality, in mediation analysis. Pre-regs sometimes propose mediation analyses without consideration of these potential problems. Another common alternative explanation is that interactions are due to scaling effects (hence "removable"). I have never seen anyone think of this in advance. Most people haven't heard of this problem (despite Loftus's 1978 paper in Memory and Cognition). Nor have they heard of the problem with statistical control (again pointed out decades ago, by Kahneman among many others), which they also put in pre-regs.

3. Does pre-registration protect against p-hacking anyway? Psychology papers are usually multi-study. You can pre-register one study at a time, and that is what I usually (always?) see. So you don't have to report the related studies you did that didn't work, even if you pre-registered each one, although honest reporting would include them anyway. This is a consequence of the more general problem that pre-registration does not require making the results public, whether the study works or not. Unlike some clinical trials, you can pre-register a study, do it, find that the result fails to support the hypothesis tested, and put the study in the file drawer. In principle, you could even pre-register two ways of doing the same study or analysis and then refer to the pre-registration that fits better when you write the paper. (I suspect that this has never happened. Unlike failing to report studies that failed, it would probably be considered unethical. But, if a journal starts to REQUIRE pre-registration, the temptation might be much greater.)

4. What do you do to detect p-hacking without a pre-reg? I ask whether the analysis done is reasonable or whether some alternative approach would be much better. If a reasonable analysis yields p=.035 for the main hypothesis test, this is a weak result anyway, and it doesn't matter whether it was chosen because some other reasonable analysis yielded p=.051. Weak results are often so strongly consistent with what we already know that they are still very likely to be real. If they are surprising, it is time to ask for another study. Rarely, I find it helpful to look at the data; this sometimes happens when the result is reasonable but the analysis looks contrived, so I wonder what is going on.

Pre-registration inhibits the very helpful process of looking at the data before deciding how to proceed with the analysis. This exploration is so much a part of my own approach to research that I could not possibly pre-register anything about data analysis. For example, in determining exclusions I often look at something like the distribution of (mean log) response times for the responses to individual items. I often find a cluster of very fast responders, separate from the rest. Sometimes the subjects in these clusters give the same response to every question, or their responses are insensitive to compelling variations that ought to affect everyone. I do this before looking at the effects of removing these subjects on the final results.
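As an illustration of this sort of check (the data and cutoff are invented; in practice I look at the distribution rather than applying a fixed rule):

    # Compute each subject's mean log response time and flag a cluster of
    # very fast responders for closer inspection. (Fabricated data.)
    import math

    rts_by_subject = {
        "s1": [2.1, 3.4, 2.8],
        "s2": [0.4, 0.3, 0.5],   # suspiciously fast on every item
        "s3": [2.6, 1.9, 3.1],
    }

    mean_log = {s: sum(math.log(t) for t in ts) / len(ts)
                for s, ts in rts_by_subject.items()}

    # A cutoff stands in here for the visual judgment of where the fast
    # cluster separates from the rest of the distribution.
    CUTOFF = math.log(1.0)  # assumed: mean RT under one second
    fast_cluster = [s for s, m in mean_log.items() if m < CUTOFF]
    print(fast_cluster)  # ['s2']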

5. It seems arrogant to put your own judgment ahead of the authors'.

When it comes to judging other people's papers as editor, I think that the relationship between author and editor is not one of equality. I do not need to give equal weight to the author's judgment as reflected in the pre-reg, just as I do not need to give equal weight to the reviewers' opinions and my own.

When I handle a paper, I am the editor. It is my job to enforce my own standards, not to enforce some consensus in which everyone is equal. (I see no point in that when the apparent consensus produces so much junk. "Peer reviewers" are in such demand that any warm body will do. The situation is worst in grant-review panels, which usually don't have an "editor" in charge.) Some editors are better than others. There is nothing I can do about that. (Two of the best were Ray Nickerson and Frank Yates, both of whom died recently.) Journal editors are like judges in the legal system, CEOs of corporations, or deans of universities. They are given special status. We hope they live up to their status.