Saturday, October 5, 2024

Why Democrats are generally more consistent with utilitarianism

Recently I was asked about a study of politically tilted publications in the social sciences, including psychology. I started to look at my own publications. I realized that most of them are not political in the partisan sense, but many have concerned public policy issues. If they have had a tilt, it is strongly toward utilitarianism, but that is not a political party.

Given my current obsession with the looming U.S. election, I would like to say why I would support Democratic candidates at the national level even if Donald Trump were not a serious danger to the world. I think this view follows from utilitarianism.

The essence of the modern Democratic Party is still the views of F. D. Roosevelt. He thought it was the responsibility of government to improve the welfare of "the people", and these people even included foreigners. This view has been at the core of Democratic politics all along.

The role of government is justified in utilitarianism in several ways. First, government can solve social dilemmas by penalizing defectors. Much of the law is about these penalties. For example, the government maintains the institution of private property by punishing those defectors who try to steal or destroy it for personal benefit. But governments also establish environmental regulations, safety regulations, laws about disclosure, and so on. These regulations are sometimes controversial, and Democrats usually favor them. (Regulations can be excessive, and Republicans in the past had a useful function of trying to fix them.)

Governments also redistribute money from rich to poor (to varying extents). This improves total utility, up to a point, because money has more utility for the poor than for the rich. Once people spend about $100,000 per person on the basic necessities, additional spending tends to go toward luxuries that provide less utility per dollar.
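As a toy illustration of this diminishing-marginal-utility argument (my own sketch, with made-up numbers and the common assumption that utility is roughly logarithmic in income), a transfer from a rich person to a poor person raises total utility:

```python
import math

def utility(income):
    # A common (but not the only) assumption: utility is logarithmic in income,
    # so each extra dollar matters less the more you already have.
    return math.log(income)

rich_income, poor_income, transfer = 1_000_000, 20_000, 10_000

before = utility(rich_income) + utility(poor_income)
after = utility(rich_income - transfer) + utility(poor_income + transfer)

# The poor person's gain in utility exceeds the rich person's loss,
# so the transfer increases total utility (up to the point of equality).
print(f"Change in total utility: {after - before:+.3f}")
```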

Redistribution may be accomplished in many ways, including progressive taxation, direct handouts to some poor people (negative taxes), and direct provision of services such as health care, education and housing that provide the means for the poor to earn money themselves. Democrats favor these efforts.

Another redistributive function of government is more subtle, perhaps more a function of social norms than of laws or regulations. It concerns the uses of labor. When the distribution of spending power is extremely unequal, the "rich" (those with lots of it) are free to spend money on goods and services that provide very little utility, since they have money to spare once they have bought the basic things that everyone would want. With more redistribution, labor would be more efficient in terms of utility production, rather than production of "economic value." The latter is distorted. A $10,000 Rolex watch has nearly 300 times the economic value of my $35 Timex, and maybe more like 1.25 times the utility, if that. But that Rolex requires lots of labor, not just in the production facilities in Switzerland but also in the mining and selection of materials. We thus have hundreds of people, some with considerable technical skill, working to produce very little utility. The same may be said of tax lawyers who help rich people minimize their tax bills. Probably some of these people could do a lot more good as high-school teachers, or as civil servants who craft the laws and regulations that the lawyers have to work around. With fewer rich people, and with a social norm of doing your share without shirking and not being too much of a "pig" about the way you spend money, total utility would increase. Democrats tend to support this social norm, while Republicans tend to favor a norm of flaunting wealth.

The current Democratic party is less isolationist than the current Republican party. This was not always true. But isolationism in general means giving little moral weight to foreigners. Utilitarians are not the only ones who think that all people deserve something approaching equal consideration. The two big issues in the current election are global.

First is climate change. The U.S. cannot solve the problem on its own, but we can at least do our share and set an example for others. We are doing that, more or less. But Trump would remove us from all international agreements and repeal many of the laws and regulations designed to speed the energy transition away from fossil fuels.

Second is Putin. Trump shows every sign of allowing Putin to win enough in Ukraine that the Russian and Chinese people are convinced that further efforts to build empires through military force are likely to succeed with bearable costs.

It is disturbing that these two issues play such a small role in the campaign. Voters say they are concerned about inflation, so opinion polls ask about inflation but not about climate or Ukraine, and news reports don't mention these as issues. And neither do the Democratic candidates. People get the idea that we are supposed to be concerned about local issues. But perhaps some of those undecided voters just haven't thought about the big world issues. It may seem difficult to decide who will do better at bringing down prices. But it is not difficult at all to decide how to vote if you care just a little about the rest of the world.


Thursday, October 3, 2024

How voters think about voting

In several papers, including one recent paper that has been rejected by several philosophy journals, I have analyzed people's approach to voting in terms of whether they consider the good of "the world" (cosmopolitan voting), their nation, or themselves. It seems that all three ways of thinking exist, perhaps even co-exist within the same person. I have argued that, for most people, voting out of self-interest alone is irrational, but cosmopolitan voting is usually rational if the voter cares somewhat about the good of other people. This argument is apparently not a great insight for philosophers, and that is why the paper keeps getting rejected. However, the rationality of cosmopolitan voting, and the irrationality of self-interest voting, are apparently not ideas that most voters, politicians and journalists have considered.

In looking at what people say about voting, especially "undecided" voters in the upcoming U.S. election, I see another classification of how people think about voting for candidates, cutting across the one just described. This distinction is based on what people attend to.

The stated or likely policies of the candidate are one focus of attention. Candidates state policies to appeal to voters who think this way. The stated policies are usually selected for their appeal to voters rather than to experts, but they may still be valuable indications of a candidate's general approach to issues. This strategy can be harmful when voters focus on some single issue, such as inflation or support of Israel, as the basis of their decisions.

Another focus is the character of the candidate, especially the traits that, according to the voter, would make a good office holder. These are not necessarily the same traits that would make a good co-worker or spouse. These voters might feel that policy statements are not very predictive of what will actually happen, and that it is better to choose the sort of person who can deal with unforeseen problems. For example, some Trump supporters think he will be "tough" with other countries (ignoring the harmful effects such toughness has had in the past, such as its effect on Iran's nuclear ambitions).

A third focus is blind empiricism. Voters look at how things were (for themselves, for their nation, or for the world, but mostly for themselves) under the candidate's previous administration, or that of the candidate's party. ("Things were better for me when Trump was in power.") In the long run, this strategy might be slightly better than chance at picking good candidates by any criterion. But I think it actually represents a kind of laziness, an unwillingness to consider either policies or character.

More generally, people don't seem to have given much thought to the question of how they should approach citizenship. This question is not part of the civics curriculum. The right to vote, which comes with citizenship, implies a duty to vote thoughtfully, and, more generally, to take responsibility for the actions of one's government. (The utilitarian justification for this principle of duty as a general social norm is clear.) For national elections, these actions affect residents of the nation, foreigners, and people not yet born.


Tuesday, June 18, 2024

"Children of a modest star"

 I do not have time to read many books these days. But I managed to finish "Children of a modest star" (Jonathan S. Blake and Nils Gilman), which was highly recommended by a review in Science: https://www.science.org/doi/10.1126/science.ado2345.  (The title is from a poem by W.H. Auden.)

I would say that it is about political theory, or political philosophy, but the authors draw on their extensive knowledge of intellectual history. They argue that the idea of the nation state as the main container of sovereignty is a fairly recent one that is already under attack. This appealed to me because I have written a fair bit about the vices of nationalism and the virtues of cosmopolitanism. But the book goes beyond this polarity by arguing for a principle of "subsidiarity", which holds that problems should be handled by the lowest political subdivision capable of handling them, which is often sub-national. It still argues for a "planetary" level of government, with enforcement power, as necessary for such issues as climate change, preparedness for pandemics, and biodiversity. It is not, and does not claim to be, a fully worked out plan for how things would work in the future it proposes. It presents a rough vision of where we should be headed, and even of how we might get from here to there.

Within the community of card-carrying utilitarians, I have been suspicious of "longtermism" as recommended by William MacAskill and others. It is too easy to come up with some fantasy about the long-term future, like that of one person I knew who argued that practically all of our extra resources should be spent trying to find ways to stop a large asteroid from hitting the earth, since, if we don't solve this problem, it is inevitable that this will happen eventually. (I think that we humans now have the capacity to deal with this problem, although I don't know how large an asteroid we could deflect.) Thus, I have thought that the most sensible utilitarian approach to government is to look for incremental improvements in the situation, without worrying too much about their long-term effects, which are difficult to predict. It makes sense to reduce CO2 emissions even if it turns out that, in a few decades, we will use fusion power from a single site to pull CO2 right out of the atmosphere.

Still, I found the book an answer to the question: If you want to consider the long term future, what aspect of it is most relevant? The answer is to look at effects on governance.

The prose is incredibly good. Almost every page has something you could put on a T-shirt. Although you can read through the 215 pages of text as if this were simply a political manifesto without much anchoring in prior literature, it has 69 pages of footnotes at the end. All the ideas are credited to the writings that inspired or preceded them.


Tuesday, April 16, 2024

Existential risks from AI?

A recent Policy Forum article in Science argues for banning certain uses of artificial intelligence (AI) (Michael K. Cohen et al., Regulating advanced artificial agents. Science 384, 36-38 (2024). DOI: 10.1126/science.adl0625). The authors particularly worry about agents that use reinforcement learning (RL).

RL agents "receive perceptual inputs and take actions, and certain inputs are typically designated as 'rewards.' An RL agent then aims to select actions that it expects will lead to higher rewards. For example, by designating money as a reward, one could train an RL agent to maximize profit on an online retail platform." The authors worry that "a sufficiently capable RL agent could take control of its rewards, which would give it the incentive to secure maximal reward single-mindedly" by manipulating its environment. For example, "One path to maximizing long-term reward involves an RL agent acquiring extensive resources and taking control over all human infrastructure, which would allow it to manipulate its own reward free from human interference."

I may be missing something here, but it seems to me that the authors mis-characterize RL. In psychology, reinforcement learning does not require that the organism (or machine) place any value on reinforcement. The process would work just as well if a reinforcement ("reward") were simply an increase in the probability of the response that led to it, and a "punishment" were simply a decrease. The organism does not "try" to seek rewards or avoid punishments in general. It just responds to stimuli (situations) from a menu of possible responses, each with some response strength. The strength of a response, relative to alternative responses, determines its probability of being emitted. "Reward" and "punishment" are terms that result from excessive anthropomorphization.
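Here is a minimal sketch of what I mean, in this behaviorist sense of reinforcement learning (my own illustrative code, not anything from the article). The "reward" does nothing except raise the strength of the response that produced it; the learner never represents reward as something to be sought:

```python
import random

# Response strengths for one stimulus (situation); values are arbitrary starting points.
strengths = {"press_lever": 1.0, "groom": 1.0, "wander": 1.0}

def choose_response():
    # A response's probability of being emitted is its strength
    # relative to the strengths of the alternatives.
    responses = list(strengths)
    weights = [strengths[r] for r in responses]
    return random.choices(responses, weights=weights)[0]

def learn(response, reinforced):
    # "Reinforcement" is just an increase in the strength of the response
    # that led to it; "punishment" is a decrease. Nothing is valued or sought.
    strengths[response] *= 1.2 if reinforced else 0.9

for _ in range(200):
    r = choose_response()
    learn(r, reinforced=(r == "press_lever"))  # only lever presses are reinforced

print(strengths)  # lever pressing comes to dominate without any goal of "getting reward"
```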

It would of course be possible to build an AI system with a sense of self-interest, in which positive reinforcements were valued and purposefully sought, independently of their role in shaping behavior. But this system would not do any better at the task it is given. It might do worse, because it could be distracted by searches for other sources of "reward", as Cohen et al. suggest.

If, for some reason, AI engineers thought that a sense of self-interest would be useful, they could design a system with such a sense. It would need a feature of each possible outcome indicating its overall consistency with long-term goals (including the goal of having good experiences). And it would have to represent those goals, and processes for changing these goals and their relative strengths.

Engineers could also build in a sense of morality, so that a decision-making AI system would, like most real people, consider effects on others as well as on the self. In general, options would be favored more when they had better (or less bad) outcomes for others, and when they had better (or less bad) outcomes for the self.  Effects on others would be estimated in the same way as effects on the self, in terms of the consistency of outcomes with long-term goals.  Such a sense of morality could even work more reliably than it does in humans. The functional form of the self/others trade-off could be set in advance, so that psychopathy, which gives too little relative weight to effects on others, would be avoided.
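A toy sketch of what such a built-in self/other trade-off might look like (my own illustration, with made-up numbers; nothing like this appears in the article): each option gets a score that weights estimated outcomes for others along with outcomes for the self, and the weight is fixed in advance.

```python
# Hypothetical options, each with estimated outcomes (consistency with
# long-term goals) for the decision maker and for others affected.
options = {
    "option_a": {"self": 10.0, "others": -8.0},
    "option_b": {"self": 6.0, "others": 2.0},
}

OTHERS_WEIGHT = 1.0  # fixed in advance; a weight near 0 would amount to psychopathy

def moral_score(outcomes):
    return outcomes["self"] + OTHERS_WEIGHT * outcomes["others"]

best = max(options, key=lambda name: moral_score(options[name]))
print(best)  # "option_b": better overall even though it is worse for the self
```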

If self-interest is to be included, then morality should be included too. It is difficult to see why an engineer would intentionally build a system with self-interest unchecked by morality. That seems to be the sort of system that Cohen et al. imagine.


Algorithm aversion and AI

Recently many people have expressed concerns, some to the point of near panic, about recent advances in artificial intelligence (AI). They think AI can now do great harm, even to the point of ending civilization as we know it. Some of these harms are obvious and also difficult to prevent. Autocrats and other bad actors - such as people who now create phishing sites or ransomware - will use AI software to do their jobs better, just as governments, scientists, law enforcers, and businesses of all sorts will do the same for their respective jobs. Identification of individuals, for purposes of harassing them, will become easier, just as the Internet itself made this, and much else, good and bad, easier. Other technologies, such as the telephone, postal system, and telegraph, have also been used for nefarious purposes (as in "wire fraud" and "mail fraud"). The white hats will continue to fight the black hats, often with the same weapons.

Of special concern is the use of AI to make decisions about people, such as whether to give them loans, hire them for jobs, admit them to educational institutions, incarcerate them, treat them for illness, or cover the cost of such treatment. The concerns seem to involve two separate problems: one is that AI systems make errors; the other is that they could be biased against groups that already suffer from the effects of other biases, such as Blacks in the U.S.

The problem of errors in AI is part of another problem that has a large literature in psychology, beginning with Paul Meehl's "Clinical versus statistical prediction" (1954) and followed up by Robyn Dawes, Hal Arkes, Ken Hammond, Jason Dana, and many others. A general conclusion from that literature is that simple statistical models, such as multiple linear regression, are often more accurate at various classifications, such as diagnosing psychological disorders, than humans who are trained to make just such classifications and who make them repeatedly. This can be true even when the human has more information, such as a personal interview of a candidate for admission.
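To make the simplicity of these models concrete, here is a sketch of the kind of linear prediction rule that literature has in mind (the variables and weights are invented for illustration; the point, following Dawes, is that even crude weights often beat expert judgment):

```python
# A Meehl/Dawes-style linear rule for predicting, say, success in a graduate program.
# The predictors and weights below are placeholders, not estimates from real data.
def predicted_success(gpa, test_percentile, rating):
    return 0.5 * gpa + 0.03 * test_percentile + 0.2 * rating

applicants = {"A": (3.9, 85, 4.0), "B": (3.2, 95, 8.5)}
for name, (gpa, pct, rating) in applicants.items():
    print(name, round(predicted_success(gpa, pct, rating), 2))
```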

A second conclusion from the literature is that most people, including the judges and those who are affected, seem to prefer human judgments to statistical models. Students applying to selective colleges or graduate programs, for example, want someone to consider them as a whole person, without relying on statistical predictors. The same attitudes come up in medical diagnosis and treatment, although the antipathy to statistical models seems weaker in that area. Note that most of these statistical models are so simple that they could be applied with pencil and paper by someone who remembers how to do arithmetic that way. Recent improvements in AI have resulted from the enhanced capacities of modern computers, which allow them to learn from huge numbers of examples how to make classifications correctly with much more complex formulas, so complex that the designers of the programs do not know what the formulas are. These models are better than those that can be applied on a small piece of paper, but the issues are much the same. If anything, the issues are more acute exactly because the models are better. If the older, simpler models were better than humans, then these new ones are better still.

Note that, although some studies fail to find a preference for humans over computers on the average, such results do not mean that all the subjects are indifferent between humans and computers. Rather, they reflect differences among the subjects. The average result can favor computers over humans even if 40% of the subjects are opposed to computers. The existence of large minorities who oppose the use of AI can make adoption of AI models nearly as difficult as it would be if a majority were opposed, especially when the minority is vocal and organized.

AI models make errors. Before we reject or delay their use, we need to ask the fundamental question of all decision making: compared to what?  We often need to "accept error to make less error" (as Hillel Einhorn put it).

The same question is relevant for the bias problem. I put aside questions about how bias should be measured, and whether some apparent biases could result, fully or partially, from real differences in the most relevant populations. When AI tools seem to be biased, would the same be true when AI is not used? The bias might be larger still when decisions are made by individual human judges, or by some simpler formula.


Sunday, December 3, 2023

More on why I am not a fan of pre-registration

This is a draft follow-up to my earlier post on prediction, accommodation, and pre-registration: https://judgmentmisguided.blogspot.com/2018/05/prediction-accommodation-and-pre.html

I argued there that some of the appeal of pre-registration is the result of a philosophical mistake: the idea that prediction of a result is better than post-hoc accommodation of the result once it is found, holding constant the fit of the result to its explanation.

Here I comment on pre-registration from the perspective of editor of Judgment and Decision Making, a journal concerned largely with applied cognitive psychology. I try to answer some common points made by the defenders of pre-registration.

1. As editor, I find myself arguing with authors who pre-registered their data analysis, when I think that their pre-registration is just wrong. Typically, the pre-reg (pre-registration document) ignores our statistics guidelines at https://jbaron.org/journal/stat.htm. For example, it proposes some sort of statistical control, or a test for removable interactions. Although it is true that authors do not need to do just what they say in the pre-reg, they must still explain why they changed their plan, and some authors still want to report in full both their pre-registered analysis and what I think is the correct one.

I don't see why pre-registration matters here. For example, one common issue is what to exclude from the main analysis. Often the pre-reg specifies what will be excluded, such as the longest 10% of response times, but I often judge this idea to be seriously inferior to something else, such as using a log transform. (The longest times may even reflect the most serious responding, and their outsized influence on statistics can usually be largely eliminated by transformation.) The author might argue that both should be reported because the 10% idea was thought of beforehand. But does it matter when you think of it? If it is such an obvious alternative to using the log, then you could think of it after collecting the data. (This is related to my blog post mentioned earlier.) If the main analysis will now be based on logs, it doesn't even matter if the decision to use the 10% cutoff was thought of after finding that it yielded clearer results (p-hacking).
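To illustrate the difference with made-up numbers (a sketch, not data from any real study): trimming the longest 10% simply discards the slow responses, while a log transform keeps them but shrinks their influence.

```python
import math
import statistics

# Hypothetical response times (seconds) with the usual long right tail.
rts = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8, 2.5, 9.0]

# Pre-registered plan: drop the longest 10% of responses.
trimmed = sorted(rts)[: int(len(rts) * 0.9)]

# Alternative: keep all responses but analyze log response times.
log_rts = [math.log(t) for t in rts]

print(statistics.mean(rts))      # about 2.11 -- dominated by the single 9-second response
print(statistics.mean(trimmed))  # about 1.34 -- the slow (possibly most serious) response is discarded
print(statistics.mean(log_rts))  # about 0.43 -- the slow response is kept but has modest influence
```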

2. It may be argued that pre-registration encourages researchers to think ahead. It might do that, but it would be a subtle effect, as it may lead to thinking about issues that would be considered anyway.

The most common failure to think ahead is to neglect alternative explanations of an expected result. You can find that in pre-registrations as well as in submitted papers. Maybe pre-registration helps a little, like a nudge. But the most common alternative explanations I see are things like reversed causality (mediator vs. DV), or third-variable causality, in mediation analysis. Pre-regs sometimes propose mediation analyses without thinking of these potential problems. Another common alternative explanation is that interactions are due to scaling effects (hence "removable"). I have never seen anyone think of this in advance. Most people haven't heard of this problem (despite Loftus's 1978 paper in Memory and Cognition). Nor have they heard of the problem with statistical control (again pointed out decades ago, by Kahneman among many others), which they also put in pre-regs.
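For what a removable interaction looks like (hypothetical cell means, just to show the idea): an apparent interaction on the raw scale can vanish under a monotonic transformation such as the log.

```python
import math

# Hypothetical cell means in a 2 x 2 design (condition x group).
means = {("easy", "young"): 2.0, ("hard", "young"): 4.0,
         ("easy", "old"):   4.0, ("hard", "old"):   8.0}

def interaction(cells):
    # Difference of differences; nonzero looks like an interaction.
    return ((cells[("hard", "old")] - cells[("easy", "old")])
            - (cells[("hard", "young")] - cells[("easy", "young")]))

print(interaction(means))                                       # 2.0 on the raw scale
print(interaction({k: math.log(v) for k, v in means.items()}))  # 0.0 after a log transform
```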

3. Does pre-registration protect against p-hacking anyway? Psychology papers are usually multi-study. You can pre-register one study at a time, and that is what I usually (always?) see. So you don't have to report the related studies you did that didn't work, even if you pre-registered each one, although honest reporting would include them anyway. This is a consequence of the more general problem that pre-registration does not require making the results public whether the study works or not. Unlike some clinical trials, you can pre-register a study, do it, find that the result fails to support the hypothesis tested, and put the study in the file drawer. In principle, you can even pre-register two ways of doing the same study or analysis and then refer to the pre-registration that fits better when you write the paper. (I suspect that this has never happened. Unlike failing to report those studies that failed, this would probably be considered unethical. But, if a journal starts to REQUIRE pre-registration, the temptation might be much greater.)

4. What do you do to detect p-hacking, without a pre-reg?  I ask whether the analysis done is reasonable or whether some alternative approach would be much better. If a reasonable analysis yields p=.035 for the main hypothesis test, this is a weak result anyway, and it doesn't matter whether it was chosen because some other reasonable analysis yielded p=.051. Weak results are often so strongly consistent with what we already know that they are still very likely to be real. If they are surprising, it is time to ask for another study. Rarely, I find that it is helpful to look at the data; this sometimes happens when the result is reasonable but the analysis looks contrived, so I wonder what is going on.

Pre-registration inhibits the very helpful process of looking at the data before deciding how to proceed with the analysis. This exploration is so much a part of my own approach to research that I could not possibly pre-register anything about data analysis. For example, in determining exclusions I often look at something like the distribution of (mean log) response times for the responses to individual items. I often find a cluster of very fast responders, separate from the rest. Sometimes the subjects in these clusters give the same response to every question, or their responses are insensitive to compelling variations that ought to affect everyone. I do this before looking at the effects of removing these subjects on the final results.
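A sketch of that kind of exploratory check (invented data, and a deliberately crude rule): look at the distribution of mean log response times per subject and flag an isolated fast cluster, before looking at what excluding it does to the results.

```python
import math
import statistics

# Hypothetical mean response time (seconds) per subject, averaged over items.
mean_rt = {"s1": 14.2, "s2": 11.8, "s3": 13.1, "s4": 1.9,
           "s5": 12.5, "s6": 2.1, "s7": 15.0, "s8": 12.9}

mean_log_rt = {s: math.log(t) for s, t in mean_rt.items()}

# Crude flag: more than one log unit (roughly a factor of e) faster than the median.
median_log = statistics.median(mean_log_rt.values())
fast_cluster = [s for s, v in mean_log_rt.items() if v < median_log - 1.0]

print(fast_cluster)  # ['s4', 's6'] -- candidates for exclusion, pending a look at their responses
```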

5. It seems arrogant to put your own judgment ahead of the authors'.

When it comes to judging other people's papers as editor, I think that the relationship between author and editor is not one of equality. I do not need to give equal weight to the author's judgment, as reflected in the pre-reg, and my own, just as I do not need to give equal weight to the reviewers' opinions and my own.

When I handle a paper, I am the editor. It is my job to enforce my own standards, not to enforce some consensus in which everyone is equal. (I see no point in that when the apparent consensus produces so much junk. "Peer reviewers" are in such demand that any warm body will do. The situation is worst in grant-review panels, which usually don't have an "editor" in charge.) Some editors are better than others. There is nothing I can do about that. (Two of the best were Ray Nickerson and Frank Yates, both of whom died recently.) Journal editors are like judges in the legal system, CEOs of corporations, or deans of universities. They are given special status. We hope they live up to their status.


Tuesday, August 8, 2023

First impressions of ChatGPT

Here are two examples of my first interactions with ChatGPT or some similar system. In the first, I asked it to construct a new moral dilemma for an experiment. I had in mind the bicycle dilemma discussed by R. M. Hare in "Moral thinking" (1981, section 6.2), concerning whether the actor should move another person's bicycle (a minor inconvenience for the bicycle owner) in order to park a car (avoiding the greater inconvenience of finding another parking space). I may use both now. The new dilemma has to be edited a bit, but it does the job, and I never saw anything like it before. I was impressed.

=======================================================

User

I would like a moral dilemma in which the right answer is for me to be selfish because the benefit to me is greater than the harm to someone else.

ChatGPT

Alright, here's a hypothetical scenario that might suit your request:

The Dilemma of the Scarce Scholarship

You've been working exceptionally hard throughout high school, and it's finally paid off. You have the grades, the extracurricular activities, and the teacher recommendations to qualify for the most prestigious scholarship at your dream university. This scholarship covers full tuition, accommodation, and also provides a substantial monthly stipend. It's a once-in-a-lifetime opportunity that could completely change your future.

However, you're not the only one who qualifies for this scholarship. Your best friend, Alex, also qualifies. Alex comes from a more privileged background than you. Their family could afford to send Alex to the university, albeit with some financial strain. Without the scholarship, you would have to take out substantial student loans to attend, and you might even have to give up on your dream university for a less expensive option.

One day, you stumble upon an opportunity. You find out about an obscure academic competition that the scholarship board holds in high regard. Participation in this competition could significantly boost your chances of getting the scholarship, but very few people know about it.

The moral dilemma is this: Do you tell Alex about the competition or keep it to yourself?

If you tell Alex, you keep your integrity intact, but you significantly increase the competition for the scholarship. Alex, with their greater resources, might even have a better shot at winning the competition.

If you don't tell Alex, you maximize your chances of securing the scholarship, but at the cost of acting dishonestly toward your friend.

The self-interested option (not telling Alex about the competition) might be justifiable in this case. The potential benefits for you are life-changing. Alex, although potentially disadvantaged by not knowing about the competition, would still have the opportunity to attend university without the burden of crippling debt.

================================================================

The second set resulted from a letter I got from Maximus, a company I had never heard of, which apparently contracts with Medicare (the U.S. health insurance system for old people). The letter said that Maximus had been the victim of a data breach, and that my records were included. The records included enough information for someone to steal my identity. So I set off to find out whether the records were accurate (hoping that they were not). Simply changing my Medicare number would itself involve a fair bit of work, since many people have it, so I spent some time on this. The Medicare web site would not let me see the information it had about me. I called a phone number at Medicare and got to a human, who informed me that the letter was sent to me by mistake and my records were not part of the breach. I was so happy that I thanked him and said goodbye. Later I got more upset at Maximus, so I called the number they said would reach a professional familiar with the breach. The "professional" turned out to be an automated chat, and it was no help.

Next, I used the "chat" function of Medicare itself, which kept a record of our conversation (below). I wanted to say bad things about Maximus. Below is the transcript. It now appears that corporations (Maximus) and major government agencies (Medicare) are using ChatGPT-type systems to replace humans on phone and help lines. I'm sure these things will get better, but this is not ready for prime time.

------------------- Medicare.gov Live Chat Transcript ---------------------

Live chat Date:8/5/2023

Start Time:11:14:13 am

End Time:11:28:06 am

-----------------------------------------------------------------------------

[11:14:13 am]: Thank you for contacting Medicare.gov Live Chat. 

[11:14:16 am]: Jonathan I want to report what I think is a seriously misleading and deceptive letter that I received from Maximus about my information being accessed. I would prefer to write an email, but I can't find anywhere to send it. I think that whoever wrote and approved this letter should be disciplined in some way. Should I proceed? Or can you give me an email address? 

[11:14:16 am]:  Please wait while you are connected to an agent. 

[11:14:19 am]: You are now connected with Medicare.gov Live Chat.

Thank you for contacting Medicare.gov Live Chat. My name is ulie. Can you please verify your first and last name? 

[11:14:42 am]: Jonathan Jonathan Baron 

[11:14:59 am]: Julie Hello, I will be happy to assist you with this today.  

[11:15:30 am]: Julie Who send you the letter? 

[11:16:32 am]: Jonathan At the top it said Maximus, and (on the right) CMS. The return address was in Claysburg, PA. 

[11:16:58 am]: Julie 

Thank you.

Let me look into this for you. 

[11:17:11 am]: Jonathan 'https://www.cms.gov/newsroom/press-releases/cms-responding-data-breach-contractor' is the letter I received. 

[11:19:05 am]: Julie I appreciate your patience.

You should have received a letter notifying you of a data breach. We have determined that your Medicare number was not compromised, and you don't need to request a new one. The letter provides more information about the incident and how it is being addressed. Please note that this does not have any impact on your current Medicare benefits or coverage.

[11:19:12 am]: Julie Maximus, a contractor for the Centers for Medicare & Medicaid Services, recently learned that their file transfer software, called MOVEit, was compromised. They are offering 24 months of credit monitoring and other services from Experian at no cost. You can call Experian to learn more; their phone number is included in your letter. This is not available through email.  

[11:23:50 am]: Jonathan This is just what the letter said. Yes, indeed, it was incorrect, and caused me to waste a lot of time. But there were other problems. Do you want to hear about those? One problem is that the further questions suggested calling a number 'staffed with professionals familiar with this incident'. In fact, the 'professional' turned out to be (pretty cleary) an AI program like ChatGPT. I treated it that way. (ChatGPT has a thick skin. It doesn't complain if you criticize it harshly.) It could not answer my questions. I hope you are not the same robot! 

[11:25:59 am]: Julie Do you have additional questions regarding the letter you received? 

[11:27:25 am]: Jonathan No. So clearly I am wasting my time once again. You have missed the point, but I'm not going to try to get you back on track. I wanted an email address. You are clearly not going to give me one. 

[11:27:54 am]: Julie Have a nice day. 

[11:28:06 am]: The chat session has ended. Thank you for contacting Medicare.gov Live Chat. Have a nice day.