Political Psychology recently published Chalmers et al 2022 "The rights of man: Libertarian concern for men's, but not women's, reproductive autonomy". The abstract indicates the basis for this claim about libertarians' selective concern:

Libertarianism was associated with opposition to abortion rights and support for men's right both to prevent women from having abortions (male veto) and to withdraw financial support for a child when women refuse to terminate the pregnancy (financial abortion).

The above passage represents a flawed inferential method that I'll explain below.

---

The lead author of Chalmers et al 2022 quickly responded to my request about the availability of data, code, and codebooks; replication materials are now public at the OSF site. I'll use data from Study 2 and run a simple analysis to illustrate the inferential flaw.

The only predictor that I'll use is a 0-to-6 "Libert" variable for responses to the item "To what extent would you describe your political persuasion as libertarian?", which I renamed "Libertarianism" and recoded to range from 0 for "Not at all" to 1 for "Very much".

---

In the OLS linear regression below, the abSINGLE outcome variable has eight levels, from 0 for "Not at all" to 1 for "Very much", for an item about whether the respondent thinks that a pregnant woman should be able to obtain a legal abortion if she is single and does not want to marry the man.

The linear regression output below (N=575) indicates that, on average, respondent libertarianism is negatively correlated with support for permitting a woman to have an abortion if she is single and does not want to marry the man.

. reg abSINGLE Libertarianism
---------------------------------
      abSINGLE |  Coef.  p-value
---------------+-----------------
Libertarianism | -0.30   0.000 
     intercept |  0.89   0.000 
---------------------------------

In the OLS linear regression below, the maleVETO outcome variable has six levels, from 0 for "Strongly disagree" to 1 for "Strongly agree", for an item about whether the respondent thinks that a woman should not be allowed to have an abortion if the man involved really wants to keep his unborn child.

The linear regression output below (N=575) indicates that, on average, respondent libertarianism is positively correlated with support for prohibiting a woman from having an abortion if the man involved really wants to keep his unborn child.

. reg maleVETO Libertarianism
--------------------------------
      maleVETO |  Coef. p-value
---------------+----------------
Libertarianism |  0.26  0.000 
     intercept |  0.13  0.000 
--------------------------------

So what's the flaw in combining results from these two regressions to infer that libertarians have a concern for men's reproductive autonomy but not for women's reproductive autonomy?

---

The flaw is that the linear regressions above include data from non-libertarians, and patterns among non-libertarians might account for the change in the sign of the coefficient on Libertarianism.

Note, for example, that, based on the OLS regression output, the predicted support among respondents highest in libertarianism will be 0.89 + -0.30, or 0.59, for women's right to an abortion on the 0-to-1 abSINGLE item, but will be 0.13 + 0.26, or 0.39, for men's right to an abortion veto on the 0-to-1 maleVETO item.

But let's forget these linear regression results, because the appropriate method for assessing whether a group is inconsistent is to analyze data only from that group. So here are the respective means for respondents at 6 on the 0-to-6 "Libert" variable (N=18):

0.45 on abSINGLE

0.49 on maleVETO

And here are the respective means for respondents at 5 or 6 on the 0-to-6 "Libert" variable (N=46):

0.53 on abSINGLE

0.42 on maleVETO

I wouldn't suggest interpreting these results to mean that libertarians are on net consistent about women's reproductive autonomy and men's reproductive autonomy or, for that matter, that libertarians favor women's reproductive autonomy over men's. But I think that the analyses illustrate the flaw in making inferences about a group based on a linear regression involving people who aren't in that group.

The Stata log file has output of my analyses above and additional analyses, but Chalmers et al 2022 had two datasets and multiple measures for key items, so the analyses aren't exhaustive.


PS: Political Science & Politics recently published Hartnett and Haver 2022 "Unconditional support for Trump's resistance prior to Election Day".

Hartnett and Haver 2022 reported on an experiment conducted in October 2020 in which likely Trump voters were asked to consider the hypothetical of a Biden win in the Electoral College and in the popular vote, with Biden's popular vote margin randomly assigned to be from 1 through 15 percentage points. These likely Trump voters were then asked whether the Trump campaign should resist or concede.

Data were collected before the election, but Hartnett and Haver 2022 did not report anything about a corresponding experiment involving likely Biden voters. Hartnett and Haver 2022 discussed a Reuters/Ipsos poll that "found that 41% of likely Trump voters would not accept a Biden victory and 16% of all likely Trump voters 'would engage in street protests or even violence' (Kahn 2020)". The Kahn 2020 source indicates that the corresponding percentages for Biden voters for a Trump victory were 43% and 22%. So it didn't seem like there was a good reason to omit a parallel experiment for Biden voters, especially because data on only Trump voters wouldn't permit valid inferences about the characteristics on which Trump voters were distinctive.

---

But text for a somewhat corresponding experiment involving likely Biden voters is hidden in the Hartnett and Haver 2022 codebook, under white boxes or something like that. The text of the hidden items can be highlighted, copied, and pasted from the bottom of pages 19 and 20 of the codebook PDF (or more hidden text can be copied using ctrl+A, then ctrl+C, and then pasted with ctrl+V).

The hidden codebook text indicates that the hartnett_haver block of the survey had a "bidenlose" item that asked likely Biden voters whether, if Biden wins the popular vote by the randomized percentage points and Trump wins the electoral college, the Biden campaign should "Resist the results of the election in any way possible" or "Concede defeat".

There might be an innocent explanation for Hartnett and Haver 2022 not reporting the results for those items, but that innocent explanation hasn't been shared with me yet on Twitter. Maybe Hartnett and Haver 2022 have a manuscript in progress about the "bidenlose" item.

---

NOTES

1. Hartnett and Haver 2022 seems to be the survey that Emily Badger at the New York Times referred to as "another recent survey experiment conducted by Brian Schaffner, Alexandra Haver and Brendan Hartnett at Tufts". The copied-and-pasted codebook text indicates that this was for the "2020 Tufts Class Survey".

2. On page 18 of the Hartnett and Haver 2022 codebook, above the hidden item about socialism, part of the text of the "certain advantages" item is missing, which seems to be a should-be-obvious indication that text has been covered.

3. The codebook seems to be missing pages of the full survey: in the copied-and-pasted text, page numbers jump from "Page 21 of 43" to "Page 24 of 43" to "Page 31 of 43" to "Page 33 of 43". Presumably at least some missing items were for other members of the Tufts class, although I'm not sure what happened to page 32, which seems to be part of the hartnett_haver block that started on page 31 and ended on page 33.

4. The dataset for Hartnett and Haver 2022 includes a popular vote percentage point win from 1 percentage point through 15 percentage points assigned to likely Biden voters, but the dataset has no data on a resist-or-concede outcome or on a follow-up open-ended item.


1.

Politics, Groups, and Identities recently published Cravens 2022 "Christian nationalism: A stained-glass ceiling for LGBT candidates?". The key predictor is a Christian nationalism index that ranges from 0 to 1, with a key result that:

In both cases, a one-point increase in the Christian nationalism index is associated with about a 40 percent decrease in support for both lesbian/gay and transgender candidates in this study.

But the 40 percent estimates are based on Christian nationalism coefficients from models in which Christian nationalism is interacted with partisanship, race, and religion, and I don't think that these coefficients can be interpreted as associations across the sample. The across-sample estimates should come from models in which Christian nationalism is not included in an interaction: -0.167 for lesbian and gay political candidates and -0.216 for transgender political candidates. So about half of 40 percent.

Check Cravens 2022 Figure 2, which reports results for support for lesbian and gay candidates: eyeballing from the figure, the drop across the range of Christian nationalism is about 14 percent for Whites, about 18 percent for Blacks, about 9 percent for AAPI, and about 15 percent for persons of another race. No matter how you weight these four categories, the weighted average doesn't get close to 40 percent.

---

2.

And I think that the constitutive terms in the interactions are not always correctly described, either. From Cravens 2022:

As the figure shows, Christian nationalism is negatively associated with support for lesbian and gay candidates across all partisan identities in the sample. Christian nationalist Democrats and Independents are more supportive than Christian nationalist Republicans by about 23 and 17 percent, respectively, but the effects of Christian nationalism on support for lesbian and gay candidates are statistically indistinguishable between Republicans and third-party identifiers.

Table 2 coefficients are 0.231 for Democrats and 0.170 for Independents, with Republicans as the omitted category and with these partisan predictors interacted with Christian nationalism. But I don't think that these coefficients indicate the difference between Christian nationalist Democrats/Independents and Christian nationalist Republicans. In Figure 1, Christian nationalist Democrats are at about 0.90 and Christian nationalist Republicans are at about 0.74, a gap of about 0.16 rather than 0.231.

---

3.

From Cravens 2022:

Christian nationalism is associated with opposition to LGBT candidates even among the most politically supportive groups (i.e., Democrats).

For support for lesbian and gay candidates and support for transgender candidates, the Democrat predictor interacted with Christian nationalism has a p-value less than 0.05. But that doesn't indicate whether there is sufficient evidence that the slope for Christian nationalism is non-zero among Democrats. In Figure 1, for example, the point estimate for Democrats at the lowest level of Christian nationalism looks to be within the 95% confidence interval for Democrats at the highest level of Christian nationalism.

---

4.

From Cravens 2022:

In other words, a one-point increase in the Christian nationalism index is associated with a 40 percent decrease in support for lesbian and gay candidates. For comparison, an ideologically very progressive respondent is only about four percent more likely to support a lesbian or gay candidate than an ideologically moderate respondent; while, a one-unit increase in church attendance is only associated with a one percent decrease in support for lesbian and gay candidates. Compared to every other measure, Christian nationalism is associated with the largest and most negative change in support for lesbian and gay candidates.

The Christian nationalism index ranges from 0 to 1, so the one-point increase discussed in the passage is the full estimated effect of Christian nationalism. The church attendance predictor runs from 0 to 6, so the one-unit increase in church attendance discussed in the passage is one-sixth the estimated effect of church attendance. The estimated effect of Christian nationalism is still larger than the estimated effect of church attendance when both predictors are put on a 0-to-1 scale, but I don't know of a good reason to compare a one-unit increase on the 0-to-1 Christian nationalism predictor to a one-unit increase on the 0-to-6 church attendance predictor.

The other problem is that the Christian nationalism index combines three five-point items, so it might measure Christian nationalism better than, say, the progressive predictor measures political ideology. This matters because, all else equal, poorer measures of a concept produce estimated associations that are biased toward zero. Or maybe the ends of the Christian nationalism index represent more distance than the ends of the political ideology measure. Or maybe not. But I think that it's a good idea to discuss these concerns when comparing predictors to each other.

---

5.

Returning to the estimates for Christian nationalism, I'm not even sure that -0.167 for lesbian and gay political candidates and -0.216 for transgender political candidates are good estimates. For one thing, these estimates are extrapolations from linear regression lines, instead of comparisons of observed outcomes at low and high levels of Christian nationalism. For each Christian nationalist statement, the majority of the sample falls on the side of the items opposing the statement, so the estimated effect of Christian nationalism might be more influenced by opponents of Christian nationalism than by supporters of Christian nationalism, and it's not clear whether the linear regression line correctly estimates the outcome at high levels of Christian nationalism.

For another thing, I think that the effect of Christian nationalism should be conceptualized as being caused by a change from indifference to Christian nationalism to support for Christian nationalism, which means that including observations from opponents of Christian nationalism might bias the estimated effect of Christian nationalism.

For an analogy, imagine that we are interested in the effect of being a fan of the Beatles. I think that it would be preferable to compare, net of controls, outcomes for fans of the Beatles to outcomes for people indifferent to the Beatles, instead of comparing, net of controls, outcomes for fans of the Beatles to outcomes for people who hate the Beatles. The fan/hate comparison means that the estimated effect of being a fan of the Beatles is *necessarily* the exact same size as the estimated effect of hating the Beatles, but I think that these are different phenomena. Similarly, I think that supporting Christian nationalism is a different phenomenon than opposing Christian nationalism.

---

NOTES

1. Cravens 2022 model 2 regressions in Tables 2 and 3 include controls plus a predictor for Christian nationalism; three partisanship categories, with Republican as the omitted category; three categories of race, with White as the omitted category; five categories of religion, with Protestant as the omitted category; and interactions of Christian nationalism with each of the included partisanship, race, and religion categories.

It might be tempting to interpret the Christian nationalism coefficient in these regressions as indicating the association of Christian nationalism with the outcome net of controls among the omitted interactions category of White Protestant Republicans, but I don't think that's correct because of the absence of higher-order interactions. Let me discuss a simplified simulation to illustrate this.

The simulation had participants that were either male (male=1) or female (male=0) and participants that were either Republican (gop=1) or Democrat (gop=0). In the simulation, I set the association of a predictor X with the outcome Y to be -1 among female Democrats, to be -3 among male Democrats, to be -6 among female Republicans, and to be -20 among male Republicans. So the association of X with the outcome was negative for all four combinations of gender and partisanship. But the coefficient on X was +2 in a linear regression with predictors only for X, the gender predictor, the partisanship predictor, an interaction of X and the gender predictor, and an interaction of X and the partisanship predictor.

Code for the simulation is available in Stata and in R.

2. Cravens 2022 indicated about Table 2 that "Model 2 is estimated with three interaction terms". But I'm not sure that's correct, given the interaction coefficients in the table and given that the Figure 1 slopes for Republican, Democrat, Independent, and Something Else are all negative and differ from each other and the Other Christian slope in Figure 3 is positive, which presumably means that there were more than three interaction terms.

3. Appendix C has data that I suspect is incorrectly labeled: 98 percent of atheists agreed or strongly agreed that "The federal government should declare the United States a Christian nation", 94 percent of atheists agreed or strongly agreed that "The federal government should advocate Christian values", and 94 percent of atheists agreed or strongly agreed that "The success of the United States is part of God's plan".

4. I guess that it's not an error per se, but Appendix 2 reports means and standard deviations for nominal variables such as race and party identification, even though these means and standard deviations depend on how the nominal categories are numbered. For example, party identification has a standard deviation of 0.781 when coded from 1 to 4 for Republican, Democrat, Independent, and Other, but the standard deviation would presumably change if the numbers were swapped for Democrat and Republican, and, as far as I can tell, there is no reason to prefer the order of Republican, Democrat, Independent, and Other.


My new publication is a technical comment on the Schneider and Gonzalez 2021 article "Racial resentment predicts eugenics support more robustly than genetic attributions".

The experience with the journal Personality and Individual Differences was great. The journal has a correspondence section that publishes technical comments and other types of correspondence, which seems like a great way to publicly discuss research and to hopefully improve research. The authors of the article that I commented on were also great.

---

My comment highlighted a few things about the article, and I think that two of the comments are particularly generalizable. One comment, which I discussed in prior blog posts [1, 2], concerns the practice of comparing the predictive power of factors that are not or might not be equally well measured. I don't think that is a good idea, because measurement error can bias estimates.

The other comment, which I discussed in prior blog posts [1, 2], concerns analyses that model an association as constant. I think that it is more informative to not model key associations as constant, and Figure 1 of the comment illustrates an example of how this can provide useful information.

There is more in the comment. Here is a 50-day share link for the comment.


This plot reports disaggregated results from the American National Election Studies 2020 Time Series Study pre-election survey item:

On another topic: How much do you feel it is justified for people to use violence to pursue their political goals in this country?

Not shown is that 83% of White Democrats and 92% of White Republicans selected "Not at all" for this item.

Regression output controlling for party identification, gender, and race is in the Stata output file, along with uncertainty estimates for the plot percentages.

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Pre-Election Data [dataset and documentation]. February 11, 2021 version. www.electionstudies.org.

2. Stata code for the analysis and R code for the plot. Dataset for the R plot.


The ANES (American National Election Studies) has released the pre- and post-election questionnaires for its 2020 Time Series Study. I thought that it would be useful or at least interesting to review the survey for political bias. I think that the survey is remarkably well done on net, but I do think that ANES 2020 contains unnecessary political bias.

---

1

ANES 2020 has two gender resentment items on the pre-election survey and two modern sexism items on the post-election survey. These four items are phrased to measure negative attitudes about women, but ANES 2020 has no parallels to these four items regarding negative attitudes about men.

Even if researchers cared about only sexism against women, parallel measures of attitudes about men would still be necessary. Evidence indicates and theory suggests that participants sexist against men would cluster at the low end of a measure of sexism against women, so that sexism against women can't properly be estimated as the change from the low end to the high end of these measures.

This lack of parallel items about men will plausibly produce a political bias in research that uses these four items as measures of sexism, because, while a higher percentage of Republicans than of Democrats is biased against women, a higher percentage of Democrats than of Republicans is biased against men (evidence about partisanship is in research in progress, but check here for patterns in the 2016 presidential vote).

ANES 2020 has a feeling thermometer for several racial groups, so hopefully future ANES surveys include feeling thermometers about men and women.

---

2

Another type of political bias involves inclusion of response options so that the item can detect only errors more common on the political right. Consider this post-election item labeled "misinfo":

1. Russia tried to interfere in the 2016 presidential election

2. Russia did not try to interfere in the 2016 presidential election

So the large percentage of Hillary Clinton voters who reported the belief that Russia tampered with vote tallies to help Donald Trump don't get coded as misinformed on this misinformation item about Russian interference. The only error that the item can detect is underestimating Russian interference.

Another "misinfo" example:

Which of these two statements do you think is most likely to be true?

1. World temperatures have risen on average over the last 100 years.

2. World temperatures have not risen on average over the last 100 years.

The item permits climate change "deniers" to be coded as misinformed, but does not permit coding as misinformed "alarmists" who drastically overestimate how much the climate has changed over the past 100 years.

Yet another "misinfo" example:

1. There is clear scientific evidence that the anti-malarial drug hydroxychloroquine is a safe and effective treatment for COVID-19.

2. There is not clear scientific evidence that the anti-malarial drug hydroxychloroquine is a safe and effective treatment for COVID-19.

In April 2020, the FDA indicated that "Hydroxychloroquine and chloroquine...have not been shown to be safe and effective for treating or preventing COVID-19", so the "deniers" who think that there is zero evidence available to support HCQ as a covid-19 treatment will presumably not be coded as "misinformed".

One more example (not labeled "misinfo"), from the pre-election survey:

During the past few months, would you say that most of the actions taken by protestors to get the things they want have been violent, or have most of these actions by protesters been peaceful, or have these actions been equally violent and peaceful?

[If the response is "mostly violent" or "mostly peaceful":]

Have the actions of protestors been a lot more or only a little more [violent/peaceful]?

I think that this item might refer to the well-publicized finding that "about 93% of racial justice protests in the US have been peaceful", so that the correct response combination is "mostly peaceful"/"a lot more peaceful" and, thus, the only error that the item permits is overestimating how violent the protests were.

For the above items, I think that the response options disfavor the political right, because I expect that a higher percentage of persons on the political right than the political left will deny Russian interference in the 2016 presidential election, deny climate change, overestimate the evidence for HCQ as a covid-19 treatment, and overestimate how violent recent pre-election protests were.

But I also think that persons on the political left will be more likely than persons on the political right to make the types of errors that the items do not permit to be measured, such as overestimating climate change over the past 100 years.

Other items marked "misinfo" involved vaccines causing autism, covid-19 being developed intentionally in a lab, and whether the Obama administration or the Trump administration deported more unauthorized immigrants during its first three years.

I didn't see an ANES 2020 item about whether the Obama administration or the Trump administration built the temporary holding enclosures ("cages") for migrant children, which I think would be similar to the deportations item, in that people not paying close attention to the news might get the item incorrect.

Maybe a convincing case could be made that ANES 2020 contains an equivalent number of items with limited response options disfavoring the political left as disfavoring the political right, but I don't think that it matters whether political bias in individual items cancels out, because any political bias in individual items is worth eliminating, if possible.

---

3

ANES 2020 has an item that I think alludes to President Trump's phone call with the Ukrainian president. Here is a key passage from the transcript of the call:

The other thing, There's a lot of talk about Biden's son, that Biden stopped the prosecution and a lot of people want to find out about that so whatever you can do with the Attorney General would be great. Biden went around bragging that he stopped the prosecution so if you can look into it...It sounds horrible to me.

Here is an ANES 2020 item:

As far as you know, did President Trump ask the Ukrainian president to investigate President Trump's political rivals, did he not ask for an investigation, or are you not sure?

I'm presuming that the intent of the item is that a correct response is that Trump did ask for such an investigation. But, if this item refers to only Trump asking the Ukrainian president to look into a specific thing that Joe Biden did, it's inaccurate to phrase the item as if Trump asked the Ukrainian president to investigate Trump's political rivals *in general*, which is what the plural "rivals" indicates.

---

4

I think that the best available evidence indicates that immigrants do not increase the crime rate in the United States (pre-2020 citation) and that illegal immigration reduces the crime rate in the United States (pre-2020 citation). Here is an "agree strongly" to "disagree strongly" item from ANES 2020:

Immigrants increase crime rates in the United States.

Another ANES 2020 item:

Does illegal immigration increase, decrease, or have no effect on the crime rate in the U.S.?

I think that the correct responses to these items are the responses that a stereotypical liberal would be more likely to *want* to be true, compared to a stereotypical Trump supporter.

But I don't think that the U.S. violent crime statistics by race reflect the patterns that a stereotypical liberal would be more likely to want to be true, compared to a stereotypical Trump supporter.

Perhaps coincidentally, instead of an item about racial differences in violent crime rates for which responses could be correctly described as consistent or inconsistent with available mainstream research, ANES 2020 has stereotype items about how "violent" different racial groups are in general, which I think survey researchers will be much less likely to perceive to be addressed in mainstream research and will instead use to measure racism.

---

The above examples of what I think are political biases are relatively minor in comparison to the value that ANES 2020 looks like it will provide. For what it's worth, I think that the ANES is preferable to the CCES Common Content.


This post discusses a commonly used "blatant" measure of dehumanization. Let me begin by proposing two blatant measures of dehumanization:

1. Yes or No?: Do you think that members of Group X are fully human?

2. On a scale in which 0 is not at all human and 10 is fully human, where would you rate members of Group X?

I would interpret a "No" response for the first measure and a response of any number lower than 10 for the second measure as dehumanization of members of Group X. If there is no reasonable alternate interpretation of these responses, then these are face-valid unambiguous measures of blatant dehumanization.

---

But neither above measure is the commonly used social science measure of blatant dehumanization. Instead, the commonly used "measure of blatant dehumanization" (from Kteily et al. 2015), referred to as the Ascent measure, is below:

[Ascent of Man image: a row of figures from ape to modern human, with a continuous slider for rating the average member of a group]

And here is how Kteily et al. 2015 described the ends of the tool (emphasis omitted):

Responses on the continuous slider were converted to a rating from 0 (least "evolved") to 100 (most "evolved")...

Note that participants are instructed to rate how "evolved" the participant considers the average member of a group to be and that these ratings are placed on a scale from "least evolved" to "most evolved", but these ratings are then interpreted as participant perceptions about the humanness of the group. This doesn't seem like a measure of blatant dehumanization if participants aren't asked to indicate their perceptions of how human the average member of a group is.

The Ascent measure is a blatant measure of dehumanization only if "human" and "evolved" are identical concepts, but these aren't identical concepts. It's possible to simultaneously believe that Bronze Age humans are fully human and that Bronze Age humans are less evolved than humans today. Moreover, I think that the fourth figure in the Ascent image is a Cro-Magnon that is classified by scientists as human, and Kteily et al. seem to agree:

...the image is used colloquially to highlight a salient distinction between early human ancestors and modern humans; that is, the full realization of cognitive ability and cultural expression

The perceived humanness of the fourth figure matters for understanding responses to the Ascent measure because much of the variation in responses occurs between the fourth figure and fifth figure (for example, see Table 1 of Kteily et al. 2015 and Note 1 below).

There is an important distinction between participants dehumanizing a group and participants rating one group lower than another group on a measure that participants interpret as indicating something other than "humanness", such as the degree of "realization of cognitive ability and cultural expression", especially because I don't think that humans need to have "the full realization of cognitive ability and cultural expression" in order to be fully human.

---

NOTES

1. The Jardina and Piston TESS study conducted in 2015 and 2016 with only non-Hispanic White participants had an Ascent measure in which 66% and 77% of unweighted responses for the respective targets of Blacks and Whites were in the 91-to-100 range.

2. I made some of the above points in 2015 in the ANES Online Commons. Lee Jussim raised issues discussed above in 2018, and I didn't find anything earlier.

3. More Twitter discussion of the Ascent measure: here with no reply, here with no reply, here with a reply, here with a reply.
