Below are leftover comments on publications that I read in 2021.

---

ONO AND ZILIS 2021

Politics, Groups, and Identities published Ono and Zilis 2021, "Do Americans perceive diverse judges as inherently biased?". Ono and Zilis 2021 indicated that "We test whether Americans perceive diverse judges as inherently biased with a list experiment". The statements to test whether Americans perceive diverse judges to be "inherently biased" were:

When a court case concerns issues like #metoo, some women judges might give biased rulings.

When a court case concerns issues like immigration, some Hispanic judges might give biased rulings.

Ono and Zilis 2021 indicated that "...by endorsing that idea, without evidence, that 'some' members of a group are inclined to behave in an undesirable way, respondents are engaging in stereotyping" (pp. 3-4).

But statements about whether *some* Hispanic judges and *some* women judges *might* be biased can't measure stereotypes or the belief that Hispanic judges or women judges are *inherently* biased. For example, a belief that *some* women *might* commit violence doesn't require the belief that women are inherently violent and doesn't even require the belief that women are on average more violent than men are.

---

Ono and Zilis 2021 claimed that "Hispanics do not believe that Hispanic judges are biased" (p. 4, emphasis in the original), but, among Hispanic respondents, the 95% confidence interval for agreement with the claim that Hispanic judges might be biased in cases involving issues like immigration did not cross zero in the multivariate analyses in Figure 1.

For Table 2 analyses without controls, the corresponding point estimate indicated that 25 percent of Hispanics agreed with the claim about Hispanic judges, but the ratio of the relevant coefficient to its standard error was 0.25/0.15, or about 1.67, depending on how the 0.25 and 0.15 were rounded. The corresponding p-value isn't less than p=0.05, but that doesn't support the conclusion that the percentage of Hispanics that agreed with the statement is zero.
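
For reference, a minimal R sketch of the implied two-sided p-value, taking the rounded 0.25 and 0.15 at face value and assuming a normal reference distribution:

z <- 0.25 / 0.15    # ratio of coefficient to standard error, about 1.67
2 * pnorm(-abs(z))  # two-sided p-value, about 0.096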

---

BERRY ET AL 2021

Politics, Groups, and Identities published Berry et al 2021, "White identity politics: linked fate and political participation". Berry et al 2021 claimed to have found "notable partisan differences in the relationship between racial linked fate and electoral participation for White Americans". But this claim is based on differences in the presence of statistical significance between estimates for White Republicans and estimates for White Democrats ("Linked fate is significantly and consistently associated with increased electoral participation for Republicans, but not Democrats", p. 528), instead of being based on statistical tests of whether estimates for White Republicans differ from estimates for White Democrats.
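
Berry et al 2021 could have tested the difference directly. A minimal R sketch of such a test, assuming independent subgroup estimates; the b and se values are hypothetical placeholders, not numbers from Berry et al 2021:

# Two-sided p-value for the difference between coefficients estimated in
# separate subgroup models, assuming independent samples (Clogg et al. 1995)
coef_diff_test <- function(b1, se1, b2, se2) {
  z <- (b1 - b2) / sqrt(se1^2 + se2^2)
  2 * pnorm(-abs(z))
}
coef_diff_test(b1 = 0.30, se1 = 0.10, b2 = 0.10, se2 = 0.12)  # hypothetical values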

The estimates in the Berry et al 2021 appendix that I highlighted in yellow appear to be incorrect, both because they are implausible and because the corresponding regression output has a positive estimate.

---

ARCHER AND CLIFFORD FORTHCOMING

In "Improving the Measurement of Hostile Sexism" (reportedly forthcoming at Public Opinion Quarterly), Archer and Clifford proposed a modified version of the hostile sexism scale that is item specific. For example, instead of measuring responses about the statement "Women exaggerate problems they have at work", the corresponding item-specific item measures responses to the question of "How often do women exaggerate problems they have at work?". Thus, to get the lowest score on the hostile sexism scale, instead of merely strongly disagreeing that women exaggerate problems they have at work, respondents must report the belief that women *never* exaggerate problems they have at work.

---

Archer and Clifford indicated that responses to some of their revised items are measured on a bipolar scale. For example, respondents can indicate that women are offended "much too often", "a bit too often", "about the right amount", "not quite often enough", or "not nearly often enough". So to get the lowest hostile sexism score, respondents need to indicate that women are wrong about how often they are offended, by not being offended enough.

Scott Clifford, co-author of the Archer and Clifford article, engaged me in a discussion about the item-specific scale (archived here). Scott suggested that the low end of the scale is more feminist, but dropped out of the conversation after I asked how much of an OLS coefficient for the proposed item-specific hostile sexism scale is due to hostile sexism and how much is due to feminism.

The portion of the hostile sexism measure that is sexism seems like something that should have been addressed in peer review, if the purpose of a hostile sexism scale is to estimate the effect of sexism and not to merely estimate the effect of moving from highly positive attitudes about women to highly negative attitudes about women.

---

VIDAL ET AL 2021

Social Science Quarterly published Vidal et al 2021, "Identity and the racialized politics of violence in gun regulation policy preferences". Appendix A indicates that, for the American National Election Studies 2016 Time Series Study, responses to the feeling thermometer about Black Lives Matter ranged from 0 to 999, with a standard deviation of 89.34, even though the ANES 2016 feeling thermometer for Black Lives Matter ran from 0 to 100, with 999 reserved for respondents who indicate that they don't know what Black Lives Matter is.
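
A minimal R sketch of the recode that appears to be missing, using a hypothetical variable name (blm_raw) for the raw thermometer responses:

blm_raw <- c(0, 60, 100, 999)  # hypothetical responses; 999 = doesn't know the group
blm <- ifelse(blm_raw >= 0 & blm_raw <= 100, blm_raw, NA)  # treat 999 as missing
sd(blm, na.rm = TRUE)  # standard deviation on the intended 0-to-100 scale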

---

ARORA AND STOUT 2021

Research & Politics published Arora and Stout 2021 "After the ballot box: How explicit racist appeals damage constituents views of their representation in government", which noted that:

The results provide evidence for our hypothesis that exposure to an explicitly racist comment will decrease perceptions of interest representation among Black and liberal White respondents, but not among moderate and conservative Whites.

This is, as far as I can tell, a claim that the effect among Black and liberal White respondents will differ from the effect among moderate and conservative Whites, but Arora and Stout 2021 did not report a test of whether these effects differ, although Arora and Stout 2021 did discuss statistical significance for each of the four groups.
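
A minimal R sketch of a test of whether the effects differ, using simulated data with hypothetical variable names (interest_rep for the outcome, treat for the treatment indicator, group for the four respondent groups):

set.seed(1)
dat <- data.frame(
  interest_rep = rnorm(400),
  treat        = rbinom(400, 1, 0.5),
  group        = factor(sample(c("Black", "liberal White", "moderate White",
                                 "conservative White"), 400, replace = TRUE))
)
# The treat:group coefficients test whether the treatment effect differs
# across groups, rather than testing each group's effect separately
summary(lm(interest_rep ~ treat * group, data = dat))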

Moreover, Arora and Stout 2021 footnote 4 indicates that:

In the supplemental appendix, we confirm that explicit racial appeals have a unique effect on interest representation and are not tied to other candidate evaluations such as vote choice.

But the estimated effect for interest representation (Table 1) was -0.06 units among liberal White respondents (with a "+" indicator for statistical significance), which is the same reported number as the estimated effect for vote choice (Table A5): -0.06 units among liberal White respondents (with a "+" indicator for statistical significance).

None of the other estimates in Table 1 or Table A5 have an indicator for statistical significance.

---

Arora and Stout 2021 repeatedly labeled as "explicitly racist" the statement that "If he invited me to a public hanging, I'd be on the front row", but it's not clear to me how that statement is explicitly racist. The Data and Methodology section indicates that "Though the comment does not explicitly mention the targeted group...". Moreover, the Conclusion of Arora and Stout 2021 indicates that...

In spite of Cindy Hyde-Smith's racist comments during the 2018 U.S. Senate election which appeared to show support for Mississippi's racist and violent history, she still prevailed in her bid for elected office.

... and "appeared to" isn't language that I would expect from an explicit statement.

---

CHRISTIANI ET AL 2021

The Journal of Race, Ethnicity, and Politics published Christiani et al 2021 "Masks and racial stereotypes in a pandemic: The case for surgical masks". The abstract indicates that:

...We find that non-black respondents perceive a black male model as more threatening and less trustworthy when he is wearing a bandana or a cloth mask than when he is not wearing his face covering—especially those respondents who score above average in racial resentment, a common measure of racial bias. When he is wearing a surgical mask, however, they do not perceive him as more threatening or less trustworthy. Further, it is not that non-black respondents find bandana and cloth masks problematic in general. In fact, the white model in our study is perceived more positively when he is wearing all types of face coverings.

Those are the within-model patterns, but it's also interesting to compare ratings of the two models in the no-mask control condition.

Appendix Table B.1 indicates that, on average, non-Black respondents rated the White model more threatening and more untrustworthy compared to the Black model: on a 0-to-1 scale, among non-Black respondents, the mean ratings of "threatening" were 0.159 for the Black model and 0.371 for the White model, and the mean ratings of "untrustworthy" were 0.128 for the Black model and 0.278 for the White model. These Black/White gaps were about five times the standard errors.

Christiani et al 2021 claimed that this baseline difference does not undermine their results:

Fortunately, the divergent evaluations of our two models without their masks on do not undermine either of the main thrusts of our analyses. First, we can still compare whether subjects perceive the black model differently depending on what type of mask he is wearing...Second, we can still assess whether people resolve the ambiguity associated with seeing a man in a mask based on the race of the wearer.

But I'm not sure that it's true that "divergent evaluations of our two models without their masks on do not undermine either of the main thrusts of our analyses".

I tweeted a question to one of the Christiani et al 2021 co-authors, including the handles of two other co-authors, asking whether it is plausible that masks increase the perceived threat of persons who look relatively nonthreatening without a mask but decrease the perceived threat of persons who look relatively threatening without a mask. That phenomenon would explain the racial difference in patterns described in the abstract, given that the White model in the control was perceived to be more threatening than the Black model in the control.

No co-author has yet responded to defend their claim.

---

Below are the mean ratings on the 0-to-1 "threatening" scale for models in the "no mask" control group, among non-Black respondents by high and low racial resentment, based on Tables B.2 and B.3:

Non-Black respondents with high racial resentment
0.331 mean "threatening" rating of the White model
0.376 mean "threatening" rating of the Black model

Non-Black respondents with low racial resentment
0.460 mean "threatening" rating of the White model
0.159 mean "threatening" rating of the Black model

---

VICUÑA AND PÉREZ 2021

Politics, Groups, and Identities published Vicuña and Pérez 2021, "New label, different identity? Three experiments on the uniqueness of Latinx", which claimed that:

Proponents have asserted, with sparse empirical evidence, that Latinx entails greater gender-inclusivity than Latino and Hispanic. Our results suggest this inclusivity is real, as Latinx causes individuals to become more supportive of pro-LGBTQ policies.

The three studies discussed in Vicuña and Pérez 2021 had these prompts, with the bracketed text indicating the differences in treatments across the four conditions:

Using the spaces below, please write down three (3) attributes that make you [a unique person/Latinx/Latino/Hispanic]. These could be physical features, cultural practices, and/or political ideas that you hold [as a member of this group].

If the purpose is to assess whether "Latinx" differs from "Latino" and "Hispanic", I'm not sure of the value of the "a unique person" treatment.

Discussing their first study, Vicuña and Pérez 2021 reported the p-value for the effect of the "Latinx" treatment relative to the "unique person" treatment (p<.252) and reported the p-values for the effect of the "Latinx" treatment relative to the "Latino" treatment (p<.046) and the "Hispanic" treatment (p<.119). Vicuña and Pérez 2021 reported all three corresponding p-values when discussing their second study and their third study.

But, discussing their meta-analysis of the three studies, Vicuña and Pérez 2021 reported one p-value, which is presumably for the effect of the "Latinx" treatment relative to the "unique person" treatment.
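
For what it's worth, a minimal R sketch of fixed-effect (inverse-variance) pooling of one contrast across three studies, with hypothetical estimates and standard errors:

b  <- c(0.10, 0.08, 0.12)  # hypothetical per-study estimates for one contrast
se <- c(0.05, 0.06, 0.07)  # hypothetical per-study standard errors
w  <- 1 / se^2             # inverse-variance weights
b_pooled  <- sum(w * b) / sum(w)
se_pooled <- sqrt(1 / sum(w))
2 * pnorm(-abs(b_pooled / se_pooled))  # one pooled two-sided p-value

Pooling of this sort yields one p-value per contrast, so the meta-analysis could in principle have reported pooled p-values for all three comparisons.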

I tweeted a request to the authors on December 20 to post their data, but I haven't received a reply yet.

---

KIM AND PATTERSON JR. 2021

Political Science & Politics published Kim and Patterson Jr. 2021, "The Pandemic and Gender Inequality in Academia", which reported on tweets of tenure-track political scientists in the United States.

Kim and Patterson Jr. 2021 Figure 2 indicates that, in February 2020, the percentage of work-related tweets was about 11 percent for men and 11 percent for women, and that, shortly after Trump declared a national emergency, these percentages had dropped to about 8 percent and 7 percent respectively. Table 2 reports difference-in-difference results indicating that the pandemic-related decrease in the percentage of work-related tweets was 1.355 percentage points larger for women than for men.
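
A minimal R sketch of the difference-in-differences setup implied by Table 2, using simulated data with hypothetical variable names (pct_work for the percentage of work-related tweets, pandemic for the post-emergency indicator):

set.seed(1)
tweets <- data.frame(
  pct_work = rnorm(1000, mean = 10, sd = 3),
  female   = rbinom(1000, 1, 0.5),
  pandemic = rbinom(1000, 1, 0.5)
)
# The female:pandemic coefficient is the difference-in-differences estimate
summary(lm(pct_work ~ female * pandemic, data = tweets))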

That seems like a relatively small gender inequality, in both size and importance, and I'm not sure that this gender inequality in the percentage of work-related tweets offsets the advantage of having the 31.5k-follower @womenalsoknow account tweet about one's research.

---

The abstract of Kim and Patterson Jr. 2021 refers to "tweets from approximately 3,000 political scientists". Table B1 in Appendix B has a sample size of 2,912, with a larger number of women than men at the rank of assistant professor, at the rank of associate professor, and at the rank of full professor. The APSA dashboard indicates that women are 37% of members of the American Political Science Association and that 79.5% of APSA members are in the United States, so I think that Table B1 suggests that a higher percentage of female than male political scientists might be on Twitter.

Oddly, though, when discussing the representativeness of this sample, Kim and Patterson Jr. 2021 indicated that (p. 3):

Yet, relevant to our design, we found no evidence that female academics are less likely to use Twitter than male colleagues conditional on academic rank.

That's true about not being *less* likely, but my analysis of the data for Kim and Patterson Jr. 2021 Table 1 indicated that, controlling for academic rank, female political scientists from top 50 departments were about 5 percentage points *more* likely to be on Twitter than male political scientists from top 50 departments.

Table 1 of Kim and Patterson Jr. 2021 is limited to the 1,747 tenure-track political scientists in the United States from top 50 departments. I'm not sure why Kim and Patterson Jr. 2021 didn't use the full N=2,912 sample for the Table 1 analysis.

---

My analysis indicated that the female/male gaps in the sample were as follows: 2.3 percentage points (p=0.655) among assistant professors, 4.5 percentage points (p=0.341) among associate professors, and 6.7 percentage points (p=0.066) among full professors, with an overall 5 percentage point female/male gap (p=0.048) conditional on academic rank.
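
A minimal R sketch of the overall-gap estimate, using simulated data with hypothetical variable names (on_twitter as a 0/1 indicator, rank as a factor):

set.seed(1)
top50 <- data.frame(
  on_twitter = rbinom(1747, 1, 0.4),
  female     = rbinom(1747, 1, 0.4),
  rank       = factor(sample(c("assistant", "associate", "full"), 1747, replace = TRUE))
)
# Linear probability model: the female coefficient is the female/male gap
# in Twitter use conditional on academic rank
summary(lm(on_twitter ~ female + rank, data = top50))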

---

Kim and Patterson Jr. 2021 suggested a difference in the effect by rank:

Disaggregating these results by academic rank reveals an effect most pronounced among assistants, with significant—albeit smaller—effects for associates. There is no differential effect on work-from-home at the rank of full professor, which is consistent with our hypothesis that these gaps are driven by the increased obligations placed on women who are parenting young children.

But I don't see a test for whether the coefficients differ from each other. For example, in Table 2 for work-related tweets, the "Female * Pandemic" coefficient is -1.188 for associate professors and is -0.891 for full professors, for a difference of 0.297, relative to the respective standard errors of 0.579 and 0.630.
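
Using the reported numbers, a minimal R sketch of that test, assuming independent subgroup estimates:

z <- (-1.188 - (-0.891)) / sqrt(0.579^2 + 0.630^2)  # about -0.35
2 * pnorm(-abs(z))                                  # two-sided p-value, about 0.73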

---

Table 1 of Kim and Patterson Jr. 2021 reported a regression predicting whether a political scientist in a top 50 department was a Twitter user, and the p-values are above p=0.05 for all coefficients for "female" and for all interactions involving "female". That might be interpreted as a lack of evidence for a gender difference in Twitter use among these political scientists, but the interaction terms don't permit a clear inference about an overall gender difference.

For example, associate professor is the omitted category of rank in the regression, so the 0.045 non-statistically significant "female" coefficient indicates only that female associate professor political scientists from top 50 departments were 4.5 percentage points more likely to be a Twitter user than male associate professor political scientists from top 50 departments.

And the non-statistically significant "Female X Assistant" coefficient doesn't indicate whether female assistant professors differ from male assistant professors: instead, the non-statistically significant "Female X Assistant" coefficient indicates only that the associate/assistant difference among men in the sample does not differ at p<0.05 from the associate/assistant difference among women in the sample.
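
A minimal R sketch of one way to recover the female/male gap at each rank from the same interaction model, reusing the hypothetical data from the sketch above:

# With assistant professors as the omitted rank, the female coefficient
# becomes the female/male gap among assistant professors
summary(lm(on_twitter ~ female * relevel(rank, ref = "assistant"), data = top50))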

Link to the data. R code for my analysis. R output from my analysis.

---

LEFTOVER PLOT

I had the plot below for a draft post that I hadn't yet published:

Item text: "For each of the following groups, how much discrimination is there in the United States today?" [Blacks/Hispanics/Asians/Whites]. Substantive response options were: A great deal, A lot, A moderate amount, A little, and None at all.

Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

Stata and R code. Dataset for the plot.

---

JARDINA AND PISTON 2021

The British Journal of Political Science published Jardina and Piston 2021 "The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions".

---

1.

Jardina and Piston 2021 used the "Ascent of Man" measure of dehumanization, which I have discussed previously. Jardina and Piston 2021 subtracted participant responses to the 0-to-100 measure of perceptions of how evolved Blacks are from participant responses to the 0-to-100 measure of perceptions of how evolved Whites are, and placed this difference on a 0-to-1 scale.

Jardina and Piston 2021 placed this 0-to-1 measure of dehumanization into an OLS regression with controls and halved the resulting coefficient, such as the 0.60 in Table 1 (for which the p-value is less than p=0.001). For that coefficient, moving from the neutral point on the dehumanization scale to the highest measured level of dehumanizing Blacks relative to Whites accounted for 0.30 points on the outcome variable scale, which for this estimate was a 0-to-100 feeling thermometer rating about Donald Trump placed on a 0-to-1 scale.
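
A minimal R sketch of the scale construction as I understand it, with hypothetical names for the two 0-to-100 "evolved" ratings:

evolved_whites <- c(100, 80, 50)  # hypothetical ratings of how evolved Whites are
evolved_blacks <- c(50, 80, 100)  # hypothetical ratings of how evolved Blacks are
# 0.5 = no difference; 1.0 = maximum dehumanization of Blacks relative to
# Whites; 0.0 = maximum dehumanization of Whites relative to Blacks
dehum01 <- (evolved_whites - evolved_blacks + 100) / 200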

However, this research design means that 0.30 points on a 0-to-1 scale is also the corresponding estimate of how much dehumanizing about Whites relative to Blacks affected feeling thermometer ratings about Donald Trump. Jardina and Piston 2021 thus did not permit the estimate of the marginal effect of dehumanizing Blacks to differ from the estimate of the marginal effect of dehumanizing Whites.

I discussed this before (1, 2), but it's often better not to assume a linear association for continuous predictors (see Hainmueller et al. 2019).
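
A minimal R sketch of one way to relax that constraint, splitting the dehumanization measure at its neutral point so that each direction gets its own slope (simulated data, hypothetical variable names, controls omitted for brevity):

set.seed(1)
dat <- data.frame(dehum01 = runif(500), trump_ft01 = runif(500))
dat$above <- pmax(dat$dehum01 - 0.5, 0)  # dehumanizing Blacks relative to Whites
dat$below <- pmin(dat$dehum01 - 0.5, 0)  # dehumanizing Whites relative to Blacks
summary(lm(trump_ft01 ~ above + below, data = dat))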

Figure 3 of Jardina and Piston 2021 censors the estimated effect of dehumanizing Whites, by plotting predicted probabilities of a Trump vote among Whites but restricting the range of dehumanization to run from neutral (0.5 on the dehumanization measure) to most dehumanization about Blacks (1.0 on the measure).

---

2.

Jardina and Piston 2021 claimed that "Finally, our findings serve as a warning about the nature of Whites' racial attitudes in the contemporary United States" (p. 20). But Jardina and Piston 2021 did not report any evidence that Whites' attitudes in this area differ from non-Whites' attitudes in this area. That seems like a relevant question for researchers interested in understanding racial attitudes.

If I'm reading page 9 correctly, Jardina and Piston 2021 reported on original survey data from a 2016 YouGov two-wave panel of 600 non-Hispanic Whites, a 2016 Qualtrics survey of 500 non-Hispanic Whites, a 2016 GfK survey of 2,000 non-Hispanic Whites, and another 2016 YouGov two-wave panel of 600 non-Hispanic Whites.

The funding statement in Jardina and Piston 2021 acknowledges only Duke University and Boston University. That's a lot of internal resources for surveys conducted in a single year, and I don't think that the limitation of the analysis to Whites can reasonably be attributed to a lack of resources.

The Qualtrics_BJPS.dta dataset at the Dataverse page for Jardina and Piston 2021 has cases for 1,125 Whites, 242 Blacks, 88 Asians, 45 Native Americans, and 173 coded Other, with respective non-Latino cases of 980, 213, 83, 31, and 38. The Dataverse page doesn't have a codebook for that dataset, and the relevant variable names in that dataset aren't clear to me, but I'll plan to post a follow-up here if I get sufficient information to analyze responses from non-White participants.

---

3.

Jardina and Piston 2021 suggested (p. 4) that:

We also suspect that recent trends in the social and natural sciences are legitimizing beliefs about biological differences between racial groups in ways that reinforce a propensity to dehumanize Black people.

This passage did not mention the Jardina and Piston 2015/6 TESS experiment in which participants were assigned to a control condition, or a condition with a reading entitled "Genes May Cause Racial Difference in Heart Disease", or a condition with a reading entitled "Social Conditions May Cause Racial Difference in Heart Disease".

My analysis of data for that experiment found a p<0.01 difference between treatment groups in mean responses to an item about whether there are biological differences between Blacks and Whites, which suggests that the treatment worked. But the treatment didn't produce a detectable effect on key outcomes, according to the description of results on the page for the Jardina and Piston 2015/6 TESS experiment, which indicates that "Experimental conditions are not statistically associated with the distribution of responses to the outcome variables". This null result seems to be relevant for the above quoted suspicion from Jardina and Piston 2021.

---

4.

Jardina and Piston 2021 indicated that "Dehumanization has serious consequences. It places the targets of these attitudes outside of moral consideration, ..." (p. 6). But the Jardina and Piston proposal for the 2015/6 TESS experiment had proposed that some participants be exposed to a treatment that Jardina and Piston hypothesized would increase participant "biological racism", to use a term from their proposal.

Selected passages from the proposal are below:

We hypothesize that the proportion of respondents rating whites as more evolved than blacks is highest in the Race as Genetics Condition, lower in the Control Condition, and lowest in the Race as Social Construction Condition.

...Our study will also inform scholarship on news media communication, demonstrating that ostensibly innocuous messages about race, health, and genetics can have pernicious consequences.

Exposing some participants to a treatment that the researchers hypothesized as having "pernicious consequences" seems like an interesting ethical issue that the proposal didn't discuss.

Moreover, like some other research that uses the Ascent of Man measure of dehumanization, the Jardina and Piston 2015/6 TESS experiment included the statement that "People can vary in how human-like they seem". I wonder which people this statement is meant to refer to. Nothing in the debriefing indicated that this statement was deception.

---

5.

The dataset for the Jardina and Piston 2015/6 TESS experiment includes comments from participants. I thought that comments from participants with IDs 499 and 769 were worth highlighting (the statements were cut off in the dataset):

I disliked this survey as you should ask the same questions about whites. I was not willing to say blacks were not rational but whites are not rational either. But to avoid thinking I was prejudice I had to give a higher rating. All humans a

Black people are not less evolved, 'less evolved' is a meaningless term as evolution is a constant process and the only difference is what particular adaptations a group has. I don't like to claim certainty about things of which I am unsure, a

---

NOTES

1. The Table 3 header for Jardina and Piston 2021 indicates that vote choice is the outcome for that table, but the corresponding note indicates that "Higher values of the dependent variable indicate greater warmth toward Trump on the 101-point feeling thermometer". Moreover, Figure 3 of Jardina and Piston 2021 plots predicted probabilities of a vote for Trump, but the figure note indicates that the figure was "based on Table 4, Model 4", which is instead about warmth toward Obama.

2. Jardina and Piston 2021 reported results for participant responses about the "dehumanizing characteristics" of "savage", "barbaric", and "lacking self-restraint, like animals", so I checked how responses to the "violent" stereotype item associated with two-party presidential vote choice in data from the ANES 2020 Time Series Study.

Results indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Blacks as being more violent on average than Whites were more likely to vote for Donald Trump net of controls (p<0.05). But results also indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Whites as being more violent on average than Blacks were less likely to vote for Donald Trump net of controls (p<0.05). See lines 105 through 107 in the output.

3. Jardina and Piston 2021 reported that, in their 2016b YouGov survey, 42% of Whites rated Whites as more evolved than Blacks (pp. 9-10). For a comparison, the Martherus et al. 2019 study about Democrat and Republican dehumanization of outparty members reported dehumanization percentages of "nearly 77%" (2018 SSI study) and "just over 80%" (2018 CCES study).

4. Data sources:

American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

Ashley Jardina and Spencer Piston. 2015/6. Data for: "Explaining the Prevalence of White Biological Racism against Blacks". Time-sharing Experiments for the Social Sciences. https://www.tessexperiments.org/study/pistonBR61

Ashley Jardina and Spencer Piston. 2021. Replication Data for: The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions, https://doi.org/10.7910/DVN/A3XIFC, Harvard Dataverse, V1, UNF:6:nNg371BCnGaWtRyNdu0Lvg== [fileUNF]

---

MASON ET AL 2021

The recent Mason et al. Monkey Cage post claimed that:

We found about 30 percent of Americans surveyed in 2011 reported feelings of animosity towards African Americans, Hispanics, Muslims, and the LGBTQ community. These individuals make up our MAGA faction.

But much less than 30% of Americans reported animus toward all four of these groups. In unweighted analyses using the 2011 VOTER data, the percentage that rated the group under 50 on a 0-to-100 feeling thermometer was 13% for Blacks, 17% for Latinos, 46% for Muslims, and 26% for gays and lesbians. Only about 3% rated all four groups under 50.

So how did Mason et al. get 30%? Based on the Mason et al. figure note (and my check in Stata), 30% is the percentage of respondents whose average rating across the four groups is under 50.
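
A minimal R sketch of the difference between the two operationalizations, with hypothetical thermometer values (one respondent per row):

ft <- cbind(ft_blacks  = c(75, 30, 60),
            ft_latinos = c(80, 20, 55),
            ft_muslims = c( 0, 10,  5),
            ft_gays    = c(90, 40, 45))
mean(rowMeans(ft) < 50)       # share whose average rating is under 50: 2 of 3
mean(apply(ft < 50, 1, all))  # share rating all four groups under 50: 1 of 3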

But I don't think that the average across variables should be used to describe responses to individual variables. I think that it would be misleading, for instance, to describe the respondent who rated Blacks at 75 and Muslims at 0 as reporting animosity toward Blacks and Muslims, especially given that the respondent rated Whites at 71 and Christians at 0.

---

Mason et al. write that:

Our research does suggest that, as long as this MAGA faction exists, politicians may be tempted to appeal to it, hoping to repeat Trump's success. In fact, using inflammatory and divisive appeals would be a rational campaign strategy, since they can animate independent voters who dislike these groups.

It seems reasonable to be concerned about politicians appealing to intolerant people, but I'm not sure that it's reasonable to limit this concern about intolerance to the MAGA faction.

Below are data from the ANES 2020 Time Series Study on the percentage of the U.S. population that rated a set of target groups under 50 on a 0-to-100 feeling thermometer, disaggregated by partisanship:

So the coalitions that reported cold ratings about Hispanics, Blacks, gay men and lesbians, Muslims, transgender people, and illegal immigrants are disproportionately Republican (compared to Democratic), and the coalitions that reported cold ratings about rural Americans, Whites, Christians, and Christian fundamentalists are disproportionately Democratic (compared to Republican).

Democrats were more common in the data than Republicans were, so the plot above doesn't permit direct comparison of the blue bars to the red bars to assess relative frequency of cold ratings by party. To permit that assessment, the plot below indicates the percentage of Democrats and the percentage of Republicans that reported a cold rating of the indicated target group:

---

Mason et al. end their Monkey Cage post with:

But identifying this MAGA faction as both separate from and related to partisan politics can help us better understand the real conflict. When a small, intolerant faction of citizens wields disproportionate influence over nationwide governance, democracy erodes. Avoiding discussion about this group only protects its power.

But the Mason et al. Monkey Cage post names only one intolerant group -- the MAGA faction -- and avoids naming the group that is intolerant of Whites and Christians, which, by the passage above, presumably protects the power of that other intolerant group.

---

NOTES

1. Data citation: American National Election Studies. 2021. ANES 2020 Time Series Study Full Release [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

2. Link to the Mason et al. 2021 APSR letter.

3. The 2011 VOTER survey thermometer items directed respondents to "Click on the thermometer to give a rating". If this means that respondents did something like moving a widget instead of inputting a numeric rating, then I think that that might overestimate cold ratings, if some respondents try to rate at 50, instead move to a bit under 50, and then figure that 49 or so is close enough.

But this might not be a large bias: for example, the thermometer about Blacks had 27, 44, 745, 376, and 212 responses for ratings of 48 through 52, respectively.

4. Draft plots.

5. Stata code for the analyses, plus: tab pid3_2011 if ft_white_2011==71 & ft_christian_2011==0 & ft_black_2011==75 & ft_muslim_2011==0

6. R data and code for the "three color" barplot.

7. R data and code for the "back-to-back" barplot.

8. R data and code for the "full sample" barplot.

9. R data and code for the "two panel" barplot.

---

PUBLIC PERCEPTIONS OF HUMAN EVOLUTION PROJECT

I posted to OSF the data, code, and a report for my unpublished "Public perceptions of human evolution as explanations for racial group differences" [sic] project, based on a survey that YouGov ran for me in 2017 using funds from Illinois State University New Faculty Start-up Support and the Illinois State University College of Arts and Sciences. The report describes results from preregistered analyses, but below I'll highlight key results.

---

The key item asked participants whether God's design and/or evolution, or neither, helped cause a particular racial difference:

Some racial groups have [...] compared to other racial groups. Select ALL of the reasons below that you think help cause this difference:
□ Differences in how God designed these racial groups
□ Genetic differences that evolved between these racial groups
○ None of the above

Participants were randomly assigned to receive one racial difference in the part of the item marked [...] above. Below are the racial differences asked about, along with the percentage assigned to that item who selected only the "evolved" response option:

70% a greater risk for certain diseases
55% darker skin on average
54% more Olympic-level runners
49% different skull shapes on average
26% higher violent crime rates on average
24% higher math test scores on average
21% lower math test scores on average
18% lower violent crime rates on average

---

Another item on the survey (discussed at this post) asked about evolution. The reports that I posted for these items removed all or most of the discussion and citation of literature from the manuscripts that I had submitted to journals but that were rejected, in case I can use that material for a later manuscript.

---

WETTS AND WILLER 2018

Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, like in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this holds net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But using weights for the full samples compares respondents in 2004 and in 2008 who were only in the face-to-face mode, with respondents in 2012, some of whom were in the face-to-face mode and some of whom were in the internet mode.

Using weights only for the face-to-face mode, the p-value was not under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, net of controls and without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
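
A minimal R sketch of the face-to-face-only comparison, using the survey package and simulated data with hypothetical variable names (resent01 for the 0-to-1 racial resentment scale, wt_ftf for face-to-face weights):

library(survey)
set.seed(1)
cdf <- data.frame(year     = rep(c(2004, 2008, 2012), each = 200),
                  resent01 = runif(600),
                  wt_ftf   = runif(600, 0.5, 1.5))
des <- svydesign(ids = ~1, weights = ~wt_ftf, data = cdf)
svyby(~resent01, ~year, des, svymean)  # weighted mean resentment by year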

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to the survey mode difference of -2.610.

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.

---

SCHUTTEN ET AL 2021

Criminology recently published Schutten et al 2021 "Are guns the new dog whistle? Gun control, racial resentment, and vote choice".

---

I'll focus on experimental results from Schutten et al 2021 Figure 1. Estimates for respondents low in racial resentment indicated a higher probability of voting for a hypothetical candidate:

[1] when the candidate was described as Democrat, compared to when the candidate was described as a Republican,

[2] when the candidate was described as supporting gun control, compared to when the candidate was described as having a policy stance on a different issue, and

[3] when the candidate was described as not being funded by the NRA, compared to when the candidate was described as being funded by the NRA.

Patterns were reversed for respondents high in racial resentment. The relevant 95% confidence intervals did not overlap for five of the six patterns, with the exception being for the NRA funding manipulation among respondents high in racial resentment; eyeballing, it doesn't look like the p-value is under p=0.05 for that estimated difference.
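
For reference, non-overlap of 83.4% confidence intervals approximates a p<0.05 test of the difference between two independent estimates, which is why such intervals better support this kind of eyeballing. A minimal R sketch with hypothetical values:

ci_834 <- function(b, se) b + c(-1, 1) * qnorm(1 - 0.166 / 2) * se
ci_834(b = 0.40, se = 0.10)  # hypothetical estimate and standard error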

---

For the estimate that participants low in racial resentment were less likely to vote for a hypothetical candidate described as being funded by the NRA than for a hypothetical candidate described as not being funded by the NRA, Schutten et al 2021 suggested that this might reflect a backlash against "the use of gun rights rhetoric to court prejudiced voters" (p. 20). But, presuming that the content of the signal provided by the mention of NRA funding is largely or completely racial, the "backlash" pattern is also consistent with a backlash against support of a constitutional right that many participants low in racial resentment might perceive to be disproportionately used by Whites and/or rural Whites.

Schutten et al 2021 conceptualized participants low in racial resentment as "nonracists" (p. 3) and noted that "recent evidence suggests that those who score low on the racial resentment scale 'favor' Blacks (Agadjanian et al., 2021)" (p. 21). But I don't know why the quotation marks around "favor" are necessary, given that there is good reason to characterize a nontrivial percentage of participants low in racial resentment as biased against Whites. For example, my analysis of data from the ANES 2020 Time Series Study indicated that about 40% to 45% of Whites (and about 40% to 45% of the general population) that fell at least one standard deviation below the mean level of racial resentment rated Whites lower on the 0-to-100 feeling thermometers than they rated Blacks, and Hispanics, and Asians/Asian-Americans. (This is not merely rating Whites lower on average than Blacks, Hispanics, and Asians/Asian-Americans; it is rating Whites lower than each of these three groups.)
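
A minimal R sketch of that classification, with hypothetical thermometer values (the second respondent rates Whites below one group but not below each of the three, so the flag is FALSE):

ft_whites    <- c(40, 60)
ft_blacks    <- c(70, 50)
ft_hispanics <- c(60, 70)
ft_asians    <- c(55, 30)
# TRUE only when Whites are rated below each of the three other groups
ft_whites < ft_blacks & ft_whites < ft_hispanics & ft_whites < ft_asians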

Schutten et al 2021 indicated that (p. 4):

Importantly, dog whistling is not an attempt to generate racial prejudice among the public but to arouse and harness latent resentments already present in many Americans (Mendelberg, 2001).

Presumably, this dog whistling can activate the racial prejudice against Whites that many participants low in racial resentment have been comfortable expressing on feeling thermometers.

---

NOTES

1. Schutten et al 2021 claimed that (p. 8):

If racial resentment is primarily principled conservatism, its effect on support for government spending should not depend on the race of the recipient.

But if racial resentment were, say, 70% principled ideology and 30% racial prejudice, racial resentment should still associate with racial discrimination due to the 30%.

And I think that it's worth considering whether racial resentment should also be described as being influenced by progressive ideology. If principled conservatism can cause participants to oppose special favors for Blacks, presumably a principled progressivism can cause participants to support special favors for Blacks. If so, it seems reasonable to also conceptualize racial resentment as the merging of principled progressivism and prejudice against Whites, given that both could presumably cause support for special favors for Blacks.

2. Schutten et al 2021 claimed that (p. 16):

The main concern about racial resentment is that it is a problematic measure of racial prejudice among conservatives but a suitable measure among nonconservatives (Feldman & Huddy, 2005).

But I think that major concerns about racial resentment are present even among nonconservatives. As I indicated in a prior blog post, I think that the best case against racial resentment has two parts. First, racial resentment captures racial attitudes in a way that is difficult if not impossible to disentangle from nonracial attitudes; that concern remains among nonconservatives, such as the possibility that a nonconservative would oppose special favors for Blacks because of a nonracial opposition to special favors.

Second, many persons at low racial resentment have a bias against Whites, and limiting the sample to nonconservatives if anything makes it more likely that the estimated effect of racial resentment is capturing the effect of bias against Whites.

3. Figure 1 would have provided stronger evidence about p<0.05 differences between estimates if it had plotted 83.4% confidence intervals, as sketched above.

4. [I deleted this comment because Justin Pickett (co-author on Schutten et al 2021) noted in review of a draft version of this post that this comment suggested an analysis that was reported in Schutten et al 2021, that an analysis be limited to participants low in racial resentment and an analysis be limited to participants high in racial resentment. Thanks to Justin for catching that.]

5. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

---

COMMENT ON SCHNEIDER AND GONZALEZ 2021

My new publication is a technical comment on the Schneider and Gonzalez 2021 article "Racial resentment predicts eugenics support more robustly than genetic attributions".

The experience with the journal Personality and Individual Differences was great. The journal has a correspondence section that publishes technical comments and other types of correspondence, which seems like a great way to publicly discuss research and to hopefully improve research. The authors of the article that I commented on were also great.

---

My comment highlighted a few things about the article, and I think that two of the comments are particularly generalizable. One comment, which I discussed in prior blog posts [1, 2], concerns the practice of comparing the predictive power of factors that are not or might not be equally well measured. I don't think that is a good idea, because measurement error attenuates estimates, so the factor measured with more error can appear to be the weaker predictor even if it isn't.

The other comment, which I discussed in prior blog posts [1, 2], concerns analyses that model an association as constant. I think that it is more informative to not model key associations as constant, and Figure 1 of the comment illustrates an example of how this can provide useful information.

There is more in the comment. Here is a 50-day share link for the comment.
