1.

In 2003, Melissa V. Harris-Lacewell wrote (p. 222):

The defining works of White racial attitudes fail to grapple with the complexities of African American political thought and life. In these studies, Black people are a static object about which White people form opinions.

Researchers still sometimes design studies or release data in ways that make it difficult to analyze responses from Black participants, or don't report interesting results for Black participants. Helping to address this, Darren W. Davis and David C. Wilson have published a new book, Racial Resentment in the Political Mind (RRPM), with an entire chapter on African Americans' resentment toward Whites.

RRPM is a contribution to research on Black political attitudes, and its discussion of the measurement of Whites' resentment toward Blacks is valuable, especially for readers who don't realize that standard measures of "racial resentment" aren't good measures of resentment. But below I discuss some elements of the book that I consider flawed.

---

2.

RRPM draws, at a high level, a parallel between Whites' resentment toward Blacks and Blacks' resentment toward Whites (p. 242):

In essence, the same model of a just world and appraisal of deservingness that guides Whites' racial resentment also guides African Americans' racial resentment.

It seems reasonable to use the same model for resentment toward Whites and resentment toward Blacks. But RRPM proposes different items for the battery measuring resentment toward Blacks and for the battery measuring resentment toward Whites, and I think that separate batteries for each type of resentment will undercut comparisons of the size of the effects of the two resentments, because one battery might capture true resentment better than the other.

Thus, especially for general surveys such as the ANES that presumably can't or won't devote space to resentment batteries tailored to each racial group, it might be better to measure resentment toward various groups with parallel items, such as agreement or disagreement with statements such as "Whites have gotten more than they deserve" and "Blacks have gotten more than they deserve". Parallel items would presumably permit more valid comparisons of the estimated effects of resentment toward different groups than would comparisons across batteries of different items.

---

3.

RRPM suggests that all resentment batteries not be given to all respondents (p. 241):

A clear outcome of this chapter is that African Americans should not be presented the same classic racial resentment survey items that Whites would answer (and perhaps vice versa)...

And from page 30:

African Americans and Whites have different reasons to be resentful toward each other, and each group requires a unique set of measurement items to capture resentment.

But not giving participants items measuring resentment of their own racial group doesn't seem like a good idea: a White participant could think that Whites have received more than they deserve on average, and a Black participant could think that Blacks have received more than they deserve on average. If resentment of one's own racial group influences a participant's attitudes about political phenomena, then omitting measures such as Whites' resentment of Whites could plausibly bias estimates of the effect of resentment.

---

RRPM discusses asking Blacks to respond to racial resentment items toward Blacks: "No groups other than African Americans seem to be asked questions about self-hate" (p. 249). RRPM elsewhere qualifies this with "rarely": "That is, asking African Americans to answer questions about disaffection toward their own group is a task rarely asked of other groups"  (p. 215).

The ANES 2016 pilot study did ask White participants about White guilt (e.g., "How guilty do you feel about the privileges and benefits you receive as a white American?") without asking any other racial groups about parallel guilt. Moreover, the CCES had (in 2016 and 2018 at least) an agree/disagree item asked of Whites and others that "White people in the U.S. have certain advantages because of the color of their skin", with no equivalent item about color-of-skin advantages for people who are not White.

But even if Black participants disproportionately receive resentment items directed at Blacks, the better way to address this inequality and to understand racial attitudes is to add resentment items directed at other groups.

---

4.

RRPM seems to suggest an asymmetry in that only Whites' resentment is normatively bad (p. 25):

In the end, African Americans' quest for civil rights and social justice is resented by Whites, and Whites' maintenance of their group dominance is resented by African Americans.

Davis and Wilson discussed RRPM in a video on the UC Public Policy Channel, with Davis suggesting that "a broader swath of citizens need to be held accountable for what they believe" (at 6:10) and that "...the important conversation we need to have is not about racists. Okay. We need to understand how ordinary American citizens approach race, approach values that place them in the same bucket as racists. They're not racists, but they support the same thing that racists support" (at 53:37).

But, from what I can tell, the ordinary American citizens in the same bucket as racists don't seem to be, say, people who support hiring preferences for Blacks for normatively good reasons and who merely happen to share a policy preference with people who support hiring preferences for Blacks because of racism against Whites. Instead, my sense is that the racism in question is limited to racism that causes racial inequality, as David C. Wilson suggested at 3:24 in the UC video:

And so, even if one is not racist, they can still exacerbate racial injustice and racial inequality by focusing on their values rather than the actual problem and any solutions that might be at bay to try and solve them.

---

Another apparent asymmetry is that RRPM mentions legitimizing racial myths throughout the book (pp. vii, 3, 8, 21, 23, 28, 35, 47, 48, 50, 126, 129, 130, 190, 243, 244, 247, 261, 337, and 342), but legitimizing racial myths are not mentioned in the chapter on African Americans' resentment toward Whites (pp. 214-242). Figure 1.1 on page 8 of RRPM is a model of resentment with an arrow from legitimizing racial myths to resentment, but RRPM doesn't indicate what legitimizing racial myths, if any, inform resentment toward Whites.

Legitimizing myths are conceptualized on page 8 as follows:

Appraisals of deservingness are shaped by legitimizing racial myths, which are widely shared beliefs and stereotypes about African Americans and other minorities that justify their mistreatment and low status. Legitimizing myths are any coherent set of socially accepted attitudes, beliefs, values, and opinions that provide moral and intellectual legitimacy to the unequal distribution of social value (Sidanius, Devereux, and Pratto 1992).

But I don't see why legitimizing myths couldn't also add legitimacy to unequal *treatment*. Presumably resentment flows from beliefs about the causes of inequality, so the belief that Whites are a cause, the main cause, or the only cause of Black/White inequality could legitimize resentment toward Whites and, consequently, discrimination against Whites.

---

5.

The 1991 National Race and Politics Survey had a survey experiment, asking for agreement/disagreement to the item:

In the past, the Irish, the Italians, the Jews and many other minorities overcame prejudice and worked their way up.

Version 1: Blacks...
Version 2: New immigrants from Europe...

...should do the same without any special favors?

This experiment reflects the fact that responses to an item that applies a general phenomenon to a particular group might be influenced by attitudes about the general phenomenon and/or attitudes about the group.

Remarkably, the RRPM measurement of racial schadenfreude (Chapter 7) does not address this ambiguity, because its items measure participant feelings about only President Obama, such as the schadenfreude felt at "Barack Obama's being identified as one of the worst presidents in history". RRPM at least acknowledges this (p. 206):

Without a more elaborate research design, we cannot really determine whether the schadenfreude experienced by Republicans is due to his race or to some other issue.

---

6.

For an analysis of racial resentment in the political mind, RRPM remarkably doesn't substantively consider Asians, even if only as a target of resentment that could help test alternate explanations about the causes of resentment. Like Whites, Asians on average have relatively positive outcomes in income and related measures, but Asians do not seem to be blamed for U.S. racial inequality as much as Whites are.

---

NOTES

1. From RRPM (p. 241):

When items designed on one race are automatically applied to another race under the assumption of equal meaning, it creates measurement invariance.

Maybe the intended meaning is something such as "When items designed on one race are automatically applied to another race, it assumes measurement invariance".

2. RRPM Figure 2.1 (p. 68) reports how resentment correlates with feeling thermometer ratings about Blacks and with feeling thermometer ratings about Whites, but not with the more intuitive measure of the *difference* in feeling thermometer ratings about Blacks and about Whites.


I posted earlier about Jardina and Piston 2021 "The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions".

Jardina and Piston 2021 limited the analysis to White respondents, even though the Qualtrics_BJPS dataset at the Dataverse page for Jardina and Piston 2021 contained observations for non-White respondents. The Qualtrics_BJPS dataset had variables such as aofmanpic_1 and aofmanpic_6, and I didn't know which of these variables corresponded to which target groups.

My post indicated a plan to follow up if I got sufficient data to analyze responses from non-White participants. Replication code has now been posted at version 2 of the Dataverse page for Jardina and Piston 2021, so this is that planned post.

---

Version 2 of the Jardina and Piston 2021 Dataverse page has a Qualtrics dataset (Qualtrics_2016_BJPS_raw) that differs from the version 1 Qualtrics dataset (Qualtrics_BJPS): for example, the version 2 Qualtrics dataset doesn't contain data for non-White respondents, doesn't contain respondent ID variables V1 and uid, and doesn't contain variables such as aofmanpic_2.

I ran the Jardina and Piston 2021 "aofman" replication code on the Qualtrics_BJPS dataset to get a variable named "aofmanwb". Run on the version 2 dataset, the same code reproduced the output for the Trump analysis in Table 1 of Jardina and Piston 2021, so this aofmanwb variable is the "Ascent of man" dehumanization measure, coded so that rating Blacks as equally evolved as Whites is 0.5, rating Whites as more evolved than Blacks runs from just above 0.5 to 1, and rating Blacks as more evolved than Whites runs from just under 0.5 down to zero.

The version 2 replication code for Jardina and Piston 2021 suggests that aofmanpic_1 is for rating how evolved Blacks are and aofmanpic_4 is for rating how evolved Whites are. So unless these variable names were changed between versions of the dataset, the version 2 replication code should produce the "Ascent of man" dehumanization measure when applied to the version 1 dataset, which is still available at the Jardina and Piston 2021 Dataverse page.
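If that reading of the variable names is correct, the construction of aofmanwb amounts to something like the sketch below (my reconstruction in R, not the authors' replication code):

# R sketch of the "Ascent of man" measure as interpreted above (not the authors' code).
# aofmanpic_4 = rating of how evolved Whites are (0-100); aofmanpic_1 = the same rating for Blacks.
aofmanpic_4 <- c(85, 90, 70)   # illustrative ratings of Whites
aofmanpic_1 <- c(85, 60, 95)   # illustrative ratings of Blacks
aofmanwb <- (aofmanpic_4 - aofmanpic_1 + 100) / 200
aofmanwb   # 0.500, 0.650, 0.375: 0.5 = equal ratings; above 0.5 = Whites rated more evolved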

To check, I ran commands such as "reg aofmanwb ib4.ideology if race==1 & latino==2" in both datasets, and got similar but not exact results, with the difference presumably due to the differences between datasets discussed in the notes below.

---

The version 1 Qualtrics dataset didn't appear to contain a weight variable, so my analyses below are unweighted.

In the version 1 dataset, the medians of aofmanwb were 0.50 among non-Latino Whites in the sample (N=450), 0.50 among non-Latino Blacks in the sample (N=98), and 0.50 among respondents coded Asian, Native American, or Other (N=125). Respective means were 0.53, 0.48, and 0.51.

Figure 1 of Jardina and Piston 2021 mentions the use of sliders to select responses to the items about how evolved target groups are, and I think that some unequal ratings might be due to respondent imprecision instead of an intent to dehumanize, such as if a respondent intended to select 85 for each group in a pair, but moved the slider to 85 for one group and 84 for the other group, and then figured that this was close enough. So I'll report percentages below using a strict definition that counts anything differing from 0.5 on the 0-to-1 scale as dehumanization, and I'll also report percentages using a definition with a tolerance for potentially unintentional dehumanization.

---

For the strict coding of dehumanization, I recoded aofmanwb into a variable that had levels for [1] rating Blacks as more evolved than Whites, [2] equal ratings of how evolved Blacks and Whites are, and [3] rating Whites as more evolved than Blacks.

In the version 1 dataset, 13% of non-Latino Whites in the sample rated Blacks more evolved than Whites, with an 83.4% confidence interval of [11%, 16%], and 39% rated Whites more evolved than Blacks [36%, 43%]. 42% of non-Latino Blacks in the sample rated Blacks more evolved than Whites [35%, 49%], and 23% rated Whites more evolved than Blacks [18%, 30%]. 19% of respondents not coded Black or White in the sample rated Blacks more evolved than Whites [15%, 25%], and 38% rated Whites more evolved than Blacks [32%, 45%].
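For readers unfamiliar with 83.4% intervals: when two estimates are independent and have similar standard errors, non-overlap of their 83.4% confidence intervals roughly corresponds to a difference that is statistically significant at the 0.05 level. Below is a minimal R illustration of one such interval (my illustration, not necessarily the interval method used for the estimates above):

# 83.4% confidence interval for a proportion of roughly 13% with N = 450 (illustrative only).
prop.test(x = 59, n = 450, conf.level = 0.834)$conf.int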

---

For the non-strict coding of dehumanization, I recoded aofmanwb into a variable that had levels that included [1] rating Blacks at least 3 units more evolved than Whites on a 0-to-100 scale, and [5] rating Whites at least 3 units more evolved than Blacks on a 0-to-100 scale.
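For concreteness, below is an R sketch of the two recodings (my reconstruction of the coding rules described above; the actual analysis used Stata, so this is an illustration rather than the original code):

# Recode aofmanwb (0-to-1 scale, 0.5 = equal ratings) into three categories, with min_gap
# giving the smallest Whites-vs-Blacks gap, on the 0-to-100 scale, that counts as unequal.
recode_dehum <- function(aofmanwb, min_gap) {
  diff100 <- (aofmanwb - 0.5) * 200   # Whites-minus-Blacks gap on the original 0-to-100 scale
  ifelse(diff100 >= min_gap, "Whites rated more evolved",
         ifelse(diff100 <= -min_gap, "Blacks rated more evolved", "equal (within tolerance)"))
}
x <- c(0.500, 0.505, 0.520, 0.480)   # illustrative aofmanwb values
recode_dehum(x, min_gap = 1e-9)      # strict coding: any difference from 0.5 counts
recode_dehum(x, min_gap = 3)         # non-strict coding: requires a gap of at least 3 units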

In the version 1 dataset, 8% of non-Latino Whites in the sample rated Blacks more evolved than Whites [7%, 10%], and 30% rated Whites more evolved than Blacks [27%, 34%]. 34% of non-Latino Blacks in the sample rated Blacks more evolved than Whites [27%, 41%], and 21% rated Whites more evolved than Blacks [16%, 28%]. 13% of respondents not coded Black or White in the sample rated Blacks more evolved than Whites [9%, 18%], and 31% rated Whites more evolved than Blacks [26%, 37%].

---

NOTES

1. Variable labels in the Qualtrics dataset ("male" coded 0 for "Male" and 1 for "Female") and associated replication commands suggest that Jardina and Piston 2021 might have reported results for a "Female" variable that was actually coded 1 for male and 0 for female. That would explain why Table 1 Model 1 of Jardina and Piston 2021 indicates that females were predicted to have higher ratings about Trump net of controls at p<0.01 compared to males, even though the statistically significant coefficients for "Female" in the analyses from other datasets in Jardina and Piston 2021 are negative when predicting positive outcomes for Trump.

The "Female" variable in Jardina and Piston 2021 Table 1 Model 1 is right above the statistically significant coefficient and standard error for age, of "0.00" and "0.00". The table note indicates that "All variables are transformed onto a 0 to 1 scale.", but that isn't correct for the age predictor, which ranges from 19 to 86.

2. I produced a plot like Jardina and Piston 2021 Figure 3, but with a range from most dehumanization of Whites relative to Blacks to most dehumanization of Blacks relative to Whites. The 95% confidence interval for Trump ratings at most dehumanization of Whites relative to Blacks did not overlap with the 95% confidence interval for Trump ratings at no / equal dehumanization of Whites and Blacks. But, as indicated in my later analyses, that might merely be due to the Jardina and Piston 2021 use of aofmanwb as a continuous predictor: the aforementioned inference wasn't supported using 83.4% confidence intervals when the aofmanwb predictor was trichotomized as described above.

3. Regarding differences between Qualtrics datasets posted to the Jardina and Piston 2021 Dataverse page, the Stata command "tab race latino, mi" returns 980 respondents who selected "White" for the race item and "No" for the Latino item in the version 1 Qualtrics dataset, but returns 992 respondents who selected "White" for the race item and "No" for the Latino item in the version 2 Qualtrics dataset.

Both version 1 and version 2 of the Qualtrics datasets contain exactly one observation with a 1949 birth year and a state of Missouri. In both datasets, this observation has codes that indicate a White non-Latino neither-liberal-nor-conservative male Democrat with some college but no degree who has an income of $35,000 to $39,999. That observation has values of 100 for aofmanvinc_1 and 100 for aofmanvinc_4 in the version 2 Qualtrics dataset, but, in the version 1 Qualtrics dataset, that observation has no numeric values for aofmanvinc_1, aofmanvinc_4, or any other variable starting with "aofman".

I haven't yet received an explanation about this from Jardina and/or Piston.

4. Below is a description of more checking about whether aofmanwb is correctly interpreted above, given that the Dataverse page for Jardina and Piston 2021 doesn't have a codebook.

I dropped all cases in the original dataset not coded race==1 and latino==2. Case 7 in the version 2 dataset is from New York, born in 1979, has an aofmanpic_1 of 84, and an aofmanpic_4 of 92; this matches Case 7 in the version 1 dataset when dropping the aforementioned cases. Case 21 in the version 1 dataset is from South Carolina, born in 1966, has an aofmanvinc_1 of 79, and an aofmanvinc_4 of 75; this matches Case 21 in the version 2 dataset when dropping the aforementioned cases. Case 951 in the version 1 dataset is from Georgia, born in 1992, has an aofmannopi_1 of 77, and an aofmannopi_4 of 65; this matches case *964* in the version 2 dataset when dropping the aforementioned cases.

5. From what I can tell, for anyone interested in analyzing the data, thermind_2 in the version 2 dataset is the feeling thermometer about Donald Trump, and thermind_4 is the feeling thermometer about Barack Obama.

6. Stata code and output from my analysis.


The Monkey Cage recently published "Nearly all NFL head coaches are White. What are the odds?" [archived], by Bethany Lacina.

Lacina reported analyses that compared observed racial percentages of NFL head coaches to benchmark percentages that are presumably intended to represent what racial percentages of NFL head coaches would occur absent racial bias. For example, Lacina compared the percentage of Whites among NFL head coaches hired since February 2021 (8 of 10, or 80%) to the percentage of Whites among the set of NFL offensive coordinators, defensive coordinators, and recently fired head coaches (which was between 70% and 80% White).

Lacina indicated that:

If the hiring process did not favor White candidates, the chances of hiring eight White people from that pool is only about one in four — or plus-322 in sportsbook terms.

I think that Lacina might have reported the probability that *exactly* eight of the ten recent NFL coach hires were White. But for assessing unfair bias favoring White candidates, it makes more sense to report the probability that *at least* eight of the ten recent NFL coach hires were White: that probability is 38% using a 70% White pool and is 67% using an 80% White pool. See Notes 1 through 3 below.

---

Lacina also conducted an analysis for the one Black NFL head coach among the 14 NFL head coaches in 2021 to 2022 who were young enough to have played in the NCAA between 1999 and 2007, given that demographic data from her source were available starting in 1999. Benchmark percentages were 30% Black from NCAA football players and 44% Black from NCAA Division I football players.

The correctness of Lacina's calculations for this analysis doesn't seem to matter, because the benchmark does not seem to be a reasonable representation of how NFL head coaches are selected. For example, quarterback is the most important player position, and quarterbacks presumably need to know football strategy relatively well compared to players at most or all other positions. So I think that the per capita probability of a college quarterback becoming an NFL head coach is likely nontrivially higher than the per capita probability for players at other positions; however, Lacina's benchmark doesn't adjust for player position.

---

None of the above analysis should be interpreted to suggest that selection of NFL head coaches has been free from racial bias. But I think that it's reasonable to suggest that the Lacina analysis isn't very informative either way.

---

NOTES

1. Below is R code for a simulation that returns a probability of about 24%, for the probability that *exactly* eight of ten candidates are White, drawn without replacement from a candidate pool of 32 offensive coordinators and 32 defensive coordinators that is overall 70% White:

SET  <- c(rep_len(1,45),rep_len(0,19))
LIST <- c()
for (i in 1:100000){
   LIST[i] <- sum(sample(SET,10,replace=F))
}
table(LIST)
length(LIST[LIST==8])/length(LIST)

The probability is about 32% if the pool of 64 is 80% White. Adding in a few recently fired head coaches doesn't change the percentage much.

2. In reality, 8 White candidates were hired for the 10 NFL head coaching positions. So how do we assess the extent to which this observed result suggests unfair bias in favor of White candidates? Let's first get results from the simulation...

For my 100,000-run simulation using the above code and a random seed of 123, the counts of simulations producing exactly 0 through 10 White head coaches were 0, 5, 52, 461, 2,654, 9,255, 20,987, 29,307, 24,246, 10,978, and 2,055, respectively.

The simulation indicated that, if candidates were randomly drawn from a 70% White pool, exactly 8 of 10 coaches would be White about 24% of the time (24,246/100,000). This 8-of-10 result represents a selection of candidates from the pool that is perfectly fair with no evidence of bias for *or against* White candidates.

The exactly-8-of-10 probability would be the proper focus if our interest were bias either for *or against* White candidates. But the Lacina post didn't seem concerned about evidence of bias against White candidates, so the 9-of-10 and 10-of-10 simulation results should be added to the 8-of-10 result, to get about 37%: those outcomes represent simulations in which White candidates were underrepresented in reality relative to the simulated draw. In other words, the 8-of-10 outcome represents no bias, the 9-of-10 and 10-of-10 outcomes represent bias against Whites, and everything else represents bias favoring Whites.

3. Below is R code for a simulation that returns a probability of about 37%, for the probability that *at least* eight of ten candidates are White, drawn without replacement from a candidate pool of 32 offensive coordinators and 32 defensive coordinators that is overall 70% White:

SET <- c(rep_len(1,45),rep_len(0,19))
LIST <- c()
for (i in 1:100000){
   LIST[i] <- sum(sample(SET,10,replace=F))
}
table(LIST)
length(LIST[LIST>=8])/length(LIST)
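
The at-least-eight probability can also be computed exactly instead of simulated, because drawing 10 hires without replacement from a 64-person pool containing 45 Whites is a hypergeometric draw. Below is an exact check that I added, which should land near the simulation's roughly 37%:

# Exact hypergeometric counterpart to the simulation above: probability of at least 8 Whites
# among 10 draws without replacement from a pool of 45 Whites and 19 non-Whites.
sum(dhyper(8:10, m = 45, n = 19, k = 10))
# Equivalently: phyper(7, m = 45, n = 19, k = 10, lower.tail = FALSE)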

---

UPDATE

I corrected some misspellings of "Lacinda" to "Lacina" in the post.

---

UPDATE 2 (March 18, 2022)

Bethany Lacina discussed her calculation with me. She indicated that she did calculate the probability of at least eight of ten, but she used a joint-probability method that I don't think is correct, because random error would bias the inference toward unfair selection of coaches by race. Given the extra information that Bethany provided, here is a revised calculation that produces a probability of about 60%:

# In 2021: 2 non-Whites hired of 6 hires.
# In 2022: 0 non-Whites hired of 4 hires (up to the point of the calculation).
# The simulation below is for the probability that at least 8 of the 10 hires are White.

SET.2021 <- c(rep_len(0,12),rep_len(1,53)) ## 1=White candidate
SET.2022 <- c(rep_len(0,20),rep_len(1,51)) ## 1=White candidate
LIST <- c()

for (i in 1:100000){
DRAW.2021 <- sum(sample(SET.2021,6,replace=F)) 
DRAW.2022 <- sum(sample(SET.2022,4,replace=F)) 
LIST[i] <- DRAW.2021 + DRAW.2022
}

table(LIST)
length(LIST[LIST>=8])/length(LIST)

Political Behavior recently published Filindra et al 2022 "Beyond Performance: Racial Prejudice and Whites' Mistrust of Government". Hypothesis 1 is the expectation that "...racial prejudice (anti-Black stereotypes) is a negative and significant predictor of trust in government".

Filindra et al 2022 limits the analysis to White respondents and measures anti-Black stereotypes by combining responses to available items in which respondents rate Blacks on seven-point scales, ranging from hardworking to lazy, and/or from peaceful to violent, and/or from intelligent to unintelligent. The data include items about how respondents rate Whites on these scales, but Filindra et al 2022 didn't use these responses to measure anti-Black stereotyping.

But information about how respondents rate Whites is useful for measuring anti-Black stereotyping. For example, a respondent who rates all racial groups at the midpoint of a stereotype scale hasn't indicated an anti-Black stereotype; this respondent's rating about Blacks doesn't differ from the respondent's rating about other racial groups, and it's not clear to me why rating Blacks equal to all other racial groups would be a moderate amount of "prejudice" in this case.

But this respondent who rated all racial groups equally on the stereotype scales nonetheless falls halfway along the Filindra et al 2022 measure of "negative Black stereotypes", in the same location as a respondent who rated Blacks at the midpoint of the scale and rated all other racial groups at the most positive end of the scale.

---

I think that this measurement flaw means that more analyses need to be conducted to know whether the key Filindra et al 2022 finding is merely an artifact of the measure of racial prejudice. Moreover, I think that more analyses need to be conducted to know whether Filindra et al 2022 overlooked evidence of the effect of prejudice against other racial groups.

Filindra et al 2022 didn't indicate whether their results held when using a measure of anti-Black stereotypes that placed respondents who rated all racial groups equally into a different category than respondents who rated Blacks less positively than all other racial groups and a different category than respondents who rated Blacks more positively than all other racial groups. Filindra et al 2022 didn't even report results when their measure of anti-White stereotypes was included in the regressions estimating the effect of anti-Black stereotypes.

A better review process might have produced a Filindra et al 2022 that resolved questions such as: Is the key Filindra et al 2022 finding merely because respondents who don't trust the government rate *all* groups relatively low on stereotype scales? Is the key finding because anti-Black stereotypes and anti-White stereotypes and anti-Hispanic stereotypes and anti-Asian stereotypes *each* reduce trust in government? Or are anti-Black stereotypes the *only* racial stereotypes that reduce trust in government?

Even if anti-Black stereotypes among Whites are the most important combination of racial prejudice and respondent demographics, other combinations of racial stereotype and respondent demographics are important enough to report on and can help readers better understand racial attitudes and their consequences.

---

NOTES

1. Filindra et al 2022 did note that:

Finally, another important consideration is the possibility that other outgroup attitudes or outgroup-related policy preferences may also have an effect on public trust.

That's sort of close to addressing some of the alternate explanations that I suggested, but the Filindra et al 2022 measure for this is a measure about immigration *policy* and not, say, the measures of stereotypes about Hispanics and about Asians that are included in the data.

2. Filindra et al 2022 suggested that:

Future research should focus on the role of attitudes towards immigrants and other racial groups—such as Latinos— and ethnocentrism more broadly in shaping white attitudes toward government.

But it's not clear to me why such analyses aren't included in Filindra et al 2022.

Maybe the expectation is that another publication should report results that include the measures of anti-Hispanic stereotypes and anti-Asian stereotypes in the ANES data. And another publication should report results that include the measures of anti-White stereotypes in the ANES data. And another publication should report results that include or focus on respondents in the ANES data who aren't White. But including all this in Filindra et al 2022 or its supplemental information would be more efficient and could produce a better understanding of political attitudes.

3. Filindra et al 2022 indicated that:

All variables in the models are rescaled on 0–1 scales consistent with the nature of the original variable. This allows us to conceptualize the coefficients as maximum effects and consequently compare the size of coefficients across models.

Scaling all predictors to range from 0 to 1 means that comparison of coefficients likely produces better inferences than if the predictors were on different scales, but differences in 0-to-1 coefficients can also be due to differences in the quality of the measurement of the underlying concept, as discussed in this prior post.

4. Filindra et al 2022 justified not using a differenced stereotype measure, citing evidence such as (from footnote 2):

Factor analysis of the Black and white stereotype items in the ANES confirms that they do not fall on a single dimension.

The reported factor analysis was on ANES 2020 data and included a measure of "lazy" stereotypes about Blacks, a measure of "violent" stereotypes about Blacks, a feeling thermometer about Blacks, a measure of "lazy" stereotypes about Whites, a measure of "violent" stereotypes about Whites, and a feeling thermometer about Whites.[*] But a "differenced" stereotype measure shouldn't be constructed by combining measures like that, as if the measure of "lazy" stereotypes about Blacks is independent of the measure of "lazy" stereotypes about Whites.

A "differenced" stereotype measure could be constructed by, for example, subtracting the "lazy" rating about Whites from the "lazy" rating about Blacks, subtracting the "violent" rating about Whites from the "violent" rating about Blacks, and then summing these two differences. That measure could help address the alternate explanation that the estimated effect for rating Blacks low is because respondents who rate Blacks low also rate all other groups low. That measure could also help address the concern that using only a measure of stereotypes about Blacks underestimates the effect of these stereotypes.

Another potential coding is a categorical measure, coded 1 for rating Blacks lower than Whites on all stereotype measures, 2 for rating Blacks equal to Whites on all stereotype measures, coded 3 for rating Blacks higher than Whites on all stereotype measures, and coded 4 for a residual category. The effect of anti-Black stereotypes could be estimated as the difference net of controls between category 1 and category 2.
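Below is a minimal R sketch of both alternative codings, using made-up data and hypothetical variable names rather than the ANES variables or the Filindra et al 2022 code, and assuming that higher ratings are more negative:

# Stereotype ratings on 1-to-7 scales; higher values are assumed here to be more negative.
d <- data.frame(lazy_black = c(4, 5, 3, 6), lazy_white = c(4, 3, 5, 2),
                viol_black = c(4, 6, 3, 3), viol_white = c(4, 3, 5, 3))

# (a) Differenced measure: Black-minus-White difference for each trait, summed across traits.
d$diff_stereo <- (d$lazy_black - d$lazy_white) + (d$viol_black - d$viol_white)

# (b) Categorical measure: 1 = Blacks rated more negatively than Whites on all traits,
# 2 = equal on all traits, 3 = Blacks rated more positively on all traits, 4 = residual mix.
d$cat_stereo <- with(d, ifelse(lazy_black > lazy_white & viol_black > viol_white, 1,
                        ifelse(lazy_black == lazy_white & viol_black == viol_white, 2,
                        ifelse(lazy_black < lazy_white & viol_black < viol_white, 3, 4))))
d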

Filindra et al 2022 provided justifications other than the factor analysis for not using a differenced stereotype measure, but, even if you agree that stereotype scale ratings about Blacks should not be combined with stereotype scale ratings about Whites, the Filindra et al 2022 arguments don't preclude including their measure of anti-White prejudice as a separate predictor in the analyses.

[*] I'm not sure why the feeling thermometer responses were included in a factor analysis intended to justify not combining stereotype scale responses.

5. I think that labels for the panels of Filindra et al 2022 Figure 1 and the corresponding discussion in the text are backwards: the label for each plot in Figure 1a appears to be "Negative Black Stereotypes", but the Figure 1a label is "Public Trust"; the label for each plot in Figure 1b appears to be "Level of Trust in Govt", but the Figure 1b label is "Anti-Black stereotypes".

My histogram of the Filindra et al 2022 measure of anti-Black stereotypes for the ANES 2020 Time Series Study looks like their 2020 plot in Figure 1a.

6. I'm not sure what the second sentence is supposed to mean, from this part of the Filindra et al 2022 conclusion:

Our results suggest that white Americans' beliefs about the trustworthiness of the federal government have become linked with their racial attitudes. The study shows that even when racial policy preferences are weakly linked to trust in government racial prejudice does not. Analyses of eight surveys...

7. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Full Release [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.


The recent Rhodes et al 2022 Monkey Cage post indicated that:

...as [Martin Luther] King [Jr.] would have predicted, those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election and condone serious violations of democratic norms.

Regarding this inference about the legitimacy of a democratic election, Rhodes et al 2022 reported results for an item that measured perceptions about the legitimacy of Joe Biden's election as president in 2020. But a potential confound is that reported perceptions of the legitimacy of the 2020 U.S. presidential election might reflect who won that election rather than attitudes about elections per se. One way to address this confound is to use a measure of reported perceptions of the legitimacy of the U.S. presidential election *in 2016*, which Donald Trump won.

I checked data from the Democracy Fund Voter Study Group VOTER survey for responses to the items below, which can help address this confound:

[from 2016 and 2020] Over the past few years, Blacks have gotten less than they deserve.

[from 2016] How confident are you that the votes in the 2016 election across the country were accurately counted?

[from 2020] How confident are you that votes across the United States were counted as voters intended in the elections this November?

Results are below:

The dark columns are for respondents who strongly disagreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as denying the existence of unfair racial inequality. The light columns are for respondents who strongly agreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as most strongly asserting the existence of unfair racial inequality.

Comparison of the 2020 column for "strongly disagree" to the 2020 column for "strongly agree" suggests that, as expected based on Rhodes et al 2022, skepticism about votes in 2020 being counted accurately was more common among respondents who most strongly denied the existence of unfair racial inequality than among respondents who most strongly asserted the existence of unfair racial inequality.

But comparison of the 2016 column for "strongly disagree" to the 2016 column for "strongly agree" suggests that the general phrasing of "those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election" does not hold for every election, such as the presidential election immediately prior to the election that was the focus of the relevant item in Rhodes et al 2022.

---

NOTE

1. Data source. Stata do file. Stata output. Code for the R plot.


The British Journal of Political Science published Jardina and Piston 2021 "The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions".

---

1.

Jardina and Piston 2021 used the "Ascent of Man" measure of dehumanization, which I have discussed previously. Jardina and Piston 2021 subtracted participant responses to the 0-to-100 measure of perceptions of how evolved Blacks are from participant responses to the 0-to-100 measure of perceptions of how evolved Whites are, and placed this difference on a 0-to-1 scale.

Jardina and Piston 2021 placed this 0-to-1 measure of dehumanization into an OLS regression with controls, took the resulting coefficient, such as the 0.60 in Table 1 (for which the p-value is less than 0.001), and halved that coefficient. For the 0.60 coefficient, moving from the neutral point on the dehumanization scale to the highest measured dehumanization of Blacks relative to Whites thus accounted for 0.30 points on the outcome variable scale, which for this estimate was a 0-to-100 feeling thermometer rating about Donald Trump placed on a 0-to-1 scale.

However, this research design means that 0.30 points on a 0-to-1 scale is also the corresponding estimate of how much dehumanizing about Whites relative to Blacks affected feeling thermometer ratings about Donald Trump. Jardina and Piston 2021 thus did not permit the estimate of the marginal effect of dehumanizing Blacks to differ from the estimate of the marginal effect of dehumanizing Whites.
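To make the implied symmetry concrete, here is the arithmetic for the Table 1 coefficient (an illustration of the linear specification, not output from the replication data):

b <- 0.60          # coefficient on the 0-to-1 dehumanization measure (Table 1)
b * (1.0 - 0.5)    # implied shift from neutral (0.5) to most dehumanization of Blacks:  0.30
b * (0.0 - 0.5)    # implied shift from neutral (0.5) to most dehumanization of Whites: -0.30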

I discussed this before (1, 2), but it's often better to not assume a linear association for continuous predictors (citation to Hainmueller et al. 2019).

Figure 3 of Jardina and Piston 2021 censors the estimated effect of dehumanizing Whites, by plotting predicted probabilities of a Trump vote among Whites but restricting the range of dehumanization to run from neutral (0.5 on the dehumanization measure) to most dehumanization about Blacks (1.0 on the measure).

---

2.

Jardina and Piston 2021 claimed that "Finally, our findings serve as a warning about the nature of Whites' racial attitudes in the contemporary United States" (p. 20). But Jardina and Piston 2021 did not report any evidence that Whites' attitudes in this area differ from non-Whites' attitudes in this area. That seems like a relevant question for researchers interested in understanding racial attitudes.

If I'm reading page 9 correctly, Jardina and Piston 2021 reported on original survey data from a 2016 YouGov two-wave panel of 600 non-Hispanic Whites, a 2016 Qualtrics survey of 500 non-Hispanic Whites, a 2016 GfK survey of 2,000 non-Hispanic Whites, and another 2016 YouGov two-wave panel of 600 non-Hispanic Whites.

The funding statement in Jardina and Piston 2021 acknowledges only Duke University and Boston University. That's a lot of internal resources for surveys conducted in a single year, and I don't think that Jardina and Piston 2021 limiting the analysis to Whites can be reasonably attributed to a lack of resources.

The Qualtrics_BJPS.dta dataset at the Dataverse page for Jardina and Piston 2021 has cases for 1,125 Whites, 242 Blacks, 88 Asians, 45 Native Americans, and 173 coded Other, with respective non-Latino cases of 980, 213, 83, 31, and 38. The Dataverse page doesn't have a codebook for that dataset, and the relevant variable names in that dataset aren't clear to me, but I'll plan to post a follow-up here if I get sufficient information to analyze responses from non-White participants.

---

3.

Jardina and Piston 2021 suggested (p. 4) that:

We also suspect that recent trends in the social and natural sciences are legitimizing beliefs about biological differences between racial groups in ways that reinforce a propensity to dehumanize Black people.

This passage did not mention the Jardina and Piston 2015/6 TESS experiment in which participants were assigned to a control condition, or a condition with a reading entitled "Genes May Cause Racial Difference in Heart Disease", or a condition with a reading entitled "Social Conditions May Cause Racial Difference in Heart Disease".

My analysis of data for that experiment found a p<0.01 difference between treatment groups in mean responses to an item about whether there are biological differences between Blacks and Whites, which suggests that the treatment worked. But the treatment didn't produce a detectable effect on key outcomes, according to the description of results on the page for the Jardina and Piston 2015/6 TESS experiment, which indicates that "Experimental conditions are not statistically associated with the distribution of responses to the outcome variables". This null result seems to be relevant for the above quoted suspicion from Jardina and Piston 2021.

---

4.

Jardina and Piston 2021 indicated that "Dehumanization has serious consequences. It places the targets of these attitudes outside of moral consideration, ..." (p. 6). But the Jardina and Piston proposal for the 2015/6 TESS experiment had proposed that some participants be exposed to a treatment that Jardina and Piston hypothesized would increase participant "biological racism", to use a term from their proposal.

Selected passages from the proposal are below:

We hypothesize that the proportion of respondents rating whites as more evolved than blacks is highest in the Race as Genetics Condition, lower in the Control Condition, and lowest in the Race as Social Construction Condition.

...Our study will also inform scholarship on news media communication, demonstrating that ostensibly innocuous messages about race, health, and genetics can have pernicious consequences.

Exposing some participants to a treatment that the researchers hypothesized as having "pernicious consequences" seems like an interesting ethical issue that the proposal didn't discuss.

Moreover, like some other research that uses the Ascent of Man measure of dehumanization, the Jardina and Piston 2015/6 TESS experiment included the statement that "People can vary in how human-like they seem". I wonder which people this statement is meant to refer to. Nothing in the debriefing indicated that this statement was deception.

---

5.

The dataset for the Jardina and Piston 2015/6 TESS experiment includes comments from participants. I thought that comments from participants with IDs 499 and 769 were worth highlighting (the statements were cut off in the dataset):

I disliked this survey as you should ask the same questions about whites. I was not willing to say blacks were not rational but whites are not rational either. But to avoid thinking I was prejudice I had to give a higher rating. All humans a

Black people are not less evolved, 'less evolved' is a meaningless term as evolution is a constant process and the only difference is what particular adaptations a group has. I don't like to claim certainty about things of which I am unsure, a

---

NOTES

1. The Table 3 header for Jardina and Piston 2021 indicates that vote choice is the outcome for that table, but the corresponding note indicates that "Higher values of the dependent variable indicate greater warmth toward Trump on the 101-point feeling thermometer". Moreover, Figure 3 of Jardina and Piston 2021 plots predicted probabilities of a vote for Trump, but the figure note indicates that the figure was "based on Table 4, Model 4", which is instead about warmth toward Obama.

2. Jardina and Piston 2021 reported results for participant responses about the "dehumanizing characteristics" of "savage", "barbaric", and "lacking self-restraint, like animals", so I checked how responses to the "violent" stereotype item associated with two-party presidential vote choice in data from the ANES 2020 Time Series Study.

Results indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Blacks as being more violent on average than Whites were more likely to vote for Donald Trump net of controls (p<0.05). But results also indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Whites as being more violent on average than Blacks were less likely to vote for Donald Trump net of controls (p<0.05). See lines 105 through 107 in the output.
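A minimal sketch of that model specification is below, with hypothetical variable names and made-up data (my actual analysis used Stata on the ANES 2020 data and included controls that are omitted here for brevity):

# R sketch of the comparison described above; the factor's reference level is rating Whites
# and Blacks as equally violent, so each coefficient compares an unequal-ratings category
# to that reference. Random illustrative data, not the ANES 2020 data.
set.seed(123)
d <- data.frame(
  trump_vote  = rbinom(500, 1, 0.5),
  violent_cat = factor(sample(c("equal", "Blacks rated more violent", "Whites rated more violent"),
                              500, replace = TRUE),
                       levels = c("equal", "Blacks rated more violent", "Whites rated more violent"))
)
summary(glm(trump_vote ~ violent_cat, data = d, family = binomial))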

3. Jardina and Piston 2021 reported that, in their 2016b YouGov survey, 42% of Whites rated Whites as more evolved than Blacks (pp. 9-10). For a comparison, the Martherus et al. 2019 study about Democrat and Republican dehumanization of outparty members reported dehumanization percentages of "nearly 77%" (2018 SSI study) and "just over 80%" (2018 CCES study).

4. Data sources:

American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

Ashley Jardina and Spencer Piston. 2015/6. Data for: "Explaining the Prevalence of White Biological Racism against Blacks". Time-sharing Experiments for the Social Sciences. https://www.tessexperiments.org/study/pistonBR61

Ashley Jardina and Spencer Piston. 2021. Replication Data for: The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions, https://doi.org/10.7910/DVN/A3XIFC, Harvard Dataverse, V1, UNF:6:nNg371BCnGaWtRyNdu0Lvg== [fileUNF]

The recent Mason et al. Monkey Cage post claimed that:

We found about 30 percent of Americans surveyed in 2011 reported feelings of animosity towards African Americans, Hispanics, Muslims, and the LGBTQ community. These individuals make up our MAGA faction.

But much less than 30% of Americans reported animus toward all four of these groups. In unweighted analyses using the 2011 VOTER data, the percentage of respondents who rated a given group under 50 on a 0-to-100 feeling thermometer was 13% for Blacks, 17% for Latinos, 46% for Muslims, and 26% for gays and lesbians. Only about 3% rated all four groups under 50.

So how did Mason et al. get 30%? Based on the Mason et al. figure note (and my check in Stata), 30% is the percentage of respondents whose average rating across all four groups is under 50.

But I don't think that the average across variables should be used to describe responses to individual variables. I think that it would be misleading, for instance, to describe the respondent who rated Blacks at 75 and Muslims at 0 as reporting animosity toward Blacks and Muslims, especially given that the respondent rated Whites at 71 and Christians at 0.
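To illustrate the distinction with made-up numbers (hypothetical variable names and random data; the actual check used Stata on the 2011 VOTER data, as linked in the notes below):

# Share rating each group under 50, share rating *all four* groups under 50, and share whose
# *average* rating across the four groups is under 50 (random illustrative data only).
set.seed(1)
ft <- data.frame(blacks  = sample(0:100, 1000, replace = TRUE),
                 latinos = sample(0:100, 1000, replace = TRUE),
                 muslims = sample(0:100, 1000, replace = TRUE),
                 lgbtq   = sample(0:100, 1000, replace = TRUE))
under50 <- ft < 50
colMeans(under50)             # separate share for each group
mean(rowSums(under50) == 4)   # share rating all four groups under 50
mean(rowMeans(ft) < 50)       # share whose average rating across the four groups is under 50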

---

Mason et al. write that:

Our research does suggest that, as long as this MAGA faction exists, politicians may be tempted to appeal to it, hoping to repeat Trump's success. In fact, using inflammatory and divisive appeals would be a rational campaign strategy, since they can animate independent voters who dislike these groups.

It seems reasonable to be concerned about politicians appealing to intolerant people, but I'm not sure that it's reasonable to limit this concern about intolerance to the MAGA faction.

Below are data from the ANES 2020 Time Series Study, showing the percentage of the U.S. population that rated a set of target groups under 50 on a 0-to-100 feeling thermometer, disaggregated by partisanship:

So the coalitions that reported cold ratings about Hispanics, Blacks, gay men and lesbians, Muslims, transgender people, and illegal immigrants are disproportionately Republican (compared to Democratic), and the coalitions that reported cold ratings about rural Americans, Whites, Christians, and Christian fundamentalists are disproportionately Democratic (compared to Republican).

Democrats were more common in the data than Republicans were, so the plot above doesn't permit direct comparison of the blue bars to the red bars to assess relative frequency of cold ratings by party. To permit that assessment, the plot below indicates the percentage of Democrats and the percentage of Republicans that reported a cold rating of the indicated target group:

---

Mason et al. end their Monkey Cage post with:

But identifying this MAGA faction as both separate from and related to partisan politics can help us better understand the real conflict. When a small, intolerant faction of citizens wields disproportionate influence over nationwide governance, democracy erodes. Avoiding discussion about this group only protects its power.

But the Mason et al. Monkey Cage post names only one intolerant group -- the MAGA faction -- and avoids naming the group that is intolerant of Whites and Christians, which, by the passage above, presumably protects the power of that other intolerant group.

---

NOTES

1. Data citation: American National Election Studies. 2021. ANES 2020 Time Series Study Full Release [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

2. Link to the Mason et al. 2021 APSR letter.

3. Instructions for the 2011 VOTER survey thermometer items directed respondents to "Click on the thermometer to give a rating". If this means that respondents did something like moving a widget instead of entering a numeric rating, then I think that might overestimate cold ratings, if some respondents try to rate at 50, instead land a bit under 50, and then figure that 49 or so is close enough.

But this might not be a large bias: for example, the thermometer about Blacks respectively had 27, 44, 745, 376, and 212 responses for ratings of 48 through 52.

4. Draft plots:

5. Stata code for the analyses, plus: tab pid3_2011 if ft_white_2011==71 & ft_christian_2011==0 & ft_black_2011==75 & ft_muslim_2011==0

6. R data and code for the "three color" barplot.

7. R data and code for the "back-to-back" barplot.

8. R data and code for the "full sample" barplot.

9. R data and code for the "two panel" barplot.
