Back in 2016, SocImages tweeted a link to a post entitled "Trump Supporters Substantially More Racist Than Other Republicans". The "more racist" label refers to Trump supporters being more likely than Cruz supporters and Kasich supporters to indicate on stereotype scales that Blacks "in general" are less intelligent, more lazy, more rude, more violent, and more criminal than Whites "in general". I had a brief Twitter discussion with Philip Cohen and offered to move the discussion to a blog post. I also collected some relevant data, which are reported in a new publication in Political Studies Review.

---

In 2017, Turkheimer, Harden, and Nisbett in Vox estimated the Black/White IQ gap to be closer to 10 points than to 15 points. Ten points would be a relatively large gap: IQ scales are typically normed to a standard deviation of 15, so a 10-point gap is about 2/3 of a standard deviation. Suppose that a person reads this Vox article and the IQ literature and, as a result, comes to believe that IQ is a valid enough measure of intelligence for it to be likely that the Black/White IQ gap reflects a true difference in mean intelligence. This person later responds to a survey, rating Whites in general one unit higher than Blacks in general on a stereotype scale for intelligence. My question, for anyone who thinks that such stereotype scale responses can be used as a measure of anti-Black animus, is:

Why is it racist for this person to rate Whites in general one unit higher than Blacks in general on a stereotype scale for intelligence?

I am especially interested in a response that is general enough to indicate whether it would be sexist against men to rate men in general higher than women in general on a stereotype scale for criminality.

Tagged with:

In 2019, Michael Tesler published a Monkey Cage post subtitled "The majority of people who hold racist beliefs say they have an African American friend". Here is a description of these racist beliefs:

Not many whites in the survey took the overtly racist position of saying 'most blacks' lacked those positive attributes. The responses ranged from 9 percent of whites who said 'most blacks' aren't intelligent to 20 percent who said most African Americans aren't law-abiding or generous.

My analysis of the Pew Research Center data used in the Tesler 2019 post indicated that Tesler 2019 labeled as "overtly racist" the belief that most Blacks are not intelligent, even if a participant also indicated that most Whites are not intelligent.

In the Pew Research Center data (citation below), including Don't Knows and refusals, 118 of 1,447 Whites responded "No" to the question of whether most Blacks are intelligent, which is about 8 percent. However, 57 of the 118 Whites who responded "No" to the question of whether most Blacks are intelligent also responded "No" to the question of whether most Whites are intelligent. Thus, based on these intelligence items, 48 percent of the White participants who Tesler 2019 coded as taking an "overtly racist position" against Blacks also took a (presumably) overtly racist position against Whites. It could be that about half of the Whites who are openly racist against Blacks are also openly racist against Whites, or it could be that most or all of these 57 White participants have a nonracial belief that most people are not intelligent.
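The percentages above follow directly from the reported counts; as a quick arithmetic check in Stata:

* Arithmetic check of the percentages reported above
display 118/1447  // about .08: share of Whites responding "No" about most Blacks being intelligent
display 57/118    // about .48: share of that group also responding "No" about most Whites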

Even the classification of responses from the 56 Whites who reported "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent should address the literature on the distribution of IQ test scores in the United States and the possibility that at least some of these 56 Whites used the median U.S. IQ as the threshold for being intelligent.
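To illustrate the median-threshold possibility, suppose that "intelligent" means an IQ above the overall U.S. median of 100, and suppose, purely for illustration, normal IQ distributions with a standard deviation of 15 and group means of 103 for Whites and 93 for Blacks, roughly consistent with the 10-point gap discussed above. The group means are assumptions for this sketch, not estimates from the Pew data:

* Illustrative only: the group means of 103 and 93 are assumed, not estimated from the Pew data
display normal((100-103)/15)  // about .42: fewer than half of Whites fall below the threshold
display normal((100-93)/15)   // about .68: most Blacks fall below the threshold

Under this reading, a participant could report "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent based on beliefs about score distributions rather than racial animus.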

---

I offered Michael Tesler an opportunity to reply. His reply is below:

Scholars have long disputed what constitutes racism in survey research.  Historically, these disagreements have centered around whether racial resentment items like agreeing that "blacks could be just as well off as whites if they only tried harder" are really racism or prejudice.  Because of these debates, I have avoided calling whites who score high on the racial resentment scale racists in both my academic research and my popular writing.

Yet even scholars who are most critical of the racial resentment measure, such as Ted Carmines and Paul Sniderman, have long argued that self-reported racial stereotypes are “self-evidently valid” measures of prejudice.  So, I assumed it would be relatively uncontroversial to say that whites who took the extreme position of saying that MOST BLACKS aren’t intelligent/hardworking/honest/law-abiding hold racist beliefs.  As the piece in question noted, very few whites took such extreme positions—ranging from 9% who said most blacks aren’t intelligent to 20% who said most blacks are not law-abiding.

If anything, then, the Pew measure of stereotypes used here severely underestimates the extent of white racial prejudice in the country.  Professor Zigerell suggests that differencing white from black stereotypes is a better way to measure prejudice.  But this isn't a very discerning measure in the Pew data because the stereotypes were only asked as dichotomous yes-no questions.  It's all the more problematic in this case since black stereotypes were asked immediately before white stereotypes in the Pew survey, and white respondents may have rated their own group less positively to avoid the appearance of prejudice.

In fact, Sniderman and Carmines’s preferred measure of prejudice—the difference between 7-point anti-white stereotypes and 7-point anti-black stereotypes—reveals far more prejudice than I reported from the Pew data.  In the 2016 American National Election Study (ANES), for example, 48% of whites rated their group as more hardworking than blacks, compared to only 13% in the Pew data who said most blacks are not hardworking.  Likewise, 53% of whites in the 2016 ANES rated blacks as more violent than whites and 25% of white Americans in the pooled 2010-2018 General Social Survey rated whites as more intelligent than blacks.

Most importantly, the substantive point of the piece in question—that whites with overtly racist beliefs still overwhelmingly claim they have black friends—remains entirely intact regardless of measurement.  Even if one wanted to restrict racist beliefs to only those saying most blacks are not intelligent/law-abiding AND that most whites are intelligent/law-abiding, 80%+ of these individuals who hold racist beliefs reported having a black friend in the 2009 Pew Survey.

All told, the post in question used a very narrow measure, which found far less prejudice than other valid stereotype measures, to make the point that the vast majority of whites with overtly racist views claim to have black friends.  Defining prejudice even more narrowly leads to the exact same conclusion.

I'll add a response in the comments.

---

NOTES

1. The title of the Tesler 2019 post is "No, Mark Meadows. Having a black friend doesn't mean you're not racist".

2. Data citation: Pew Research Center for the People & the Press/Pew Social & Demographic Trends. Pew Research Center Poll: Pew Social Trends--October 2009-Racial Attitudes in America II, Oct, 2009 [dataset]. USPEW2009-10SDT, Version 2. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, RoperExpress [distributor], accessed Aug-14-2019.

3. "White" and "Black" in the data analysis refer to non-Hispanic Whites and non-Hispanic Blacks.

4. In the Pew data, more White participants reported "No" for whether most Whites are intelligent (147) than reported "No" for whether most Blacks are intelligent (118).

Patterns were similar among the 812 Black participants: 145 reported "No" for whether most Whites are intelligent, but only 93 reported "No" for whether most Blacks are intelligent.

Moreover, 76 White participants reported "Yes" for whether most Blacks are intelligent and "No" for whether most Whites are intelligent.

5. Stata code:

* Distribution of the race/ethnicity variable (racethn: 1 = non-Hispanic White, 2 = non-Hispanic Black)
tab racethn, mi

* Crosstab of the intelligence items (q69b: most Blacks intelligent; q70b: most Whites intelligent), among non-Hispanic Whites
tab q69b q70b if racethn==1, mi

* The same crosstab, among non-Hispanic Blacks
tab q69b q70b if racethn==2, mi

Tagged with:

Racial attitudes have been shown to correlate substantially with environmental policy preferences net of partisanship and ideology: see here, here, and here. Those results were from data collected in 2012 or later. To address the concern that this association is due to "spillover" of anti-Obama attitudes into non-racial policy areas, I checked whether the traditional four-item measure of racial resentment substantially correlated with environmental policy preferences, net of partisanship and ideology, in ANES data from 1986, which I think is the first time these items appeared together on an ANES survey.

I limited the sample to non-Hispanic Whites and controlled for participant gender, education, age, family income, partisanship, ideology, and race of the interviewer. The outcome variable concerns federal spending on improving and protecting the environment, coded 1 for "increased" and 0 for "same" or "decreased", with Don't Know and Not Ascertained responses coded as missing; only 4 percent of respondents indicated "decreased".

With other model variables at their means, the predicted probability of a reported preference for increased federal spending on improving and protecting the environment was 65% [54%, 76%] at the lowest level of racial resentment but fell to 39% [31%, 47%] at the highest level of racial resentment. That's a substantial 26 percentage-point drop "caused" by racial attitudes, for anyone who thinks that such a research design permits causal inference.
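For reference, here is a minimal sketch of the model, using the variable names from the full code linked in Note 3, in which env2 is the spending outcome and RR4 is racial resentment placed on a 0-to-1 scale:

* Logistic regression of the environmental spending preference on racial resentment and controls,
* limited to non-Hispanic Whites (V860037 is the race-of-interviewer control)
logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1

* Predicted probabilities at the lowest (0) and highest (1) levels of racial resentment, other predictors at their means
margins, atmeans at(RR4=(0 1))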

---

NOTES

1. Kinder and Sanders 1996 used racial resentment to predict non-racial attitudes (pp. 121-124), but, based on my reading of that section, I don't think KS96 predicted this environmental policy preference variable.

2. Data source: Warren E. Miller and the University of Michigan. Institute for Social Research. American National Election Studies. ANES 1986 Time Series Study. Inter-university Consortium for Political and Social Research [distributor].

3. Stata code and output.

4. The post title is about 1986, but some ANES 1986 interviews were conducted in January and February 1987. The key result still holds if the sample is limited to cases with an "86" year in the "Date of Interview" variable, with respective predicted probabilities of 67% and 37% (p=0.002 for racial resentment). A few dates appear to be incorrect, such as "01-04-86", "12-23-87", and "11-18-99". Code:

* Restrict to non-Hispanic Whites interviewed in 1986 (characters 7-8 of the date string are the year)
logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1 & substr(V860009, 7, 2)=="86"

* Predicted probabilities at the lowest and highest levels of racial resentment
margins, atmeans at(RR4=(0 1))

Tagged with:

The 2018 Cooperative Congressional Election Survey (CCES) included two items labeled as measures of "sexism", for which respondents received five response options from "strongly agree" to "strongly disagree". One of these sexism measures is the Glick and Fiske 1996 hostile sexism statement that "Feminists are making entirely reasonable demands of men". This item is used in the forthcoming Schaffner 2020 article in the British Journal of Political Science.

It is not clear to me what "demands" the statement refers to. Moreover, it seems plausible that Democrats conceptualize these demands differently than Republicans do, so that, in effect, many Democrats would be responding to a different item than many Republicans are. Democrats might be more likely to think of reasonable demands, such as support for equal pay for equal work, but Republicans might be more likely to think of more disputable demands, such as support for taxpayer-funded late-term abortions.

---

To assess whether CCES 2018 respondents were thinking only of the reasonable demand that men support equal pay for equal work, let's check data from the 2016 American National Election Studies (ANES) Time Series Study, which asked post-election survey participants: "Do you favor, oppose, or neither favor nor oppose requiring employers to pay women and men the same amount for the same work?".

In weighted ANES 2016 data, 87% of participants asked that item favored requiring employers to pay women and men the same amount for the same work (including non-substantive responses in the denominator), with a 95% confidence interval of [86%, 89%]. However, in weighted CCES 2018 post-election data, only 38% of participants somewhat or strongly agreed that feminists are making entirely reasonable demands of men (again including non-substantive responses), with a 95% confidence interval of [37%, 39%].

So, in these weighted national samples, 87% favored requiring employers to pay women and men the same amount for the same work, but only 38% agreed that feminists are making entirely reasonable demands of men. I think that this is strong evidence that a large percentage of U.S. adults do not think of only reasonable demands when responding to the statement that "Feminists are making entirely reasonable demands of men".

---

To address the concern that the interpretation of the "demands" differs by partisanship, here are support levels by partisan identification:

Democrats

  • 92% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 59% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 33 percentage-point difference

Republicans

  • 84% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 18% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 66 percentage-point difference

So that's an 8-point Democrat/Republican gap in favoring requiring employers to pay women and men the same amount for the same work, but a 41-point Democrat/Republican gap in agreement that feminists are making entirely reasonable demands of men.

I think that this is at least suggestive evidence that a nontrivial percentage of Democrats and an even higher percentage of Republicans are not thinking of reasonable feminist demands such as support for equal pay for equal work. If, when responding to the "feminist demands" item, Democrats on average think of different demands than Republicans do, then it seems like a poor research design to draw inferences about sexism's relationship to politically relevant variables from a too-vague item that different political groups interpret differently.

---

NOTES

1. ANES 2016 citations:

The American National Election Studies (ANES). ANES 2016 Time Series Study [dataset]. Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University [producers].

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

2. CCES 2018 citation:

Stephen Ansolabehere, Brian F. Schaffner, and Sam Luks. Cooperative Congressional Election Study, 2018: Common Content. [Computer File] Release 2: August 28, 2019. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu.

3. ANES 2016 Stata code:

* Item V162149: favor/oppose requiring employers to pay women and men the same amount for the same work
tab V162149

* Keep completed post-election interviews (V160502==1)
tab V160502
keep if V160502==1
tab V162149

* Code "favor" as 1; code "oppose", "neither", and non-substantive responses as 0
gen favorEQpay = V162149
recode favorEQpay (-9 -8 2 3=0)
tab V162149 favorEQpay, mi

* Survey design: post-election weight, strata, and PSU
svyset [pweight=V160102], strata(V160201) psu(V160202)
svy: prop favorEQpay

* By party identification (V161155: 1 = Democrat, 2 = Republican)
tab V161155
svy: prop favorEQpay if V161155==1
svy: prop favorEQpay if V161155==2

4. CCES 2018 Stata code:

* Item CC18_422d: "Feminists are making entirely reasonable demands of men", by post-election completion
tab CC18_422d tookpost, mi
tab CC18_422d tookpost, mi nol

* Keep respondents who took the post-election wave
keep if tookpost==2
tab CC18_422d, mi

* Code "strongly agree" and "somewhat agree" as 1; other responses, including nonresponse, as 0
gen femagree = CC18_422d
recode femagree (3/5 .=0) (1/2=1)
tab CC18_422d femagree, mi

* Survey design: post-election weight
svyset [pw=commonpostweight]
svy: prop femagree

* By party identification (CC18_421a: 1 = Democrat, 2 = Republican)
tab CC18_421a
svy: prop femagree if CC18_421a==1
svy: prop femagree if CC18_421a==2

Tagged with:

This post discusses a commonly used "blatant" measure of dehumanization. Let me begin by proposing two blatant measures of dehumanization:

1. Yes or No?: Do you think that members of Group X are fully human?

2. On a scale in which 0 is not at all human and 10 is fully human, where would you rate members of Group X?

I would interpret a "No" response for the first measure and a response of any number lower than 10 for the second measure as dehumanization of members of Group X. If there is no reasonable alternate interpretation of these responses, then these are face-valid, unambiguous measures of blatant dehumanization.
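For concreteness, here is how such responses could be coded, with hypothetical variable names invented for illustration:

* Hypothetical variable names: humanYN (1 = Yes, 0 = No) and human0to10 (0 = not at all human, 10 = fully human)
gen dehum1 = (humanYN == 0) if !missing(humanYN)          // any "No" counts as dehumanization
gen dehum2 = (human0to10 < 10) if !missing(human0to10)    // any rating below "fully human" counts as dehumanization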

---

But neither measure above is the commonly used social science measure of blatant dehumanization. Instead, the commonly used "measure of blatant dehumanization" (from Kteily et al. 2015), referred to as the Ascent measure, asks participants to use a slider beneath the familiar five-silhouette "ascent of man" image to indicate how "evolved" they consider the average member of a group to be.

And here is how Kteily et al. (2015) described the ends of the tool (emphasis omitted):

Responses on the continuous slider were converted to a rating from 0 (least "evolved") to 100 (most "evolved")...

Note that participants are instructed to rate how "evolved" they consider the average member of a group to be, and that these ratings are placed on a scale from "least evolved" to "most evolved", but the ratings are then interpreted as participant perceptions about the humanness of the group. This doesn't seem like a measure of blatant dehumanization if participants aren't asked to indicate their perceptions of how human the average member of a group is.

The Ascent measure is a blatant measure of dehumanization only if "human" and "evolved" are identical concepts, but they aren't: it is possible to simultaneously believe that Bronze Age humans were fully human and that Bronze Age humans were less evolved than humans today. Moreover, I think that the fourth figure in the Ascent image is a Cro-Magnon, which scientists classify as human, and Kteily et al. seem to agree:

...the image is used colloquially to highlight a salient distinction between early human ancestors and modern humans; that is, the full realization of cognitive ability and cultural expression

The perceived humanness of the fourth figure matters for understanding responses to the Ascent measure because much of the variation in responses occurs between the fourth figure and fifth figure (for example, see Table 1 of Kteily et al. 2015 and Note 1 below).

There is an important distinction between participants dehumanizing a group and participants rating one group lower than another on a measure that participants interpret as indicating something other than "humanness", such as the degree of "realization of cognitive ability and cultural expression". The distinction matters especially because I don't think that humans need "the full realization of cognitive ability and cultural expression" in order to be fully human.

---

NOTES

1. The Jardina and Piston TESS study, conducted in 2015 and 2016 with only non-Hispanic White participants, included an Ascent measure in which 66% and 77% of unweighted responses for the respective targets of Blacks and Whites fell in the 91-to-100 range.

2. I made some of the above points in 2015 in the ANES Online Commons. Lee Jussim raised issues discussed above in 2018; I didn't find anything earlier.

3. More Twitter discussion of the Ascent measure: here with no reply, here with no reply, here with a reply, here with a reply.

Tagged with:

The PS: Political Science & Politics article "Fear, Institutionalized Racism, and Empathy: The Underlying Dimensions of Whites' Racial Attitudes" by Christopher D. DeSante and Candis Watts Smith reports results for four racial attitudes items from a "FIRE" battery.

I have a paper and a blog post indicating that combinations of these items associate substantially with environmental policy preferences, net of controls for demographics, partisanship, and political ideology. DeSante and Smith have a paper reporting an analysis that used combinations of these items to predict an environmental policy preference ("Support E.P.A.", in Table 3 of that paper), but results for this outcome variable are not mentioned in the DeSante and Smith 2020 PS publication. Because DeSante and Smith 2020 reports results for the four FIRE racial attitudes items separately, I do so below for environmental policy preference outcome variables, using data from the 2016 Cooperative Congressional Election Study (CCES).

---

Square brackets contain predicted probabilities of selecting "oppose" regarding the policy "Strengthen enforcement of the Clean Air Act and Clean Water Act even if it costs US jobs". The probabilities are from a logistic regression with controls for gender, education, age, family income, partisanship, and political ideology; the sample is limited to White respondents, and the estimates are weighted. The first probability in square brackets is at the highest level of measured agreement with the indicated statement on a five-point scale, with all other model predictors at their means; the second probability is at the corresponding highest level of measured disagreement. A code sketch follows the list.

  • [38% to 56%, p<0.05] I am angry that racism exists.
  • [29% to 58%, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [39% to 42%, p>0.05] I often find myself fearful of people of other races.
  • [51% to 36%, p<0.05] Racial problems in the U.S. are rare, isolated situations.
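Here is a minimal sketch of the per-item model, with hypothetical variable names invented for illustration (the full code is linked in Note 1 below):

* Hypothetical names: oppose_CAA (1 = oppose the Clean Air Act item), angryracism (1 = strongly disagree to 5 = strongly agree), white (1 = White respondent)
logit oppose_CAA angryracism i.gender i.educ age i.faminc i.pid7 i.ideo5 if white==1 [pweight=commonweight]

* First prediction at the highest agreement (5), second at the highest disagreement (1), other predictors at their means
margins, atmeans at(angryracism=(5 1))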

Results below are from a fractional logistic regression predicting an index created by summing the four environmental policy items and placing the sum on a 0-to-1 scale (a sketch of this model follows the list):

  • [0.28 to 0.48, p<0.05] I am angry that racism exists.
  • [0.23 to 0.44, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [0.28 to 0.32, p<0.05] I often find myself fearful of people of other races.
  • [0.42 to 0.26, p<0.05] Racial problems in the U.S. are rare, isolated situations.
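And a corresponding sketch of the index construction and fractional logistic regression, again with hypothetical names env1 through env4 for the four environmental policy items, each coded 1 for the anti-environmental-policy response:

* Sum the four 0/1 items and place the index on a 0-to-1 scale
gen env_index = (env1 + env2 + env3 + env4)/4
fracreg logit env_index angryracism i.gender i.educ age i.faminc i.pid7 i.ideo5 if white==1 [pweight=commonweight]
margins, atmeans at(angryracism=(5 1))  // predictions at highest agreement and highest disagreement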

The standard deviation of the 0-to-1 four-item environmental policy index is 0.38, so three of the four results immediately above indicate nontrivially large differences in predictions for an environmental policy preference outcome variable that has no theoretical connection to race. I think this raises legitimate questions about whether these racial attitudes items should ever be used to estimate the causal influence of racial attitudes.

---

NOTES

1. Stata code.

2. Data source: Stephen Ansolabehere and Brian F. Schaffner, Cooperative Congressional Election Study, 2016: Common Content. [Computer File] Release 2: August 4, 2017. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu

Tagged with: