1.

Abrajano and Lajevardi 2021 "(Mis)Informed: What Americans Know About Social Groups and Why it Matters for Politics" reported (p. 34) that:

We find that White Americans, men, the racially resentful, Republicans, and those who turn to Fox and Breitbart for news strongly predict misinformation about [socially marginalized] social groups.

But their research design is biased toward many or all of these results, given how they selected their 14 misinformation items. I'll focus below on left/right political bias and then discuss apparent errors in the publication.

---

2.

Item #7 is a true/false item:

Most terrorist incidents on US soil have been conducted by Muslims.

This item will code as misinformed some participants who overestimate the percentage of U.S.-based terror attacks committed by Muslims, but won't code as misinformed any participants who underestimate that percentage.

It seems reasonable to me that persons on the political Left will be more likely than persons on the Right to underestimate that percentage, and that persons on the Right will be more likely than persons on the Left to overestimate it, so I'll code this item as favoring the political Left.

---

Four items (#11 to #14) ask about Black/White differences in receipt of federal assistance, phrased so that Whites are the "primary recipients" of food stamps, welfare, and social security.

But none of these items measured misinformation about receipt of federal assistance as a percentage. So participants who report that the *number* of Blacks who receive food stamps is higher than the number of Whites who receive food stamps get coded as misinformed. But participants who mistakenly think that the *percentage* of Whites who receive food stamps is higher than the percentage of Blacks who receive food stamps do not get coded as misinformed.

Table 2 of this U.S. government report indicates that, in 2018, non-Hispanic Whites were 67% of households, 45% of households receiving SNAP (food stamps), and 70% of households not receiving SNAP. Respective percentages for Blacks were 12%, 27%, and 11% and for Hispanics were 13.5%, 22%, and 12%. So, based on this, it's correct that Whites are the largest racial/ethnic group that receives food stamps on a total population basis...but it's also true that Whites are the largest racial/ethnic group that does NOT receive food stamps on a total population basis.
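
To make the count/rate distinction concrete, here is a back-of-envelope conversion of those shares into group-specific SNAP receipt rates, in Stata display commands. The group shares come from the report quoted above, but the overall household SNAP rate is a placeholder I'm assuming for illustration (11%), so the resulting rates are only illustrative:

* Illustrative arithmetic: convert each group's share of SNAP households
* into a group-specific SNAP receipt rate. The 11% overall household
* SNAP rate is an assumed placeholder, not a figure from the report.
display "White SNAP rate: " .11*.45/.67    // about .074
display "Black SNAP rate: " .11*.27/.12    // about .248

Under that assumed overall rate, the White receipt rate would be about 7% and the Black receipt rate about 25%, which is consistent with Whites being both the largest group of recipients and the largest group of non-recipients.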

It seems reasonable to me that the omission of percentage versions of these three public assistance items favors the political Left: persons on the political Left are more likely than persons on the political Right (or, for that matter, Independents and moderates) to rate Blacks higher than Whites, so persons on the Left would presumably be more likely than persons on the Right to prefer (and thus guess) that Whites and not Blacks are the primary recipients of federal assistance. So, by my count, that's at least four items that favor the political Left.

---

As far as I can tell, Abrajano and Lajevardi 2021 didn't provide citations to justify their coding of correct responses. But such citations seem to me a basic requirement for research that codes responses as correct, except for obvious items such as, say, who the current Vice President is. A potential problem with this lack of citation is that it's not clear that some responses Abrajano and Lajevardi 2021 coded as correct are truly correct, or at least are the only responses that should be coded as correct.

Abrajano and Lajevardi 2021 coded "Whites" as the only correct response for the "primary recipients" item about welfare, but this government document indicates that, for 2018, the distribution of TANF recipients was 37.8% Hispanic, 28.9% Black, 27.2% White, 2.1% multi-racial, 1.9% Asian, 1.5% AIAN, and 0.6% NHOPI.

And "about the same" is coded as the only correct response for the item about the "primary recipients" of public housing (item #14), but Table 14 of this CRS Report indicates that, in 2017, 33% of public housing had a non-Hispanic White head of household and 43% had a non-Hispanic Black head of household. This webpage permits searching for "public housing" for different years (screenshot below), which, for 2016, indicates percentages of 45% for non-Hispanic Blacks and 29% for non-Hispanic Whites.

Moreover, it seems suboptimal to have the imprecise "about the same" response be the only correct response. Unless outcomes for Blacks and Whites are exactly the same, presumably selection of one or the other group should count as the correct response.

---

Does a political bias in the Abrajano and Lajevardi 2021 research design matter? I think that the misinformation rates are close enough that it matters: Figure A2 indicates that the Republican/Democrat misinformation gap is less than a point, with misinformed means of 6.51 for Republicans and 5.83 for Democrats.

Ironically, Abrajano and Lajevardi 2021 Table A1 indicates that their sample was 52% Democrat and 21% Republican, so -- on the "total" basis that Abrajano and Lajevardi 2021 used for the federal assistance items -- Democrats were the "primary" partisan source of misinformation about socially marginalized groups.
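
To make that "total" framing concrete, here's a back-of-envelope calculation, multiplying each party's sample share (Table A1) by its mean number of misinformed responses (Figure A2); this is my illustration, not a calculation from the publication:

* Back-of-envelope: each party's sample share times its mean count of
* misinformed responses, mimicking the "total" basis of the federal
* assistance items. My illustration, not from the publication.
display "Democrats:   " .52*5.83    // about 3.03
display "Republicans: " .21*6.51    // about 1.37

On that basis, Democrats account for more than twice the "total" misinformation that Republicans do.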

---

NOTES

1. Abrajano and Lajevardi 2021 (pp. 24-25) refers to a figure that isn't in the main text, and I'm not sure where it is:

When we compare the misinformation rates across the five social groups, a number of notable patterns emerge (see Figure 2)...At the same time, we recognize that the magnitude of difference between White and Asian American's [sic] average level of misinformation (3.4) is not considerably larger than it is for Blacks (3.2), nor for Muslim American respondents, who report the lowest levels of misinformation.

Table A5 in the appendix indicates that Blacks had a lower misinformation mean than Muslims did, 5.583 compared to 5.914, so I'm not sure what the aforementioned passage refers to. The passage phrasing refers to a "magnitude of difference", but 3.4 doesn't seem to refer to a social group gap or to an absolute score for any of the social groups.

2. Abrajano and Lajevardi 2021 footnote 13 is:

Recall that question #11 is actually four separate questions, which brings us to a total of thirteen questions that comprise this aggregate measure of political misinformation.

If question #11 is actually four separate questions, then the total is fourteen questions, not thirteen, and Abrajano and Lajevardi 2021 refers to "fourteen" questions elsewhere (pp. 6, 17).

Abrajano and Lajevardi 2021 indicated that "...we also observe about 11% of individuals who provided inaccurate answers to all or nearly all of the information questions" (p. 24, emphasis in the original), and it seems a bit misleading to italicize "all" if no one provided inaccurate responses to all 14 items.

3. Below, I'll discuss the full set of 14 "misinformation" items. Feel free to disagree with my count, but I would be interested in an argument that the 14 items do not on net bias results toward the Abrajano and Lajevardi 2021 claim that Republicans are more misinformed than Democrats about socially marginalized groups.

For the aforementioned items, I'm coding items #7 (Muslim terror %), #11 (food stamps), #12 (welfare), and #14 (public housing) as biased in favor of the political Left, because I think that these items are phrased so that the items will catch more misinformation among the political Right than among the political Left, even though the items could be phrased to catch more misinformation among the Left than among the Right.

I'm not sure about the item about social security (#13), so I won't code that item as politically biased. So by my count that's 4 in favor of the Left, plus 1 neutral.

Item #5 seems to be a good item, measuring whether participants know that Blacks and Latinos are more likely to live in regions with environmental problems. But it's worth noting that this item is phrased in terms of rates and not, as for the federal assistance items, as the total number of persons by racial/ethnic group. So by my count that's 4 in favor of the Left, plus 2 neutral.

Item #1 is about the number of undocumented immigrants in the United States. I won't code that item as politically biased. So by my count that's 4 in favor of the Left, plus 3 neutral.

The correct response for item #2 is that most immigrants in the United States are here legally. I'll code this item as favoring the political Left for the same reason as the Muslim terror % item: the item catches participants who overestimate the percentage of immigrants here illegally, but the item doesn't catch participants who underestimate that percentage, and I think these errors are more likely on the Right and Left, respectively. So by my count that's 5 in favor of the Left, plus 3 neutral.

Item #6 is about whether *all* (my emphasis) U.S. universities are legally permitted to consider race in admissions. It's not clear to me why it's more important that this item be about *all* U.S. universities instead of about *some* or *most* U.S. universities. I think that it's reasonable to suspect that persons on the political Right will overestimate the prevalence of affirmative action and that persons on the political Left will underestimate the prevalence of affirmative action, so by my count that's 6 in favor of the Left, plus 3 neutral.

I'm not sure that items #9 and #10 have much of a bias (number of Muslims in the United States, and the country that has the largest number of Muslims), other than to potentially favor Muslims, given that the items measure knowledge of neutral facts about Muslims. So by my count that's 6 in favor of the Left, plus 5 neutral.

I'm not sure what "social group" item #8, which asks whether Barack Obama was born in the United States, is supposed to be about. I'm guessing that a good percentage of "misinformed" responses for this item are insincere. Even if it were a good idea to measure insincere responses to test a hypothesis about misinformation, I'm not sure why it would be a good idea to not also include a corresponding item about a false claim that, like the Obama item, is known to be more likely to be accepted among the political Left, such as items about race and killings by police. So I'll up the count to 7 in favor of the Left, plus 5 neutral.

Item #4 might reasonably be described as favoring the political Right, in the sense that I think that persons on the Right would be more likely to prefer that Whites have a lower imprisonment rate than Blacks and Hispanics. But the item has this unusual element of precision ("six times", "more than twice") that isn't present in items about hazardous waste and about federal assistance, so that, even if persons on the Right stereotypically guess correctly that Blacks and Hispanics have higher imprisonment rates than Whites, these persons still might not be sure that the "six times" and "more than twice" are correct.

So even though I think that this item (#4) can reasonably be described as favoring the political Right, I'm not sure that it's as easy for the Right to use political preferences to correctly guess this item as it is for the Left to use political preferences to correctly guess the hazardous waste item and the federal assistance items. But I'll count this item as favoring the Right, so by my count that's 7 in favor of the Left, 1 in favor of the Right, plus 5 neutral.

Item #3 is about whether the U.S. Census Bureau projects ethnic and racial minorities to be a majority in the United States by 2042. I think that it's reasonable that a higher percentage of persons on the political Left than the political Right would prefer this projection to be true, but maybe fear that the projection is true might bias this item in favor of the Right. So let's be conservative and count this item as favoring the Right, so that my coding of the overall distribution for the 14 misinformation items is: seven items favoring the Left, two items favoring the Right, and five politically neutral items.

4. The ANES 2020 Time Series Study has similar biases in its set of misinformation items.

---

The ANES (American National Election Studies) has released the pre- and post-election questionnaires for its 2020 Time Series Study. I thought that it would be useful or at least interesting to review the survey for political bias. I think that the survey is remarkably well done on net, but I do think that ANES 2020 contains unnecessary political bias.

---

1

ANES 2020 has two gender resentment items on the pre-election survey and two modern sexism items on the post-election survey. These four items are phrased to measure negative attitudes about women, but ANES 2020 has no parallels to these four items regarding negative attitudes about men.

Even if researchers cared only about sexism against women, parallel measures of attitudes about men would still be necessary. Evidence indicates and theory suggests that participants sexist against men would cluster at the low end of a measure of sexism against women, so the effect of sexism against women can't properly be estimated as the change from the low end to the high end of these measures.

This lack of parallel items about men will plausibly produce a political bias in research that uses these four items as measures of sexism, because, while a higher percentage of Republicans than of Democrats is biased against women, a higher percentage of Democrats than of Republicans is biased against men (evidence about partisanship is in in-progress research, but check here about patterns in the 2016 presidential vote).

ANES 2020 has a feeling thermometer for several racial groups, so hopefully future ANES surveys include feeling thermometers about men and women.

---

2

Another type of political bias involves inclusion of response options so that the item can detect only errors more common on the political right. Consider this post-election item labeled "misinfo":

1. Russia tried to interfere in the 2016 presidential election

2. Russia did not try to interfere in the 2016 presidential election

So the large percentage of Hillary Clinton voters who reported the belief that Russia tampered with vote tallies to help Donald Trump don't get coded as misinformed on this misinformation item about Russian interference. The only error that the item can detect is underestimating Russian interference.

Another "misinfo" example:

Which of these two statements do you think is most likely to be true?

1. World temperatures have risen on average over the last 100 years.

2. World temperatures have not risen on average over the last 100 years.

The item permits climate change "deniers" to be coded as misinformed, but does not permit coding as misinformed "alarmists" who drastically overestimate how much the climate has changed over the past 100 years.

Yet another "misinfo" example:

1. There is clear scientific evidence that the anti-malarial drug hydroxychloroquine is a safe and effective treatment for COVID-19.

2. There is not clear scientific evidence that the anti-malarial drug hydroxychloroquine is a safe and effective treatment for COVID-19.

In April 2020, the FDA indicated that "Hydroxychloroquine and chloroquine...have not been shown to be safe and effective for treating or preventing COVID-19", so the "deniers" who think that there is zero evidence available to support HCQ as a covid-19 treatment will presumably not be coded as "misinformed".

One more example (not labeled "misinfo"), from the pre-election survey:

During the past few months, would you say that most of the actions taken by protestors to get the things they want have been violent, or have most of these actions by protesters been peaceful, or have these actions been equally violent and peaceful?

[If the response is "mostly violent" or "mostly peaceful":]

Have the actions of protestors been a lot more or only a little more [violent/peaceful]?

I think that this item might refer to the well-publicized finding that "about 93% of racial justice protests in the US have been peaceful", so that the correct response combination is "mostly peaceful"/"a lot more peaceful" and, thus, the only error that the item permits is overestimating how violent the protests were.

For the above items, I think that the response options disfavor the political right, because I expect that a higher percentage of persons on the political right than the political left will deny Russian interference in the 2016 presidential election, deny climate change, overestimate the evidence for HCQ as a covid-19 treatment, and overestimate how violent recent pre-election protests were.

But I also think that persons on the political left will be more likely than persons on the political right to make the types of errors that the items do not permit to be measured, such as overestimating climate change over the past 100 years.

Other items marked "misinfo" involved vaccines causing autism, covid-19 being developed intentionally in a lab, and whether the Obama administration or the Trump administration deported more unauthorized immigrants during its first three years.

I didn't see an ANES 2020 item about whether the Obama administration or the Trump administration built the temporary holding enclosures ("cages") for migrant children, which I think would be similar to the deportations item, in that people not paying close attention to the news might get the item incorrect.

Maybe a convincing case could be made that ANES 2020 contains as many items whose limited response options disfavor the political left as items that disfavor the political right, but I don't think that it matters whether political bias in individual items cancels out, because any political bias in individual items is worth eliminating, if possible.

---

3

ANES 2020 has an item that I think alludes to President Trump's phone call with the Ukrainian president. Here is a key passage from the transcript of the call:

The other thing, There's a lot of talk about Biden's son, that Biden stopped the prosecution and a lot of people want to find out about that so whatever you can do with the Attorney General would be great. Biden went around bragging that he stopped the prosecution so if you can look into it...It sounds horrible to me.

Here is an ANES 2020 item:

As far as you know, did President Trump ask the Ukrainian president to investigate President Trump's political rivals, did he not ask for an investigation, or are you not sure?

I'm presuming that the intent of the item is that a correct response is that Trump did ask for such an investigation. But, if this item refers to only Trump asking the Ukrainian president to look into a specific thing that Joe Biden did, it's inaccurate to phrase the item as if Trump asked the Ukrainian president to investigate Trump's political rivals *in general*, which is what the plural "rivals" indicates.

---

4

I think that the best available evidence indicates that immigrants do not increase the crime rate in the United States (pre-2020 citation) and that illegal immigration reduces the crime rate in the United States (pre-2020 citation). Here is an "agree strongly" to "disagree strongly" item from ANES 2020:

Immigrants increase crime rates in the United States.

Another ANES 2020 item:

Does illegal immigration increase, decrease, or have no effect on the crime rate in the U.S.?

I think that the correct responses to these items are the responses that a stereotypical liberal would be more likely to *want* to be true, compared to a stereotypical Trump supporter.

But I don't think that the U.S. violent crime statistics by race reflect the patterns that a stereotypical liberal would be more likely to want to be true, compared to a stereotypical Trump supporter.

Perhaps coincidentally, instead of an item about racial differences in violent crime rates, for which responses could be described as consistent or inconsistent with available mainstream research, ANES 2020 has stereotype items about how "violent" different racial groups are in general, which I think survey researchers will be much less likely to perceive as addressed in mainstream research and will instead use to measure racism.

---

The above examples of what I think are political biases are relatively minor in comparison to the value that ANES 2020 looks like it will provide. For what it's worth, I think that the ANES is preferable to the CCES Common Content.

---

This Brian Schaffner post at Data for Progress indicates that, on 9 June during the 2020 protests over the death of George Floyd, only 57% of Whites and about 83% of Blacks agreed that "White people in the U.S. have certain advantages because of the color of their skin". It might be worth considering why not everyone agreed with that statement.

---

Let's check data from the Nationscape survey, focusing on the survey conducted 11 June 2020 (two days after the aforementioned Data for Progress survey) and the items that ask: "How much discrimination is there in the United States today against...", with response options of "A great deal", "A lot", "A moderate amount", "A little", and "None at all".

For rating discrimination against Blacks, 95% of Whites selected a level from "A great deal" through "A little" (the remaining 5% includes missing responses). It could be that the difference between this 95% and the Data for Progress 57% is because about 38% of Whites think that discrimination against Blacks favors only non-White non-Black persons. But the 57% Data for Progress estimate was pretty close to the 59% of Whites in the Nationscape data who rated the discrimination against Blacks higher than they rated the discrimination against Whites.

The pattern is similar for Blacks: about 83% of Blacks in the Data for Progress data agreed that "White people in the U.S. have certain advantages because of the color of their skin", and 85% of Blacks in the Nationscape data rated the discrimination against Blacks higher than the discrimination against Whites. But, in the Nationscape data, 98% of Blacks selected a level from "A great deal" through "A little" for the amount of discrimination that Blacks face in the United States today.

---

So this seems to be suggestive evidence that many people who do not agree that "White people in the U.S. have certain advantages because of the color of their skin" might not be indicating a lack of "acknowledgement of racism", in Schaffner's terms, but might rather be signaling a belief closer to the idea that the discrimination against Blacks does not outweigh the discrimination against Whites, at least as measured on a five-point scale.

---

NOTES:

[1] The "certain advantages" item has appeared on the CCES; here is evidence that another CCES item does not well measure what the item presumably is supposed to measure.

[2] Data citation:

Chris Tausanovitch and Lynn Vavreck. 2020. Democracy Fund + UCLA Nationscape, October 10-17, 2019 (version 20200814). Retrieved from: https://www.voterstudygroup.org/downloads?key=e6ce64ec-a5d0-4a7b-a916-370dc017e713.

Note: "the original collectors of the data, UCLA, LUCID, and Democracy Fund, and all funding agencies, bear no responsibility for the use of the data or for interpretations or inferences based upon such issues".

[3] Code for my analysis:

* Stata code for the Data for Progress data

tab acknowledgement_1                                    // "certain advantages" item
tab starttime if wave==8                                 // confirm field dates for wave 8
svyset [pw=nationalweight]                               // apply survey weights
svy: prop acknowledgement_1 if ethnicity==1 & wave==8    // Whites
svy: prop acknowledgement_1 if ethnicity==2 & wave==8    // Blacks

* Stata code for the Nationscape data [ns20200611.dta]

* Code any level of perceived discrimination ("A great deal" through
* "A little") as 1; "None at all" and missing as 0.
recode discrimination_blacks (1/4=1) (5 .=0), gen(discB)
recode discrimination_whites (1/4=1) (5 .=0), gen(discW)
tab discrimination_blacks discB, mi      // check the recode
tab discrimination_whites discW, mi

* discBW = 1 if discrimination against Blacks is rated higher than
* discrimination against Whites (lower codes = more discrimination).
gen discBW = 0
replace discBW = 1 if discrimination_blacks < discrimination_whites & discrimination_blacks!=. & discrimination_whites!=.
tab discrimination_blacks discrimination_whites if discBW==1, mi
tab discrimination_blacks discrimination_whites if discBW==0, mi

svyset [pw=weight]                       // apply survey weights

svy: prop discB if race_ethnicity==2     // Blacks
svy: prop discBW if race_ethnicity==2

svy: prop discB if race_ethnicity==1     // Whites
svy: prop discBW if race_ethnicity==1

---

Back in 2016, SocImages tweeted a link to a post entitled "Trump Supporters Substantially More Racist Than Other Republicans". The "more racist" label refers to Trump supporters being more likely than Cruz supporters and Kasich supporters to indicate on stereotype scales that Blacks "in general" are less intelligent, more lazy, more rude, more violent, and more criminal than Whites "in general". I had a brief Twitter discussion with Philip Cohen and offered to move the discussion to a blog post. Moreover, I collected some relevant data, which is reported on in a new publication in Political Studies Review.

---

In 2017, Turkheimer, Harden, and Nisbett in Vox estimated the Black/White IQ gap to be closer to 10 points than to 15 points. Ten points would be a relatively large gap, about 2/3 of a standard deviation. Suppose that a person reads this Vox article and reads the IQ literature and, as a result, comes to believe that IQ is a valid enough measure of intelligence for it to be likely that the Black/White IQ gap reflects a true difference in mean intelligence. This person later responds to a survey, rating Whites in general one unit higher on a stereotype scale for intelligence than the person rates Blacks in general. My question, for anyone who thinks that such stereotype scale responses can be used as a measure of anti-Black animus, is:

Why is it racist for this person to rate Whites in general one unit higher than Blacks in general on a stereotype scale for intelligence?

I am especially interested in a response that is general enough to indicate whether it would be sexist against men to rate men in general higher than women in general on a stereotype scale for criminality.

---

In 2019, Michael Tesler published a Monkey Cage post subtitled "The majority of people who hold racist beliefs say they have an African American friend". Here is a description of these racist beliefs:

Not many whites in the survey took the overtly racist position of saying 'most blacks' lacked those positive attributes. The responses ranged from 9 percent of whites who said 'most blacks' aren't intelligent to 20 percent who said most African Americans aren't law-abiding or generous.

My analysis of the Pew Research Center data used in the Tesler 2019 post indicated that Tesler 2019 labeled as "overtly racist" the belief that most Blacks are not intelligent, even if a participant also indicated that most Whites are not intelligent.

In the Pew Research Center data (citation below), including Don't Knows and refusals, 118 of 1,447 Whites responded "No" to the question of whether most Blacks are intelligent, which is about 8 percent. However, 57 of the 118 Whites who responded "No" to the question of whether most Blacks are intelligent also responded "No" to the question of whether most Whites are intelligent. Thus, based on these intelligence items, 48 percent of the White participants who Tesler 2019 coded as taking an "overtly racist position" against Blacks also took a (presumably) overtly racist position against Whites. It could be that about half of the Whites who are openly racist against Blacks are also openly racist against Whites, or it could be that most or all of these 57 White participants have a nonracial belief that most people are not intelligent.
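
The arithmetic for that overlap:

* Share of the 118 Whites who said "No" to most Blacks being intelligent
* who also said "No" to most Whites being intelligent.
display 57/118    // about .48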

Even classification of responses of the 56 Whites who reported "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent should address the literature on the distribution of IQ test scores in the United States and the possibility that at least some of these 56 Whites used the median U.S. IQ as the threshold for being intelligent.

---

I offered Michael Tesler an opportunity to reply. His reply is below:

Scholars have long disputed what constitutes racism in survey research.  Historically, these disagreements have centered around whether racial resentment items like agreeing that “blacks could be just as well of as whites if they only tried harder” are really racism or prejudice.  Because of these debates, I have avoided calling whites who score high on the racial resentment scale racists in both my academic research and my popular writing.

Yet even scholars who are most critical of the racial resentment measure, such as Ted Carmines and Paul Sniderman, have long argued that self-reported racial stereotypes are “self-evidently valid” measures of prejudice.  So, I assumed it would be relatively uncontroversial to say that whites who took the extreme position of saying that MOST BLACKS aren’t intelligent/hardworking/honest/law-abiding hold racist beliefs.  As the piece in question noted, very few whites took such extreme positions—ranging from 9% who said most blacks aren’t intelligent to 20% who said most blacks are not law-abiding.

If anything, then, the Pew measure of stereotypes used severely underestimates the extent of white racial prejudice in the country.  Professor Zigerell suggests that differencing white from black stereotypes is a better way to measure prejudice.  But this isn’t a very discerning measure in the Pew data because the stereotypes were only asked as dichotomous yes-no questions.  It’s all the more problematic in this case since black stereotypes were asked immediately before white stereotypes in the Pew survey and white respondents may have rated their own group less positively to avoid the appearance of prejudice.

In fact, Sniderman and Carmines’s preferred measure of prejudice—the difference between 7-point anti-white stereotypes and 7-point anti-black stereotypes—reveals far more prejudice than I reported from the Pew data.  In the 2016 American National Election Study (ANES), for example, 48% of whites rated their group as more hardworking than blacks, compared to only 13% in the Pew data who said most blacks are not hardworking.  Likewise, 53% of whites in the 2016 ANES rated blacks as more violent than whites and 25% of white Americans in the pooled 2010-2018 General Social Survey rated whites as more intelligent than blacks.

Most importantly, the substantive point of the piece in question—that whites with overtly racist beliefs still overwhelmingly claim they have black friends—remains entirely intact regardless of measurement.  Even if one wanted to restrict racist beliefs to only those saying most blacks are not intelligent/law-abiding AND that most whites are intelligent/law-abiding, 80%+ of these individuals who hold racist beliefs reported having a black friend in the 2009 Pew Survey.

All told, the post in question used a very narrow measure, which found far less prejudice than other valid stereotype measures, to make the point that the vast majority of whites with overtly racist views claim to have black friends.  Defining prejudice even more narrowly leads to the exact same conclusion.

I'll add a response in the comments.

---

NOTES

1. The title of the Tesler 2019 post is "No, Mark Meadows. Having a black friend doesn't mean you're not racist".

2. Data citation: Pew Research Center for the People & the Press/Pew Social & Demographic Trends. Pew Research Center Poll: Pew Social Trends--October 2009-Racial Attitudes in America II, Oct, 2009 [dataset]. USPEW2009-10SDT, Version 2. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, RoperExpress [distributor], accessed Aug-14-2019.

3. "White" and "Black" in the data analysis refer to non-Hispanic Whites and non-Hispanic Blacks.

4. In the Pew data, more White participants (147) reported "No" for the question of whether most Whites are intelligent, compared to the number of White participants (118) who reported "No" for the question of whether most Blacks are intelligent.

Patterns were similar among the 812 Black participants: 145 Black participants reported "No" for the question of whether most Whites are intelligent, but only 93 Black participants reported "No" for the question of whether most Blacks are intelligent.

Moreover, 76 White participants reported "Yes" for the question of whether most Blacks are intelligent and "No" for the question of whether most Whites are intelligent.

5. Stata code:

tab racethn, mi                    // race/ethnicity of participant
tab q69b q70b if racethn==1, mi    // cross-tab of the two "most ... are intelligent" items, Whites
tab q69b q70b if racethn==2, mi    // same cross-tab, Blacks

---

The 2018 Cooperative Congressional Election Survey included two items labeled as measures of "sexism", for which respondents received five response options from "strongly agree" to "strongly disagree". One of these sexism measures is the Glick and Fiske 1996 hostile sexism statement that "Feminists are making entirely reasonable demands of men". This item was recently used in the forthcoming Schaffner 2020 article in the British Journal of Political Science.

It is not clear to me what "demands" the statement refers to. Moreover, it seems plausible that Democrats would conceptualize these demands differently than Republicans do so that, in effect, many Democrats would respond to a different item than many Republicans respond to. Democrats might be more likely to conceptualize reasonable demands such as support for equal pay for equal work, but Republicans might be more likely to conceptualize more disputable demands such as support for taxpayer-funded late-term abortions.

---

To assess whether CCES 2018 respondents were thinking only of the reasonable demand of men's support for equal pay for equal work, let's check data for the 2016 American National Election Studies Time Series Study, which asked post-election survey participants to respond to the item: "Do you favor, oppose, or neither favor nor oppose requiring employers to pay women and men the same amount for the same work?".

In weighted ANES 2016 data, 87% of participants asked that item favored requiring employers to pay women and men the same amount for the same work, including non-substantive responses, with a 95% confidence interval of [86%, 89%]. However, in weighted CCES 2018 post-election data, only 38% of participants somewhat or strongly agreed that feminists are making entirely reasonable demands of men, including non-substantive responses, with a 95% confidence interval of [37%, 39%].

So, in these weighted national samples, 87% favored requiring employers to pay women and men the same amount for the same work, but only 38% agreed that feminists are making entirely reasonable demands of men. I think that this is strong evidence that a large percentage of U.S. adults do not think of only reasonable demands when responding to the statement that "Feminists are making entirely reasonable demands of men".

---

To address the concern that the interpretation of the "demands" differs by partisanship, here are support levels by partisan identification:

Democrats

  • 92% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 59% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 33 percentage-point difference

Republicans

  • 84% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 18% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 66 percentage-point difference

So that's an 8-point Democrat/Republican gap in favoring requiring employers to pay women and men the same amount for the same work, but a 41-point Democrat/Republican gap in agreement that feminists are making entirely reasonable demands of men.

I think that this is at least suggestive evidence that a nontrivial percentage of Democrats and an even higher percentage of Republicans are not thinking of reasonable feminist demands such as support for equal pay for equal work. If it is generally true that, responding to the "feminist demands" item, Democrats on average think of different demands than Republicans think of, that seems like a poor research design, to infer sexism in politically relevant variables based on a too-vague item that different political groups interpret differently.

---

NOTES:

1. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2016 Time Series Study [dataset]. Stanford University and the University of Michigan [producers].

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

2. CCES 2018 citation:

Stephen Ansolabehere, Brian F. Schaffner, and Sam Luks. Cooperative Congressional Election Study, 2018: Common Content. [Computer File] Release 2: August 28, 2019. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu.

3. ANES 2016 Stata code:

tab V162149                              // equal pay item (post-election)
tab V160502                              // post-election interview status
keep if V160502==1                       // keep post-election respondents
tab V162149
gen favorEQpay = V162149
recode favorEQpay (-9 -8 2 3=0)          // favor = 1; oppose/neither/nonresponse = 0
tab V162149 favorEQpay, mi               // check the recode
svyset [pweight=V160102], strata(V160201) psu(V160202)
svy: prop favorEQpay
tab V161155                              // party ID
svy: prop favorEQpay if V161155==1       // Democrats
svy: prop favorEQpay if V161155==2       // Republicans

4. CCES 2018 Stata code:

tab CC18_422d tookpost, mi               // "feminists ... reasonable demands" item
tab CC18_422d tookpost, mi nol
keep if tookpost==2                      // keep post-election respondents
tab CC18_422d, mi
gen femagree = CC18_422d
recode femagree (3/5 .=0) (1/2=1)        // strongly/somewhat agree = 1; else 0
tab CC18_422d femagree, mi               // check the recode
svyset [pw=commonpostweight]
svy: prop femagree
tab CC18_421a                            // party ID
svy: prop femagree if CC18_421a==1       // Democrats
svy: prop femagree if CC18_421a==2       // Republicans

---

This post discusses a commonly used "blatant" measure of dehumanization. Let me begin by proposing two blatant measures of dehumanization:

1. Yes or No?: Do you think that members of Group X are fully human?

2. On a scale in which 0 is not at all human and 10 is fully human, where would you rate members of Group X?

I would interpret a "No" response for the first measure and a response of any number lower than 10 for the second measure as dehumanization of members of Group X. If there is no reasonable alternate interpretation of these responses, then these are face-valid, unambiguous measures of blatant dehumanization.
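
For concreteness, here is a minimal Stata sketch of that coding rule; the variable names are hypothetical placeholders rather than items from any existing survey:

* Hypothetical coding sketch for the two proposed measures.
* fullyhuman_yn: 1 = Yes, 0 = No (placeholder variable name)
* human_0to10: 0 = not at all human ... 10 = fully human (placeholder)
gen dehum1 = fullyhuman_yn == 0 if fullyhuman_yn < .    // "No" = dehumanization
gen dehum2 = human_0to10 < 10 if human_0to10 < .        // any rating below 10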

---

But neither above measure is the commonly used social science measure of blatant dehumanization. Instead, the commonly used "measure of blatant dehumanization" (from Kteily et al. 2015), referred to as the Ascent measure, asks participants to use a slider to place groups along the familiar "ascent of man" image.

And here is how Kteily et al. 2015 described the ends of the tool (emphasis omitted):

Responses on the continuous slider were converted to a rating from 0 (least "evolved") to 100 (most "evolved")...

Note that participants are instructed to rate how "evolved" the participant considers the average member of a group to be and that these ratings are placed on a scale from "least evolved" to "most evolved", but these ratings are then interpreted as participant perceptions about the humanness of the group. This doesn't seem like a measure of blatant dehumanization if participants aren't asked to indicate their perceptions of how human the average member of a group is.

The Ascent measure is a blatant measure of dehumanization only if "human" and "evolved" are identical concepts, but these aren't identical concepts. It's possible to simultaneously believe that Bronze Age humans are fully human and that Bronze Age humans are less evolved than humans today. Moreover, I think that the fourth figure in the Ascent image is a Cro-Magnon that is classified by scientists as human, and Kteily et al. seem to agree:

...the image is used colloquially to highlight a salient distinction between early human ancestors and modern humans; that is, the full realization of cognitive ability and cultural expression

The perceived humanness of the fourth figure matters for understanding responses to the Ascent measure because much of the variation in responses occurs between the fourth figure and fifth figure (for example, see Table 1 of Kteily et al. 2015 and Note 1 below).

There is an important distinction between participants dehumanizing a group and participants rating one group lower than another group on a measure that participants interpret as indicating something other than "humanness", such as the degree of "realization of cognitive ability and cultural expression", especially because I don't think that humans need to have "the full realization of cognitive ability and cultural expression" in order to be fully human.

---

NOTES

1. The Jardina and Piston TESS study conducted in 2015 and 2016 with only non-Hispanic White participants had an Ascent measure in which 66% and 77% of unweighted responses for the respective targets of Blacks and Whites were in the 91-to-100 range.

2. I made some of the above points in 2015 in the ANES Online Commons. Lee Jussim raised issues discussed above in 2018, and I didn't find anything earlier.

3. More Twitter discussion of the Ascent measure: here with no reply, here with no reply, here with a reply, here with a reply.
