The plot below is based on data from the ANES 2022 Pilot Study, plotting the percentage of particular populations that rated the in-general intelligence of Whites higher than the in-general intelligence of Blacks (black dots) and the percentage of these populations that rated the in-general intelligence of Asians higher than the in-general intelligence of Whites (white dots). For the item wording, see the notes below or page 44 of the questionnaire.

My understanding is that, based on a straightforward / naïve interpretation of educational data such as NAEP scores as good-enough measures of intelligence [*], there isn't much reason to fall into the white-dot group but not the black-dot group, or vice versa. But, nonetheless, there is a gap between the dots in the overall population and in certain populations.

In the plot above, estimated percentages are similar among very conservative Whites and among U.S. residents who attributed at least some of the Black-American/Hispanic-American-vs-White-American difference in outcomes such as jobs and income to biological differences. But similar percentages can mask inconsistencies.

For example, among U.S. residents who attributed at least some of the Black-American/Hispanic-American-vs-White-American difference in outcomes such as jobs and income to biological differences, about 37% rated Asians' intelligence higher than Whites' intelligence, and about 34% rated Whites' intelligence higher than Blacks' intelligence, but only about 14% fell into both of these groups, as illustrated in the second panel below:
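For anyone wanting to compute such overlaps, below is a minimal R sketch, using hypothetical variable names rather than the actual ANES variable names (see the notes for the Stata code and output from my analysis): int_white, int_black, and int_asian hold the 1-to-7 ratings, weight holds the survey weight, and bio_attrib flags the biological-attribution subpopulation.

library(dplyr)

# Lower scores on the 1-to-7 item indicate higher rated intelligence,
# so "rated Asians higher than Whites" is int_asian < int_white.
anes |>
  filter(bio_attrib == 1) |>
  summarize(
    pct_asian_over_white = 100 * weighted.mean(int_asian < int_white, weight, na.rm = TRUE),
    pct_white_over_black = 100 * weighted.mean(int_white < int_black, weight, na.rm = TRUE),
    pct_both             = 100 * weighted.mean(int_asian < int_white &
                                                 int_white < int_black,
                                               weight, na.rm = TRUE)
  )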

The plot below presents corresponding comparisons: the estimated percentage of each population that rated the in-general intelligence of Whites higher than the in-general intelligence of Blacks (black dots) and the estimated percentage that rated the in-general intelligence of Asians higher than the in-general intelligence of Blacks (white dots).

---

[*] I can imagine reasons to not be in one or both dots, such as perceptions about the influence of past or present racial discrimination, the relative size of the gaps, flaws in the use of educational data as measures of intelligence, and imperfections in the wording of the ANES item. But I nonetheless thought that it would be interesting to check respondent ratings about racial group intelligence.

---

NOTES

1. Relevant item wording from the ANES 2022 Pilot Study:

Next, we're going to show you a seven-point scale on which the characteristics of the people in a group can be rated. In the first statement a score of '1' means that you think almost all of the people in that group tend to be 'intelligent.' A score of '7' means that you think most people in the group are 'unintelligent.' A score of '4' means that you think that most people in the group are not closer to one end or the other, and of course, you may choose any number in between. Where would you rate each group in general on this scale?

2. The ANES 2022 Pilot Study had a parallel item about Hispanic-Americans that I didn't analyze, to avoid complicating the presentation.

3. In the full sample, weighted, 13% rated in-general Black intelligence higher than in-general White intelligence (compared to 25% the other way), 8% rated in-general Black intelligence higher than in-general Asian intelligence (compared to 38% the other way), and 10% rated in-general White intelligence higher than in-general Asian intelligence (compared to 35% the other way). Respective equal ratings of in-general intelligence were 62% White/Black, 54% Asian/Black, and 55% Asian/White.

Respondents were coded into a separate category if they didn't provide an intelligence rating for at least one of the racial groups in a comparison, but almost all respondents provided a rating for each racial group.

4. Plots created with R packages: tidyverse, waffle, and patchwork.
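A minimal sketch of how such a waffle-plus-patchwork figure can be assembled, using the rounded percentages from the biological-attribution example above rather than the actual plotting code for this post:

library(waffle)     # waffle() returns a ggplot object
library(patchwork)  # patchwork arranges ggplot objects

# One square per percentage point.
p1 <- waffle(c("Rated Whites > Blacks" = 34, "Did not" = 66), rows = 5)
p2 <- waffle(c("Rated Asians > Whites" = 37, "Did not" = 63), rows = 5)

p1 / p2  # patchwork syntax: stack the two panels vertically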

5. Data for the ANES 2022 Pilot Study. Stata code and output for my analysis.

6. An earlier draft of the first plot is below; I didn't like it as much because it seemed too wide and less visually attractive:

7. The shading in the plot below is intended to emphasize the size of the gaps between the estimates within a population, with red indicating reversal of the typical pattern:

8. Plot replacing the legend with direct labels:

9. Bonus plot while I'm working on visualizations: this plot compares ratings about men and women on 0-to-100 feeling thermometers, with a confidence interval for each category, as if each category were plotted as its own percentage:


In a prior post, I criticized the questionnaire for the ANES 2020 Time Series Study, so I want to use this post to praise the questionnaire for the ANES 2022 Pilot Study, plus add some other comments.

---

1. The pilot questionnaire has items that ask participants to rate men and women on 0-to-100 feeling thermometers, which will permit assessment of the association between negative attitudes about women and negative attitudes about men, presuming that some of the planned 1,500 respondents express such negative attitudes.
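As a sketch of one way such a measure could be constructed, treating a rating below the 50 midpoint as a net negative attitude, with hypothetical variable names ft_men, ft_women, and weight:

library(dplyr)

anes |>
  mutate(
    neg_women = ft_women < 50,  # below the midpoint = net negative rating
    neg_men   = ft_men < 50
  ) |>
  summarize(
    pct_neg_women = 100 * weighted.mean(neg_women, weight, na.rm = TRUE),
    pct_neg_men   = 100 * weighted.mean(neg_men, weight, na.rm = TRUE),
    pct_neg_both  = 100 * weighted.mean(neg_women & neg_men, weight, na.rm = TRUE)
  )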

2. The pilot questionnaire has items in which response options permit underestimation of the frequency of certain types of vote fraud, with a "Never" option for items about how often in the respondent's state [1] a voter casts more than one ballot and [2] votes are cast on behalf of dead people. Each type of fraud has happened at least once recently in Arizona (see also https://www.heritage.org/voterfraud), and I suspect that the belief that such fraud never happens is currently a misperception more common on the political left.

But phrasing the vote fraud items in terms of the respondent's state doesn't seem like a good idea, because coding a response as a misperception then requires checking evidence in all 50 states. And I don't think there is an obvious threshold for overestimating how often, say, a voter casts more than one ballot: "Rarely" seems like an appropriate response for Arizona residents, but is "Occasionally" incorrect?

3. The pilot questionnaire has an item about the genuineness of emails on Hunter Biden's laptop in which Hunter Biden "contacted representatives of foreign governments about business deals". So I guess that can serve as a misinformation item on which liberals are more likely to be misinformed.

4. The pilot questionnaire has items about whether being White/Black/Hispanic/Asian "comes with advantages, disadvantages, or doesn't it matter". Based on the follow-up item, these items might not permit respondents to select both "advantages" and "disadvantages", and, if so, it might be better to differentiate respondents who think that, for instance, being White has only advantages from respondents who think that being White has on net more advantages than disadvantages.

5. The pilot questionnaire permits respondents to report the belief that Black and Hispanic Americans have lower socioeconomic status than White Americans because of biological differences, but respondents can't report the belief that particular less positive outcomes for White Americans relative to another group are due to biological differences (e.g., average White American K12 student math performance relative to average Asian American K12 student math performance).

---

Overall, the 2022 pilot seems like an improvement. For one thing, the pilot questionnaire, as is common for the ANES, has feeling thermometers about Whites, Blacks, Hispanics, and Asians, so that it's possible to construct a measure of negative attitudes about each included racial/ethnic group. And the feeling thermometers for men and women permit construction of a measure of negative attitudes about men and about women. For another thing, respondents can report misperceptions that are presumably more common among persons on the political left. That's more than a lot of similar surveys permit.


Electoral Studies recently published Jardina and Stephens-Dougan 2021 "The electoral consequences of anti-Muslim prejudice". Jardina and Stephens-Dougan 2021 reported results from the 2004 through 2020 ANES Time Series Studies, estimating the effect of anti-Muslim prejudice on vote choice among White Americans, using feeling thermometer ratings and responses on stereotype scales.

Figure 1 of Jardina and Stephens-Dougan 2021 reports non-Hispanic Whites' mean feeling thermometer ratings about Muslims, Whites, Blacks, Hispanics, and Asians...but not about Christian fundamentalists, even though ANES data for each year in Figure 1 contain feeling thermometer ratings about Christian fundamentalists.

The code for Jardina and Stephens-Dougan 2021 includes a section for "*Robustness for anti christian fundamental affect", indicating an awareness of the thermometer ratings about Christian fundamentalists.

I drafted a quick report about how reported 2020 U.S. presidential vote choice associated with feeling thermometer ratings about Jews, Christians, Muslims, and Christian fundamentalists, using data from the ANES 2020 Time Series Study. Plots are below, with more detailed descriptions in the quick report.

This first plot is of the distributions of feeling thermometer ratings about the religious groups asked about, with categories such as [51/99] indicating the percentage that rated the indicated group at 51 through 99 on the thermometer:

This next plot is of how the ratings about a given religious group associated with 2020 two-party presidential vote choice for Trump, with demographic controls only, and a separate regression for ratings about each religious group:

This next plot added controls for partisanship, political ideology, and racial resentment, and put all ratings of religious groups into the same regression:

The above plot zooms in on y-axis percentages from 20 to 60. The plot in the quick report has a y-axis that runs from 0 to 100.

---

Based on a Google Scholar search, research is available about the political implications of attitudes about Christian fundamentalists, such as Bolce and De Maio 1999. I plan to add a discussion of this research if I convert the quick report into a proper paper.

---

The technique in the quick report hopefully improves on the Jardina and Stephens-Dougan 2021 technique for estimating anti-Muslim prejudice. From Jardina and Stephens-Dougan 2021 (p. 5):

A one-unit change on the anti-Muslim affect measure results in a 16-point colder thermometer evaluation of Kerry in 2004, a 22-point less favorable evaluation of Obama in both 2008 and 2012, and a 17-point lower rating of Biden in 2020.

From what I can tell, this one-unit change is the difference in estimated support for a candidate, net of controls, comparing a rating of 0 about Muslims on the feeling thermometer to a rating of 100, based on a regression in which the "Negative Muslim Affect" predictor was merely the feeling thermometer rating about Muslims reversed and placed on a 0-to-1 scale.

If so, then the estimated effect size of anti-Muslim affect is identical to the estimated effect size of pro-Muslim affect, because a linear 0-to-1 predictor assigns the same effect to each step of the scale regardless of where the step occurs. Or maybe Jardina and Stephens-Dougan 2021 considers rating Muslims at 100 to be indifference about Muslims, 99 to indicate some anti-Muslim affect, 98 a bit more anti-Muslim affect, and so on.

It seems more reasonable to me that some people are on net indifferent about Muslims, some people have on net positive absolute views about Muslims, and some people have on net negative absolute views about Muslims. So instead I coded feeling thermometer ratings for each religious group into six categories: zero (the coldest possible rating), 100 (the warmest possible rating), 1 through 49 (residual cold ratings), 50 (indifference), 51 through 99 (residual warm ratings), and non-responses.

The extreme categories of 0 and 100 are to estimate the outcome at the extremes, and the 50 category is to estimate the outcome at indifference. If the number of observations at an extreme is not sufficiently large for some predictors, it might make more sense to collapse that extreme value into the adjoining category on the same side of 50.
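Below is a minimal R sketch of this six-category coding, with ft_muslims as a hypothetical name for the 0-to-100 ratings (my actual analysis used Stata; see the notes):

library(dplyr)

# For contrast, the Jardina and Stephens-Dougan 2021 predictor appears to be
# merely: neg_muslim_affect = (100 - ft_muslims) / 100
anes <- anes |>
  mutate(ft_muslims_cat = case_when(
    is.na(ft_muslims) ~ "no response",
    ft_muslims == 0   ~ "0 (coldest)",
    ft_muslims < 50   ~ "1-49 (residual cold)",   # conditions checked in order
    ft_muslims == 50  ~ "50 (indifference)",
    ft_muslims < 100  ~ "51-99 (residual warm)",
    ft_muslims == 100 ~ "100 (warmest)"
  ))

Entered into a vote choice regression as a factor, this coding produces a separate estimate for each category instead of imposing a single linear slope across the 0-to-100 scale.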

---

NOTES

1. Jardina and Stephens-Dougan 2021 footnote 24 has an unexpected-to-me criticism of Michael Tesler's work.

We note that our findings with respect to 2012 are not consistent with Tesler (2016a), who finds that anti-Muslim attitudes were predictive of voting for Obama in 2012. Tesler, however, does not control for economic evaluations in his vote choice models, despite the fact that attitudes toward the economy are notoriously important predictors of presidential vote choice (Vavreck 2009)...

I don't think that a regression should include a predictor merely because that predictor is known to predict the outcome well, so it's not clear to me that Tesler or anyone else should include participant economic evaluations when predicting vote choice merely because those evaluations predict vote choice.

It seems plausible that a nontrivial part of participant economic evaluations is downstream from attitudes about the candidates. Tesler's co-authored Identity Crisis book has a plot (p. 208) illustrating the flip-flop by Republicans and Democrats on views of the economy from around November 2016, with a note that:

This is another reason to downplay the role of subjective economic dissatisfaction in the election: it was largely a consequence of partisan politics, not a cause of partisans' choices.

2. Jardina and Stephens-Dougan 2021 indicated that (p. 5):

The fact, however, that the effect size of anti-Muslim affect is often on par with the effect size of racial resentment is especially noteworthy, given that the construct is measured far less robustly than the multi-item measure of racial resentment.

The anti-Muslim affect measure is a reversed 0-to-100 feeling thermometer, which has 101 potential levels. Racial resentment is built from four items, with each item having five substantive options, which would permit a measure with 17 substantive levels (e.g., summing four items each scored 1 through 5 yields totals from 4 to 20: 17 distinct values), not counting any intermediate levels that might occur for participants with missing data for some but not all of the four items.

I'm not sure why it's particularly noteworthy that the estimated effect for the 101-level measure is on par with the estimated effect for the 17-level measure. From what I can tell, these measures are not easily comparable, unless we know, for example, the percentage of participants that fell into the most extreme levels.

3. Jardina and Stephens-Dougan 2021 reviewed a lot of the research on the political implications of attitudes about Muslims, but made no mention of Helbling and Traunmüller 2018, which, based on data from the UK, indicated that:

The results suggest that Muslim immigrants are not per se viewed more negatively than Christian immigrants. Instead, the study finds evidence that citizens' uneasiness with Muslim immigration is first and foremost the result of a rejection of fundamentalist forms of religiosity.

4. I have a prior post about selective reporting in the 2016 JOP article from Stephens-Dougan, the second author of Jardina and Stephens-Dougan 2021.

5. Quick report. Stata code. Stata output.


The plot below reports the mean rating from Whites, Blacks, Hispanics, and Asians about Whites, Blacks, Hispanics, and Asians, using data from the preliminary release of the ANES 2020 Time Series Study.
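A minimal sketch of how such a matrix of means could be computed in R, with hypothetical variable names (race for respondent race/ethnicity, weight for the survey weight, and ft_whites through ft_asians for the ratings of each group); the Stata code in the notes is the actual analysis:

library(dplyr)
library(tidyr)

anes |>
  filter(race %in% c("White", "Black", "Hispanic", "Asian")) |>
  pivot_longer(c(ft_whites, ft_blacks, ft_hispanics, ft_asians),
               names_to = "rated_group", values_to = "rating") |>
  group_by(race, rated_group) |>
  summarize(mean_rating = weighted.mean(rating, weight, na.rm = TRUE),
            .groups = "drop")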

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

2. Stata code. Stata output. R code for the plots. Dataset for the R plot.
