Electoral Studies recently published Jardina and Stephens-Dougan 2021 "The electoral consequences of anti-Muslim prejudice". Jardina and Stephens-Dougan 2021 reported results from the 2004 through 2020 ANES Time Series Studies, estimating the effect of anti-Muslim prejudice on vote choice among White Americans, using feeling thermometer ratings and responses to stereotype scales.

Figure 1 of Jardina and Stephens-Dougan 2021 reports non-Hispanic Whites' mean feeling thermometer ratings about Muslims, Whites, Blacks, Hispanics, and Asians...but not about Christian fundamentalists, even though ANES data for each year in Figure 1 contain feeling thermometer ratings about Christian fundamentalists.

The code for Jardina and Stephens-Dougan 2021 includes a section for "*Robustness for anti christian fundamental affect", indicating an awareness of the thermometer ratings about Christian fundamentalists.

I drafted a quick report about how reported 2020 U.S. presidential vote choice associated with feeling thermometer ratings about Jews, Christians, Muslims, and Christian fundamentalists, using data from the ANES 2020 Time Series Study. Plots are below, with more detailed descriptions in the quick report.

This first plot is of the distributions of feeling thermometer ratings about the religious groups asked about, with categories such as [51/99] indicating the percentage that rated the indicated group at 51 through 99 on the thermometer:

This next plot is of how ratings about a given religious group associated with 2020 two-party presidential vote choice for Trump, with demographic controls only and a separate regression for the ratings about each religious group:

This next plot adds controls for partisanship, political ideology, and racial resentment, and puts the ratings about all of the religious groups into the same regression:

The above plot zooms in on y-axis percentages from 20 to 60. The plot in the quick report has a y-axis that runs from 0 to 100.
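For reference, the full-controls model behind the plot above has roughly the form sketched below. This is a minimal sketch only, with hypothetical variable names (voted_trump, ft_muslim_cat, anes_weight, and so on) that are not necessarily the names in the ANES 2020 file or in the Stata code linked in the notes:

    * Sketch only: full-controls model, with hypothetical variable names
    * voted_trump: 1 = Trump, 0 = Biden (two-party vote); ft_*_cat: the six-category thermometer codings
    logit voted_trump i.ft_muslim_cat i.ft_christian_cat i.ft_jew_cat i.ft_fundamentalist_cat ///
        i.party_id i.ideology c.racial_resentment i.gender i.agegroup i.education ///
        [pweight=anes_weight]
    margins ft_muslim_cat   // average predicted probability of a Trump vote, by Muslim thermometer category
    marginsplot             // plot of the predicted probabilities

Predicted probabilities from margins are on a 0-to-1 scale; the percentages on the plots' y-axes correspond to these probabilities multiplied by 100.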

---

Based on a Google Scholar search, research is available about the political implications of attitudes about Christian fundamentalists, such as Bolce and De Maio 1999. I plan to add a discussion of this research if I convert the quick report into a proper paper.

---

The technique in the quick report hopefully improves on the Jardina and Stephens-Dougan 2021 technique for estimating anti-Muslim prejudice. From Jardina and Stephens-Dougan 2021 (p. 5):

A one-unit change on the anti-Muslim affect measure results in a 16-point colder thermometer evaluation of Kerry in 2004, a 22-point less favorable evaluation of Obama in both 2008 and 2012, and a 17-point lower rating of Biden in 2020.

From what I can tell, this one-unit change is the difference in estimated support for a candidate, net of controls, comparing a rating of 0 about Muslims on the feeling thermometer to a rating of 100, based on a regression in which the "Negative Muslim Affect" predictor was merely the feeling thermometer rating about Muslims, reversed and placed on a 0-to-1 scale.
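If that reading is correct, the measure amounts to something like the Stata line below, a minimal sketch in which ft_muslim is a hypothetical name for the Muslim feeling thermometer item:

    * Sketch only: reversed 0-to-100 thermometer, rescaled to run from 0 to 1
    * (ft_muslim is a hypothetical variable name, with valid values 0 through 100)
    generate negative_muslim_affect = (100 - ft_muslim) / 100 if inrange(ft_muslim, 0, 100)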

If so, then the estimated effect size of anti-Muslim affect is identical to the estimated effect size of pro-Muslim affect. Or maybe Jardina and Stephens-Dougan 2021 considers a rating of 100 about Muslims to indicate indifference, a 99 to indicate some anti-Muslim affect, a 98 a bit more anti-Muslim affect, and so on.

It seems more reasonable to me that some people are on net indifferent about Muslims, some people have on net positive absolute views about Muslims, and some people have on net negative absolute views about Muslims. So instead I coded the feeling thermometer ratings for each religious group into six categories: 0 (the coldest possible rating), 1 through 49 (residual cold ratings), 50 (indifference), 51 through 99 (residual warm ratings), 100 (the warmest possible rating), and non-responses.
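In Stata, that coding is along the lines of the sketch below, in which ft_group is a hypothetical variable name and non-substantive ANES codes are assumed to have already been set to missing:

    * Sketch only: six categories from a 0-to-100 feeling thermometer (hypothetical variable ft_group)
    generate ft_group_cat = .
    replace ft_group_cat = 1 if ft_group == 0              // coldest possible rating
    replace ft_group_cat = 2 if inrange(ft_group, 1, 49)   // residual cold ratings
    replace ft_group_cat = 3 if ft_group == 50             // indifference
    replace ft_group_cat = 4 if inrange(ft_group, 51, 99)  // residual warm ratings
    replace ft_group_cat = 5 if ft_group == 100            // warmest possible rating
    replace ft_group_cat = 6 if missing(ft_group)          // non-response
    label define ftcat 1 "0" 2 "1-49" 3 "50" 4 "51-99" 5 "100" 6 "No response"
    label values ft_group_cat ftcat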

The extreme categories of 0 and 100 are to estimate the outcome at the extremes, and the 50 category is to estimate the outcome at indifference. If the number of observations at an extreme is not sufficiently large for some predictors, it might make more sense to collapse that extreme value into the adjoining category on the same side of 50.

---

NOTES

1. Jardina and Stephens-Dougan 2021 footnote 24 has an unexpected-to-me criticism of Michael Tesler's work.

We note that our findings with respect to 2012 are not consistent with Tesler (2016a), who finds that anti-Muslim attitudes were predictive of voting for Obama in 2012. Tesler, however, does not control for economic evaluations in his vote choice models, despite the fact that attitudes toward the economy are notoriously important predictors of presidential vote choice (Vavreck 2009)...

I don't think that a regression should include a predictor merely because the predictor is known to predict the outcome well, so it's not clear to me that Tesler or anyone else should include participants' economic evaluations when predicting vote choice merely because those evaluations predict vote choice.

It seems plausible that a nontrivial part of participants' economic evaluations is downstream from attitudes about the candidates. Tesler's co-authored Identity Crisis book has a plot (p. 208) illustrating the flip-flop by Republicans and Democrats on views of the economy around November 2016, with a note that:

This is another reason to downplay the role of subjective economic dissatisfaction in the election: it was largely a consequence of partisan politics, not a cause of partisans' choices.

2. Jardina and Stephens-Dougan 2021 indicated that (p. 5):

The fact, however, that the effect size of anti-Muslim affect is often on par with the effect size of racial resentment is especially noteworthy, given that the construct is measured far less robustly than the multi-item measure of racial resentment.

The anti-Muslim affect measure is a reversed 0-to-100 feeling thermometer, which has 101 potential levels. Racial resentment is built from four items, with each item having five substantive options, so that would permit the creation of a measure with 17 substantive levels (for example, summing four items each scored 1 through 5 produces sums from 4 to 20, or 17 distinct values), not counting any intermediate levels that might occur for participants with missing data on some but not all of the four items.

I'm not sure why it's particularly noteworthy that the estimated effect for the 101-level measure is on par with the estimated effect for the 17-level measure. From what I can tell, these measures are not easily comparable, unless we know, for example, the percentage of participants that fell into the most extreme levels of each measure.

Jardina and Stephens-Dougan 2021 reviewed a lot of the research on the political implications of attitudes about Muslims, but did not mention Helbling and Traunmüller 2018, which, based on data from the United Kingdom, indicated that:

The results suggest that Muslim immigrants are not per se viewed more negatively than Christian immigrants. Instead, the study finds evidence that citizens' uneasiness with Muslim immigration is first and foremost the result of a rejection of fundamentalist forms of religiosity.

4. I have a prior post about selective reporting in the 2016 JOP article from Stephens-Dougan, the second author of Jardina and Stephens-Dougan 2021.

5. Quick report. Stata code. Stata output.
