The British Journal of Political Science published Jardina and Piston 2021 "The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions".

---

1.

Jardina and Piston 2021 used the "Ascent of Man" measure of dehumanization, which I have discussed previously. Jardina and Piston 2021 subtracted participant responses to the 0-to-100 measure of perceptions of how evolved Blacks are from participant responses to the 0-to-100 measure of perceptions of how evolved Whites are, and placed this difference on a 0-to-1 scale.

Jardina and Piston 2021 placed this 0-to-1 measure of dehumanization into an OLS regression with controls and halved the resulting coefficient, such as the 0.60 coefficient in Table 1 (for which the p-value is less than p=0.001). For that coefficient, moving from the neutral point on the dehumanization scale to the highest measured dehumanization of Blacks relative to Whites thus accounted for 0.30 points on the outcome variable scale, which for this estimate was a 0-to-100 feeling thermometer rating about Donald Trump placed on a 0-to-1 scale.
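
As a sketch of the arithmetic, with hypothetical raw ratings and assuming the rescaling works as described above:

# Sketch of the rescaling and interpretation, using hypothetical raw values.
evolved_whites <- 90   # 0-to-100 "Ascent of Man" rating of Whites
evolved_blacks <- 40   # 0-to-100 "Ascent of Man" rating of Blacks

# The White-minus-Black difference runs from -100 to 100; rescaled to 0-to-1,
# 0.5 is the neutral point and 1.0 is the maximum dehumanization of Blacks.
dehumanization <- ((evolved_whites - evolved_blacks) + 100) / 200   # 0.75 here

# With a 0.60 coefficient on the 0-to-1 measure, moving from the neutral point
# (0.5) to the maximum (1.0) corresponds to half the coefficient:
0.60 * (1.0 - 0.5)   # 0.30 on the 0-to-1 outcome scale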

However, this research design means that 0.30 points on a 0-to-1 scale is also the corresponding estimate of how much dehumanization of Whites relative to Blacks affected feeling thermometer ratings about Donald Trump. Jardina and Piston 2021 thus did not permit the estimated marginal effect of dehumanizing Blacks to differ from the estimated marginal effect of dehumanizing Whites.

I discussed this before (1, 2), but it's often better not to assume a linear association for continuous predictors (Hainmueller et al. 2019).
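
As an illustration, here is a minimal sketch in R (simulated data, hypothetical variable names) of one way to let the estimate differ on each side of the neutral point instead of forcing a single linear slope:

# Separate slopes above and below the neutral point (0.5) of the 0-to-1 measure.
set.seed(1)
df <- data.frame(dehumanization = runif(500),   # 0-to-1 measure, 0.5 = neutral
                 trump_therm    = runif(500))   # 0-to-1 Trump thermometer

df$dehum_blacks <- pmax(df$dehumanization - 0.5, 0)   # distance above neutral
df$dehum_whites <- pmax(0.5 - df$dehumanization, 0)   # distance below neutral

# Each direction gets its own coefficient (controls omitted from this sketch):
summary(lm(trump_therm ~ dehum_blacks + dehum_whites, data = df))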

Figure 3 of Jardina and Piston 2021 censors the estimated effect of dehumanizing Whites by plotting predicted probabilities of a Trump vote among Whites but restricting the plotted range of dehumanization to run from neutral (0.5 on the dehumanization measure) to the maximum dehumanization of Blacks (1.0 on the measure).

---

2.

Jardina and Piston 2021 claimed that "Finally, our findings serve as a warning about the nature of Whites' racial attitudes in the contemporary United States" (p. 20). But Jardina and Piston 2021 did not report any evidence that Whites' attitudes in this area differ from non-Whites' attitudes in this area. That seems like a relevant question for researchers interested in understanding racial attitudes.

If I'm reading page 9 correctly, Jardina and Piston 2021 reported on original survey data from a 2016 YouGov two-wave panel of 600 non-Hispanic Whites, a 2016 Qualtrics survey of 500 non-Hispanic Whites, a 2016 GfK survey of 2,000 non-Hispanic Whites, and another 2016 YouGov two-wave panel of 600 non-Hispanic Whites.

The funding statement in Jardina and Piston 2021 acknowledges only Duke University and Boston University. That's a lot of internal resources for surveys conducted in a single year, and I don't think that Jardina and Piston 2021 limiting the analysis to Whites can be reasonably attributed to a lack of resources.

The Qualtrics_BJPS.dta dataset at the Dataverse page for Jardina and Piston 2021 has cases for 1,125 Whites, 242 Blacks, 88 Asians, 45 Native Americans, and 173 coded Other, with respective non-Latino cases of 980, 213, 83, 31, and 38. The Dataverse page doesn't have a codebook for that dataset, and the relevant variable names in that dataset aren't clear to me, but I plan to post a follow-up here if I get sufficient information to analyze responses from non-White participants.

---

3.

Jardina and Piston 2021 suggested (p. 4) that:

We also suspect that recent trends in the social and natural sciences are legitimizing beliefs about biological differences between racial groups in ways that reinforce a propensity to dehumanize Black people.

This passage did not mention the Jardina and Piston 2015/6 TESS experiment in which participants were assigned to a control condition, or a condition with a reading entitled "Genes May Cause Racial Difference in Heart Disease", or a condition with a reading entitled "Social Conditions May Cause Racial Difference in Heart Disease".

My analysis of data for that experiment found a p<0.01 difference between treatment groups in mean responses to an item about whether there are biological differences between Blacks and Whites, which suggests that the treatment worked. But the treatment didn't produce a detectable effect on key outcomes, according to the description of results on the page for the Jardina and Piston 2015/6 TESS experiment, which indicates that "Experimental conditions are not statistically associated with the distribution of responses to the outcome variables". This null result seems relevant to the suspicion quoted above from Jardina and Piston 2021.

---

4.

Jardina and Piston 2021 indicated that "Dehumanization has serious consequences. It places the targets of these attitudes outside of moral consideration, ..." (p. 6). But the Jardina and Piston proposal for the 2015/6 TESS experiment called for some participants to be exposed to a treatment that Jardina and Piston hypothesized would increase participants' "biological racism", to use a term from the proposal.

Selected passages from the proposal are below:

We hypothesize that the proportion of respondents rating whites as more evolved than blacks is highest in the Race as Genetics Condition, lower in the Control Condition, and lowest in the Race as Social Construction Condition.

...Our study will also inform scholarship on news media communication, demonstrating that ostensibly innocuous messages about race, health, and genetics can have pernicious consequences.

Exposing some participants to a treatment that the researchers hypothesized as having "pernicious consequences" seems like an interesting ethical issue that the proposal didn't discuss.

Moreover, like some other research that uses the Ascent of Man measure of dehumanization, the Jardina and Piston 2015/6 TESS experiment included the statement that "People can vary in how human-like they seem". I wonder which people this statement is meant to refer to. Nothing in the debriefing indicated that this statement was deception.

---

5.

The dataset for the Jardina and Piston 2015/6 TESS experiment includes comments from participants. I thought that comments from participants with IDs 499 and 769 were worth highlighting (the statements were cut off in the dataset):

I disliked this survey as you should ask the same questions about whites. I was not willing to say blacks were not rational but whites are not rational either. But to avoid thinking I was prejudice I had to give a higher rating. All humans a

Black people are not less evolved, 'less evolved' is a meaningless term as evolution is a constant process and the only difference is what particular adaptations a group has. I don't like to claim certainty about things of which I am unsure, a

---

NOTES

1. The Table 3 header for Jardina and Piston 2021 indicates that vote choice is the outcome for that table, but the corresponding note indicates that "Higher values of the dependent variable indicate greater warmth toward Trump on the 101-point feeling thermometer". Moreover, Figure 3 of Jardina and Piston 2021 plots predicted probabilities of a vote for Trump, but the figure note indicates that the figure was "based on Table 4, Model 4", which is instead about warmth toward Obama.

2. Jardina and Piston 2021 reported results for participant responses about the "dehumanizing characteristics" of "savage", "barbaric", and "lacking self-restraint, like animals", so I checked how responses to the "violent" stereotype item were associated with two-party presidential vote choice in data from the ANES 2020 Time Series Study.

Results indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Blacks as being more violent on average than Whites were more likely to vote for Donald Trump net of controls (p<0.05). But results also indicated that, compared to White participants who rated Whites as being as violent on average as Blacks, White participants who rated Whites as being more violent on average than Blacks were less likely to vote for Donald Trump net of controls (p<0.05). See lines 105 through 107 in the output.
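
For reference, here is a rough sketch in R (simulated data, hypothetical variable names) of how comparison categories like these can be constructed from the two stereotype items:

# Three-category predictor built from the "violent" stereotype items.
set.seed(1)
d <- data.frame(violent_blacks = sample(1:7, 1000, replace = TRUE),
                violent_whites = sample(1:7, 1000, replace = TRUE))

d$violence_gap <- with(d, ifelse(violent_blacks > violent_whites, "Blacks rated more violent",
                          ifelse(violent_blacks < violent_whites, "Whites rated more violent",
                                 "Groups rated equally violent")))
table(d$violence_gap)
# In a vote choice model, "Groups rated equally violent" can serve as the
# reference category for both comparisons described above.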

3. Jardina and Piston 2021 reported that, in their 2016b YouGov survey, 42% of Whites rated Whites as more evolved than Blacks (pp. 9-10). For comparison, the Martherus et al. 2019 study of Democrats' and Republicans' dehumanization of outparty members reported dehumanization percentages of "nearly 77%" (2018 SSI study) and "just over 80%" (2018 CCES study).

4. Data sources:

American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

Ashley Jardina and Spencer Piston. 2015/6. Data for: "Explaining the Prevalence of White Biological Racism against Blacks". Time-sharing Experiments for the Social Sciences. https://www.tessexperiments.org/study/pistonBR61

Ashley Jardina and Spencer Piston. 2021. Replication Data for: The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions, https://doi.org/10.7910/DVN/A3XIFC, Harvard Dataverse, V1, UNF:6:nNg371BCnGaWtRyNdu0Lvg== [fileUNF]
---

The recent Mason et al. Monkey Cage post claimed that:

We found about 30 percent of Americans surveyed in 2011 reported feelings of animosity towards African Americans, Hispanics, Muslims, and the LGBTQ community. These individuals make up our MAGA faction.

But much less than 30% of Americans reported animus toward all four of these groups. In unweighted analyses using the 2011 VOTER data, the percentage that rated the group under 50 on a 0-to-100 feeling thermometer was 13% for Blacks, 17% for Latinos, 46% for Muslims, and 26% for gays and lesbians. Only about 3% rated all four groups under 50.

So how did Mason et al. get 30%? Based on the Mason et al. figure note (and my check in Stata), the 30% is the percentage of respondents whose average rating across the four groups is under 50.
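
Here is a small sketch in R (simulated ratings, not the VOTER data) of the difference between the two calculations:

# Share with an *average* rating under 50 versus share rating *all four* groups under 50.
set.seed(1)
ft <- data.frame(blacks  = sample(0:100, 1000, replace = TRUE),
                 latinos = sample(0:100, 1000, replace = TRUE),
                 muslims = sample(0:100, 1000, replace = TRUE),
                 lgbtq   = sample(0:100, 1000, replace = TRUE))

mean(rowMeans(ft) < 50)        # percentage whose average rating across the groups is under 50
mean(apply(ft < 50, 1, all))   # percentage who rated each of the four groups under 50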

But I don't think that the average across variables should be used to describe responses to individual variables. I think that it would be misleading, for instance, to describe the respondent who rated Blacks at 75 and Muslims at 0 as reporting animosity toward Blacks and Muslims, especially given that the respondent rated Whites at 71 and Christians at 0.

---

Mason et al. write that:

Our research does suggest that, as long as this MAGA faction exists, politicians may be tempted to appeal to it, hoping to repeat Trump's success. In fact, using inflammatory and divisive appeals would be a rational campaign strategy, since they can animate independent voters who dislike these groups.

It seems reasonable to be concerned about politicians appealing to intolerant people, but I'm not sure that it's reasonable to limit this concern about intolerance to the MAGA faction.

Below are data from the ANES 2020 Time Series Study, indicating the percentage of the U.S. population that rated each of a set of target groups under 50 on a 0-to-100 feeling thermometer, disaggregated by partisanship:

So the coalitions that reported cold ratings about Hispanics, Blacks, gay men and lesbians, Muslims, transgender people, and illegal immigrants are disproportionately Republican (compared to Democratic), and the coalitions that reported cold ratings about rural Americans, Whites, Christians, and Christian fundamentalists are disproportionately Democratic (compared to Republican).

Democrats were more common in the data than Republicans were, so the plot above doesn't permit direct comparison of the blue bars to the red bars to assess relative frequency of cold ratings by party. To permit that assessment, the plot below indicates the percentage of Democrats and the percentage of Republicans that reported a cold rating of the indicated target group:

---

Mason et al. end their Monkey Cage post with:

But identifying this MAGA faction as both separate from and related to partisan politics can help us better understand the real conflict. When a small, intolerant faction of citizens wields disproportionate influence over nationwide governance, democracy erodes. Avoiding discussion about this group only protects its power.

But the Mason et al. Monkey Cage post names only one intolerant group -- the MAGA faction -- and avoids naming the group that is intolerant of Whites and Christians, which, by the passage above, presumably protects the power of that other intolerant group.

---

NOTES

1. Data citation: American National Election Studies. 2021. ANES 2020 Time Series Study Full Release [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

2. Link to the Mason et al. 2021 APSR letter.

3. The 2011 VOTER survey thermometer items directed respondents to "Click on the thermometer to give a rating". If this means that respondents did something like moving a widget instead of inputting a numeric rating, then I think that cold ratings might be overestimated, if some respondents tried to rate at 50, instead moved to a bit under 50, and then figured that 49 or so was close enough.

But this might not be a large bias: for example, the thermometer about Blacks had 27, 44, 745, 376, and 212 responses for ratings of 48, 49, 50, 51, and 52, respectively.

4. Draft plots:

5. Stata code for the analyses, plus: tab pid3_2011 if ft_white_2011==71 & ft_christian_2011==0 & ft_black_2011==75 & ft_muslim_2011==0

6. R data and code for the "three color" barplot.

7. R data and code for the "back-to-back" barplot.

8. R data and code for the "full sample" barplot.

9. R data and code for the "two panel" barplot.

---

Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, as in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this holds net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But using weights for the full samples compares respondents in 2004 and 2008, all of whom were interviewed face-to-face, with respondents in 2012, some of whom were interviewed face-to-face and some of whom completed the survey online.

Using weights only for the face-to-face mode, the p-value was not under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, net of controls and without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
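
Here is a rough sketch in R (simulated data, not the ANES variable names or weights) of the type of comparison described above:

# The same year contrast estimated with full-sample weights and then with
# face-to-face respondents only.
library(survey)
set.seed(1)
anes <- data.frame(year       = rep(c(2008, 2012), each = 1000),
                   weight     = runif(2000, 0.5, 1.5),
                   resentment = runif(2000))
anes$mode <- ifelse(anes$year == 2008, "ftf",
                    sample(c("ftf", "web"), 2000, replace = TRUE))

full <- svydesign(ids = ~1, weights = ~weight, data = anes)
ftf  <- svydesign(ids = ~1, weights = ~weight, data = subset(anes, mode == "ftf"))

svyglm(resentment ~ factor(year), design = full)   # 2012 pools face-to-face and internet
svyglm(resentment ~ factor(year), design = ftf)    # face-to-face respondents only
# The actual ANES files provide separate weight variables for the full sample and
# for the face-to-face cases; this sketch uses one weight column for simplicity.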

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for the internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to a survey mode coefficient of -2.610.

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.

---

The Journal of Race, Ethnicity, and Politics published Nelson 2021 "You seem like a great candidate, but…: Race and gender attitudes and the 2020 Democratic primary".

Nelson 2021 is an analysis of racial attitudes and gender attitudes that makes inferences about the effect of "gender attitudes" using measures that ask only about women, without any appreciation of the need to assess whether the effect of gender attitudes about women is offset by the effect of gender attitudes about men.

But Nelson 2021 has another element that I thought worth blogging about. From pages 656 and 657:

Importantly, though, I hypothesized that the respondent's race will be consequential for whether these race and gender attitudes matter—specifically, that I expect it is white respondents who are driving these relationships. To test this hypothesis, I reran all 16 logit models from above with some minor adjustments. First, I replaced the IVs "Black" and "Latina/o/x" with the dichotomous variable "white." This variable is coded 1 for those respondents who identify as white and 0 otherwise. I also added interaction terms between the key variables of interest—hostile sexism, modern sexism, and racial resentment—and "white." These interactions will help assess whether white respondents display different patterns than respondents of color...

This seems like a good research design: if, for instance, the p-value is less than p=0.05 for the "Racial resentment X White" interaction term, then we can infer that, net of controls, racial resentment was associated with the outcome differently among White respondents than among respondents of color.
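
For illustration, here is a minimal sketch in R (simulated data, hypothetical variable names) of that type of test:

# The interaction term tests whether the resentment slope differs by race.
set.seed(1)
d <- data.frame(resentment = runif(1000),              # 0-to-1 racial resentment
                white      = rbinom(1000, 1, 0.66),    # 1 = White respondent
                chose_cand = rbinom(1000, 1, 0.5))     # 1 = chose the candidate

m <- glm(chose_cand ~ resentment * white, family = binomial, data = d)
summary(m)   # the "resentment:white" row reports the p-value for the interaction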

---

But, instead of reporting the p-value for the interaction terms, Nelson 2021 compared the statistical significance for an estimate among White respondents to the statistical significance for the corresponding estimate among respondents of color, such as:

In seven out of eight cases where racial resentment predicts the likelihood of choosing Biden or Harris, the average marginal effect for white respondents is statistically significant. In those same seven cases, the average marginal effect for respondents of color on the likelihood of choosing Biden or Harris is insignificant...

But the problem with comparing statistical significance for estimates is that a difference in statistical significance doesn't permit an inference that the estimates differ.

For example, Nelson 2021 Table A5 indicates that, for the association of racial resentment and the outcome of Kamala Harris's perceived electability, the 95% confidence interval among White respondents is [-.01, -.001]; this 95% confidence interval doesn't include zero, so that's a statistically significant estimate. The corresponding 95% confidence interval among respondents of color is [-.01, .002]; this 95% confidence interval includes zero, so that's not a statistically significant estimate.

But the corresponding point estimates are reported as -0.01 among White respondents and -0.01 among respondents of color, so there doesn't seem to be sufficient evidence to claim that these estimates differ from each other. Nonetheless, Nelson 2021 counts this as one of the seven cases referenced in the aforementioned passage.
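
A more direct way to assess whether two subgroup estimates differ is to test the difference itself. Here is a generic sketch in R; the estimates and standard errors are placeholders, not values from Table A5:

# z-test for the difference between two estimates from independent subgroups.
compare_estimates <- function(b1, se1, b2, se2) {
  z <- (b1 - b2) / sqrt(se1^2 + se2^2)
  2 * pnorm(-abs(z))   # two-sided p-value for the null that the estimates are equal
}
compare_estimates(b1 = -0.01, se1 = 0.002, b2 = -0.01, se2 = 0.003)   # placeholder values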

Nelson 2021 Table 1 indicates that the sample had 906 White respondents and 466 respondents of color. All else equal, the larger sample of White respondents gives the analysis a better chance of detecting statistical significance among White respondents than among respondents of color.

---

Table A5 provides sufficient evidence that some interaction terms had a p-value less than p=0.05, such as for the policy outcome for Joe Biden, with non-overlapping 95% confidence intervals for hostile sexism of [-.02, .0004] for respondents of color and [.002, .02] for White respondents.

But I'm not sure how much this matters, without evidence about how well hostile sexism measured gender attitudes among White respondents, compared to how well hostile sexism measured gender attitudes among respondents of color.

---

PLOS ONE recently published Gillooly et al. 2021 "Having female role models correlates with PhD students' attitudes toward their own academic success".

Colleen Flaherty at Inside Higher Ed quoted Gillooly et al. 2021 co-author Amy Erica Smith discussing results from the article. From the Flaherty story, with "she" being Amy Erica Smith:

"When we showed students a syllabus with a low percentage of women authors, men expressed greater confidence than women in their ability to do well in the class" she said. "When we showed students syllabi with more equal gender representation, men's self-confidence declined, but women and men still expressed equal confidence in their ability to do well. So making the curriculum more fair doesn't actually hurt men relative to women."

Figure 1 of Gillooly et al. 2021 presented evidence of this male student backlash, with the figure note indicating that the analysis controlled for "orientations toward quantitative and qualitative methods". Gillooly et al. 2021 indicated that these "orientation" measures incorporate respondent ratings of their interest and ability in quantitative methods and in qualitative methods.

But the "Grad_Experiences_Final Qualtrics Survey" file indicates that these "orientation" measures appeared on the survey after respondents received the treatment. And controlling for such post-treatment "orientation" measures is a bad idea, as discussed in Montgomery et al 2018 "How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It".

The "orientation" items were located on the same Qualtrics block as the treatment and the self-confidence/self-efficacy item, so it seems possible that these "orientation" items might have been intended as outcomes and not as controls. I didn't find any preregistration that indicates the Gillooly et al plan for the analysis.

---

I used the Gillooly et al. 2021 data to assess whether there is sufficient evidence that this "male backlash" effect occurs in straightforward analyses that omit the post-treatment controls. The p-value is about p=0.20 for the command...

ologit q14recode treatment2 if female==0, robust

...which tests the null hypothesis that male students' course-related self-confidence/self-efficacy, as measured on the five-point scale, did not differ across the syllabus conditions that varied the percentage of women authors.

See the output file below for more analysis. For what it's worth, the data provided sufficient evidence at p<0.05 that, among male students, the treatment affected responses to three of the four items that Gillooly et al. 2021 used to construct the "orientation" controls.

---

NOTES

1. Data. Stata code. Output file.

2. Prior post discussing a biased benchmark in research by two of the Gillooly et al. 2021 co-authors.

3. Figure 1 of Gillooly et al. 2021 reports 76% confidence intervals to help assess a p<0.10 difference between estimates, and Figure 2 of Gillooly et al. 2021 reports 84% confidence intervals to help assess a p<0.05 difference between estimates. I would be amazed if this p=0.05 / p=0.10 variation had been planned before Gillooly et al. analyzed the data. The sketch below indicates where those confidence interval levels come from.
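
Assuming two independent estimates with roughly equal standard errors, the confidence interval level at which non-overlap corresponds to a given significance level for the difference can be computed as follows:

# CI level at which non-overlap of two intervals roughly corresponds to a
# two-sided test of the difference at significance level alpha (equal SEs assumed).
ci_level_for_overlap_test <- function(alpha) {
  z <- qnorm(1 - alpha / 2)
  2 * pnorm(z / sqrt(2)) - 1
}
ci_level_for_overlap_test(0.05)   # about 0.83, i.e., roughly 84% intervals
ci_level_for_overlap_test(0.10)   # about 0.76, i.e., roughly 76% intervals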

---

This year, I have discussed several errors or flaws in recent journal articles (e.g., 1, 2, 3, 4). For some new examples, I think that Figure 2 of Cargile 2021 reported estimates for the feminine factor instead of, as labeled, the masculine factor, and Fenton and Stephens-Dougan 2021 described a "very small" 0.01 odds ratio as "not substantively meaningful":

Finally, the percent Black population in the state was also associated with a statistically significant decline in responsiveness. However, it is worth noting that this decline was not substantively meaningful, given that the odds ratio associated with this variable was very small (.01).

An odds ratio of 0.01 is far from the null value of 1, so, if anything, it indicates a substantively large association rather than a trivial one. I'll discuss more errors or flaws in the notes below, with more blog posts planned.

---

Given that peer review and/or the editing process will miss errors that readers can catch, it seems like it would be a good idea for journal editors to get more feedback before an article is published.

For example, the Journal of Politics has been posting "Just Accepted" manuscripts before the final formatted version of the manuscript is published, which I think permits the journal to correct errors that readers catch in the posted manuscripts.

The Journal of Politics recently posted the manuscript for Baum et al. "Sensitive Questions, Spillover Effects, and Asking About Citizenship on the U.S. Census". I think that some of the results reported in the text do not match the corresponding results reported in Table 1. For example, the text (numbered p. 4) indicates that:

Consistent with expectations, we also find this effect was more pronounced for Hispanics, who skipped 4.21 points more of the questions after the Citizenship Treatment was introduced (t-statistic = 3.494, p-value is less than 0.001).

However, from what I can tell, the corresponding Table 1 result indicates a 4.49 difference, with a t-statistic of 3.674.

---

Another potential flaw in the above statement is that, from what I can tell, the t-statistic for the "more pronounced for Hispanics" claim is based on a test of whether the estimate among Hispanics differs from zero. However, the t-statistic for the "more pronounced for Hispanics" claim should instead be from a test of whether the estimate among Hispanics differs from the estimate among non-Hispanics or whatever comparison category the "more pronounced" refers to.

---

So, to the extent that these aforementioned issues are errors or flaws, maybe these can be addressed before the Journal of Politics publishes the final formatted version of the Baum et al. manuscript.

---

NOTES

1. I think that this is an error, from Lucas and Silber Mohamed 2021, with emphasis added:

Moreover, while racial sympathy may lead to some respondents viewing non-white candidates more favorably, Chudy finds no relationship between racial sympathy and gender sympathy, nor between racial sympathy and attitudes about gendered policies.

That seemed a bit unlikely to me when I read it, and, sure enough, Chudy 2020 footnote 20 indicates that:

The raw correlation of the gender sympathy index and racial sympathy index was .3 for the entire sample (n = 1,000) and .28 for whites alone (n = 751).

2. Some [sic]-worthy errors in Jardina and Stephens-Dougan 2021. Footnote 25:

The Stereotype items were note included on the 2020 ANES Time Series study.

...and the Section 4 heading:

Are Muslim's part of a "band of others?"

... and the Table 2 note:

2016 ANES Time Serie Study

Moreover, the note for Jardina and Stephens-Dougan 2021 Figure 1 describes the data source as: "ANES Cumulative File (face-to-face respondents only) & 2012 ANES Times Series (all modes)". But, based on the text and the other figure notes, I think that this might refer to 2020 instead of 2012.

These things happen, but I think that they're worth noting, at least as evidence against the idea that peer reviews shouldn't note grammar-type errors.

3. I discussed conditional-acceptance comments in my PS symposium entry "Left Unchecked".

---

The American Political Science Review recently published Mason et al. 2021 "Activating Animus: The Uniquely Social Roots of Trump Support".

Mason et al. 2021 measured "animus" based on respondents' feeling thermometer ratings about groups. Mason et al. 2021 reported results for a linear measure of animus, but seemed to indicate an awareness that a linear measure might not be ideal: "...it may be that positivity toward Trump stems from animus toward Democratic groups more than negativity toward Trump stems from warmth toward Democratic groups, or vice versa" (p. 7).

Mason et al. 2021 addressed this by using a quadratic term for animus. But this retains the problem that estimates for respondents at a high level of animus against a group are influenced by responses from respondents who reported less animus toward the group and from respondents who favored the group.

I think that a better strategy to measure animus is to instead compare negativity toward the groups (i.e., ratings below the midpoint on the thermometer or at a low level) to indifference (i.e., a rating at the midpoint on the thermometer). I'll provide an example below, with another example here.
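
Here is a minimal sketch in R (simulated ratings, hypothetical variable names) of the type of categorical coding that I have in mind:

# Compare ratings below the midpoint to ratings at the midpoint, rather than
# using a linear or quadratic term for the 0-to-100 rating.
set.seed(1)
d <- data.frame(ft_group = sample(0:100, 2000, replace = TRUE))

d$rating_cat <- cut(d$ft_group,
                    breaks = c(-1, 49, 50, 100),
                    labels = c("Below midpoint", "At midpoint", "Above midpoint"))
table(d$rating_cat)
# An outcome can then be compared across these categories, so that estimates for
# respondents who reported animus toward a group are not influenced by
# respondents who favored the group.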

---

The Mason et al. 2021 analysis used thermometer ratings of groups measured in the 2011 wave of a survey to predict outcomes measured years later. For example, one of the regressions used feeling thermometer ratings about Democratic-aligned groups as measured in 2011 to predict favorability toward Trump as measured in 2018, controlling for variables measured in 2011 such as gender, race, education, and partisanship.

That research design might be useful for assessing change net of controls between 2011 and 2018, but it's not useful for understanding animus in 2021, which I think some readers might infer from the "motivating the left" tweet from the first author of Mason et al. 2021:

And it's not happening for anyone on the Democratic side. Hating Christians and White people doesn't predict favorability toward any Democratic figures or the Democratic Party. So it isn't "anti-White racism" (whatever that means) motivating the left. It's not "both sides."

The 2019 wave of the survey used in Mason et al. 2021 has feeling thermometer ratings about White Christians, and, sure enough, the mean favorability rating about Hillary Clinton in 2019 differed between respondents who rated White Christians at or near the midpoint and respondents who rated White Christians under or well under the midpoint:

Even if the "motivating the left" tweet is interpreted to refer only to the post-2011 change controlling for partisanship, ideology, and other factors, it's not clear why that restricted analysis would be important for understanding what is motivating the left. It's not like the left started to get motivated only in or after 2011.

---

NOTES

1. I think that Mason et al. 2021 at least once used "warmth" in discussing results from the linear measure of animus where "animus" or "animosity" could have been used instead, in the passage below from page 4, with emphasis added:

Rather, Trump support is uniquely predicted by animosity toward marginalized groups in the United States, who also happen to fall outside of the Republican Party's rank-and-file membership. For comparison, when we analyze warmth for whites and Christians, we find that it predicts support for Trump, the Republican Party, and other elites at similar levels.

It would be another flaw of a linear measure of animus if the same association can be described as having been predicted by animosity or by warmth (e.g., animosity toward Whites and Christians predicts lower levels of support for Trump and other Republicans at similar levels).

2. Stata code. Dataset. R plot: data and code.
