Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, as in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this holds net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But using weights for the full samples compares respondents in 2004 and 2008, all of whom were interviewed face-to-face, with respondents in 2012, some of whom were interviewed face-to-face and some of whom completed the survey online.

Using weights only for the face-to-face mode, the p-value was not under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, net of controls and without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
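The weighted comparisons above can be sketched generically. This is an illustrative sketch with made-up scores and weights, not the ANES variables or my actual Stata analysis.

```python
# Illustrative sketch with hypothetical data: how survey weights enter a
# mean comparison across years. These values are NOT the ANES variables
# or weights used in the actual analysis.
def weighted_mean(values, weights):
    """Weighted mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical racial resentment scores (0 to 1) and weights by year
scores_2008  = [0.40, 0.60, 0.50]
weights_2008 = [1.0, 1.0, 2.0]
scores_2012  = [0.50, 0.70, 0.60]
weights_2012 = [1.0, 1.0, 2.0]

change = weighted_mean(scores_2012, weights_2012) - weighted_mean(scores_2008, weights_2008)
```

Restricting to one survey mode amounts to zeroing out (or re-deriving) the weights for respondents in the other mode before computing means like these.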

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to the survey mode coefficient of -2.610.
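As a quick arithmetic check, using only the Table 2 coefficients quoted above:

```python
# Coefficients from Abrajano and Alvarez 2019, Table 2
# (illegal immigrant feeling thermometer model), as quoted above.
liberal, conservative = 6.613, -1.709
democrat, republican = 6.405, -8.247
internet_mode = -2.610

ideology_difference = liberal - conservative   # 8.322
party_difference = democrat - republican       # 14.652

# Both category differences exceed the survey mode coefficient in magnitude.
assert abs(internet_mode) < ideology_difference < party_difference
```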

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.


Criminology recently published Schutten et al 2021 "Are guns the new dog whistle? Gun control, racial resentment, and vote choice".

---

I'll focus on experimental results from Schutten et al 2021 Figure 1. Estimates for respondents low in racial resentment indicated a higher probability of voting for a hypothetical candidate:

[1] when the candidate was described as a Democrat, compared to when the candidate was described as a Republican,

[2] when the candidate was described as supporting gun control, compared to when the candidate was described as having a policy stance on a different issue, and

[3] when the candidate was described as not being funded by the NRA, compared to when the candidate was described as being funded by the NRA.

Patterns were reversed for respondents high in racial resentment. The relevant 95% confidence intervals did not overlap for five of the six patterns, with the exception being for the NRA funding manipulation among respondents high in racial resentment; eyeballing, it doesn't look like the p-value is under p=0.05 for that estimated difference.

---

For the estimate that participants low in racial resentment were less likely to vote for a hypothetical candidate described as being funded by the NRA than for one described as not being funded by the NRA, Schutten et al 2021 suggested that this might reflect a backlash against "the use of gun rights rhetoric to court prejudiced voters" (p. 20). But, presuming that the content of the signal provided by the mention of NRA funding is largely or completely racial, the "backlash" pattern is also consistent with a backlash against support of a constitutional right that many participants low in racial resentment might perceive to be disproportionately used by Whites and/or rural Whites.

Schutten et al 2021 conceptualized participants low in racial resentment as "nonracists" (p. 3) and noted that "recent evidence suggests that those who score low on the racial resentment scale 'favor' Blacks (Agadjanian et al., 2021)" (p. 21). But I don't know why the quotation marks around "favor" are necessary, given that there is good reason to characterize a nontrivial percentage of participants low in racial resentment as biased against Whites: for example, my analysis of data from the ANES 2020 Time Series Study indicated that about 40% to 45% of Whites (and about 40% to 45% of the general population) who fell at least one standard deviation under the mean level of racial resentment rated Whites lower on the 0-to-100 feeling thermometers than they rated each of Blacks, Hispanics, and Asians/Asian-Americans. (This is not merely rating Whites lower on average than these groups, but rating Whites lower than each of these three groups.)
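The thermometer comparison described above (rating Whites lower than each of the three other groups, not merely lower on average) can be sketched as follows; the variable names and example ratings are my own illustration, not the ANES codes.

```python
# Illustrative sketch: flag respondents who rated Whites lower than EACH
# of Blacks, Hispanics, and Asians/Asian-Americans on the 0-to-100
# feeling thermometers. Keys and ratings here are hypothetical.
def rates_whites_below_each(r):
    return all(r["whites"] < r[g] for g in ("blacks", "hispanics", "asians"))

respondents = [
    {"whites": 50, "blacks": 70, "hispanics": 60, "asians": 55},  # below all three
    {"whites": 60, "blacks": 70, "hispanics": 50, "asians": 65},  # below two of three
    {"whites": 85, "blacks": 70, "hispanics": 60, "asians": 55},  # below none
]

share = sum(rates_whites_below_each(r) for r in respondents) / len(respondents)
```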

Schutten et al 2021 indicated that (p. 4):

Importantly, dog whistling is not an attempt to generate racial prejudice among the public but to arouse and harness latent resentments already present in many Americans (Mendelberg, 2001).

Presumably, this dog whistling can activate the racial prejudice against Whites that many participants low in racial resentment have been comfortable expressing on feeling thermometers.

---

NOTES

1. Schutten et al 2021 claimed that (p. 8):

If racial resentment is primarily principled conservatism, its effect on support for government spending should not depend on the race of the recipient.

But if racial resentment were, say, 70% principled ideology and 30% racial prejudice, racial resentment should still associate with racial discrimination due to the 30%.

And I think that it's worth considering whether racial resentment should also be described as being influenced by progressive ideology. If principled conservatism can cause participants to oppose special favors for Blacks, presumably a principled progressivism can cause participants to support special favors for Blacks. If so, it seems reasonable to also conceptualize racial resentment as the merging of principled progressivism and prejudice against Whites, given that both could presumably cause support for special favors for Blacks.

2. Schutten et al 2021 claimed that (p. 16):

The main concern about racial resentment is that it is a problematic measure of racial prejudice among conservatives but a suitable measure among nonconservatives (Feldman & Huddy, 2005).

But I think that major concerns about racial resentment are present even among nonconservatives. As I indicated in a prior blog post, I think that the best case against racial resentment has two parts. First, racial resentment captures racial attitudes in a way that is difficult if not impossible to disentangle from nonracial attitudes; that concern remains among nonconservatives, such as the possibility that a nonconservative would oppose special favors for Blacks because of a nonracial opposition to special favors.

Second, many persons at low racial resentment have a bias against Whites, and limiting the sample to nonconservatives if anything makes it more likely that the estimated effect of racial resentment is capturing the effect of bias against Whites.

3. Figure 1 would have provided stronger evidence about p<0.05 differences between estimates if it had plotted 83.4% confidence intervals.

4. [I deleted this comment because Justin Pickett (co-author on Schutten et al 2021) noted in review of a draft version of this post that the comment suggested an analysis that was already reported in Schutten et al 2021: limiting one analysis to participants low in racial resentment and another to participants high in racial resentment. Thanks to Justin for catching that.]

5. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.


The American Political Science Review recently published Mason et al. 2021 "Activating Animus: The Uniquely Social Roots of Trump Support".

Mason et al. 2021 measured "animus" based on respondents' feeling thermometer ratings about groups. Mason et al. 2021 reported results for a linear measure of animus, but seemed to indicate an awareness that a linear measure might not be ideal: "...it may be that positivity toward Trump stems from animus toward Democratic groups more than negativity toward Trump stems from warmth toward Democratic groups, or vice versa" (p. 7).

Mason et al. 2021 addressed this by using a quadratic term for animus. But this retains the problem that estimates for respondents at a high level of animus against a group are influenced by responses from respondents who reported less animus toward the group and from respondents who favored the group.

I think that a better strategy to measure animus is to instead compare negativity toward the groups (i.e., ratings below the midpoint on the thermometer, or at a low level) to indifference (i.e., a rating at the midpoint on the thermometer). I'll provide an example below, with another example here.
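A minimal sketch of that coding strategy (the cutoffs are my illustration of the approach, not code from Mason et al. 2021):

```python
# Classify a 0-to-100 feeling thermometer rating so that animus (below
# the 50 midpoint) can be compared against indifference (at the midpoint)
# rather than against warmth. The coding is my illustration of the idea.
def classify_rating(rating):
    if not 0 <= rating <= 100:
        raise ValueError("thermometer ratings run from 0 to 100")
    if rating < 50:
        return "animus"
    if rating == 50:
        return "indifference"
    return "warmth"
```

An outcome can then be compared between the "animus" and "indifference" categories, so that estimates for respondents reporting animus are not influenced by respondents who favor the group.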

---

The Mason et al. 2021 analysis used thermometer ratings of groups measured in the 2011 wave of a survey to predict outcomes measured years later. For example, one of the regressions used feeling thermometer ratings about Democratic-aligned groups as measured in 2011 to predict favorability toward Trump as measured in 2018, controlling for variables measured in 2011 such as gender, race, education, and partisanship.

That research design might be useful for assessing change net of controls between 2011 and 2018, but it's not useful for understanding animus in 2021, an inference that I think some readers might draw from the "motivating the left" tweet from the first author of Mason et al. 2021:

And it's not happening for anyone on the Democratic side. Hating Christians and White people doesn't predict favorability toward any Democratic figures or the Democratic Party. So it isn't "anti-White racism" (whatever that means) motivating the left. It's not "both sides."

The 2019 wave of the survey used in Mason et al. 2021 has feeling thermometer ratings about White Christians, and, sure enough, the mean favorability rating about Hillary Clinton in 2019 differed between respondents who rated White Christians at or near the midpoint and respondents who rated White Christians under or well under the midpoint:

Even if the "motivating the left" tweet is interpreted to refer only to the post-2011 change controlling for partisanship, ideology, and other factors, it's not clear why that restricted analysis would be important for understanding what is motivating the left. It's not like the left started to get motivated only in or after 2011.

---

NOTES

1. I think that Mason et al. 2021 at least once used "warmth" in discussing results from the linear measure of animus where "animus" or "animosity" could have been used instead, in the passage below from page 4, with emphasis added:

Rather, Trump support is uniquely predicted by animosity toward marginalized groups in the United States, who also happen to fall outside of the Republican Party's rank-and-file membership. For comparison, when we analyze warmth for whites and Christians, we find that it predicts support for Trump, the Republican Party, and other elites at similar levels.

It would be another flaw of a linear measure of animus if the same association can be described as having been predicted by animosity or by warmth (e.g., animosity toward Whites and Christians predicting lower levels of support for Trump and other Republicans at similar levels).

2. Stata code. Dataset. R plot: data and code.


See here for a discussion of the Rice et al. 2021 mock juror experiment.

My reading of the codebook for the Rice et al. 2021 experiment is that, among other items, the pre-election survey included at least one experiment (UMA303_rand), then a battery of items measuring racism and sexism, and then at least another experiment. Then, among other items, the post-election survey included the CCES Common Content racial resentment and FIRE items, and then the mock juror experiment.

The pre-election battery of items measuring racism and sexism included three racial resentment items, a sexism battery, three stereotype items about Blacks and Whites (laziness, intelligence, and violence), and 0-to-100 feeling thermometers about Whites and about Blacks. In this post, I'll report some analyses of how well these pre-election measures predicted discrimination in the Rice et al. 2021 mock juror experiment.

---

The first plot reports results among White participants who might be expected to have a pro-Black bias. For example, the first estimate is for White participants who had the lowest level of racial resentment. The dark error bars indicate 83.4% confidence intervals, to help compare estimates to each other. The lighter, longer error bars are 95% confidence intervals, which are more appropriate for comparing an estimate to a given number such as zero.
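The 83.4% level isn't arbitrary: for two independent estimates with roughly equal standard errors, non-overlapping 83.4% intervals correspond to about a p=0.05 difference. A quick check of that arithmetic:

```python
# Why 83.4%: if two independent estimates have equal standard errors and
# their 83.4% confidence intervals just touch, the z statistic for their
# difference is about 1.96, i.e., roughly p = 0.05 two-sided.
import math
from statistics import NormalDist

z_half = NormalDist().inv_cdf(1 - (1 - 0.834) / 2)  # half-width multiplier, ~1.386

# Intervals that just touch are separated by 2 * z_half * se, while the
# standard error of the difference is se * sqrt(2).
z_for_difference = 2 * z_half / math.sqrt(2)
p_two_sided = 2 * (1 - NormalDist().cdf(z_for_difference))
```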

The plotted outcome is whether the participant indicated that the defendant was guilty or not guilty. The -29% for the top estimate indicates that, among White participants who had the lowest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 29 percentage points lower than the percentage that rated the White defendant guilty.

The plot below reports results among White participants who might be expected to have a pro-White bias. The 26% for the top estimate indicates that, among White participants who had the highest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 26 percentage points higher than the percentage that rated the White defendant guilty.

---

The Stata output reports additional results, for the sentence length outcome, and for other predictors: a four-item racial resentment index from the post-election survey, plus individual stereotype items (such as for White participants who rated Blacks higher than Whites on an intelligence scale). Results for the sentence length outcome are reported for all White respondents and, in later analyses, for only those White respondents who indicated that the defendant was guilty.

---

NOTE

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Pro-Black plot: dataset and code. Pro-White plot: dataset and code.


Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making".

---

See the prior post describing the mock juror experiment in Rice et al. 2021.

The Rice et al. 2021 team kindly cited my article questioning racial resentment as a valid measure of racial animus. But Rice et al. 2021 interpreted their results as evidence for the validity of racial resentment:

Our results also suggest that racial resentment is a valid measure of racial animus (Jardina and Piston 2019) as it performs exactly as expected in an experimental setting manipulating the race of the defendant.

However, my analyses of the Rice et al. 2021 data indicated that a measure of sexism sorted White participants by their propensity to discriminate for Bradley Schwartz or Jamal Gaines:

I don't think that the evidence in the above plot indicates that sexism is a valid measure of racial animus, so I'm not sure that racial resentment sorting White participants by their propensity to discriminate for Bradley or Jamal means that racial resentment is a valid measure of racial animus, either.

---

I think that the best two arguments against racial resentment as a measure of anti-Black animus are:

[1] Racial resentment on its face plausibly captures non-racial attitudes, and it is not clear that statistical control permits any post-statistical control residual association of racial resentment with an outcome to be interpreted as anti-Black animus, given that racial resentment net of statistical control often predicts outcomes that are not theoretically linked to racial attitudes.

[2] Persons at low levels of racial resentment often disfavor Whites relative to Blacks (as reported in this post and in the Rice et al. 2021 mock juror experiment), so the estimated effect for racial resentment cannot be interpreted as only the effect of anti-Black animus. Racial resentment in these cases appears to sort to low levels of racial resentment a sufficient percentage of respondents who dislike Whites in absolute or at least relative terms, so that indifference to Whites might plausibly be better represented at some location between the ends of the racial resentment measure. But the racial resentment measure does not have a clear indifference point such as 50 on a 0-to-100 feeling thermometer rating, so -- even if argument [1] is addressed so that statistical control isolates the effect of racial attitudes -- it's not clear how racial resentment could be used to accurately estimate the effect of only anti-Black animus.

---

NOTES

1. The sexism measure used responses to the items below, which loaded onto one factor among White participants in the data:

[UMA306bSB] We should do all we can to make sure that women have the same opportunities in society as men.

[UMA306c] We would have fewer problems if we treated men and women more equally.

[UMA306f] Many women are actually seeking special favors, such as hiring policies that favor them over men, under the guise of asking for "equality."

[UMA306g] Women are too easily offended.

[UMA306h] Men are better suited for politics than are women.

[CC18_422c] When women lose to men in a fair competition, they typically complain about being discriminated against.

[CC18_422d] Feminists are making entirely reasonable demands of men.

Responses to these items loaded onto a different factor:

[UMA306d] Women should be cherished and protected by men.

[UMA306e] Many women have a quality of purity that few men possess.

2. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Data and code for the sexism plot.

3. I plan a follow-up post about how well different measures predicted racial bias in the experiment.


Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making". In contrast to the forthcoming Peyton and Huber 2021 article at the Journal of Politics that I recently blogged about, Rice et al. 2021 reported evidence that racial resentment predicted discrimination among Whites.

---

Rice et al. 2021 concerned a mock juror experiment regarding an 18-year-old starting point guard on his high school basketball team who was accused of criminal battery. Participants indicated whether the defendant was guilty or not guilty and suggested a prison sentence length from 0 to 60 months for the defendant. The experimental manipulation was that the defendant was randomly assigned to be named Bradley Schwartz or Jamal Gaines.

Section 10 of the Rice et al. 2021 supplementary material has nice plots of the estimated discrimination at given levels of racial resentment, indicating, for the guilty outcome, that White participants at low racial resentment were less likely to indicate that Jamal was guilty compared to Bradley, but that White participants at high racial resentment were more likely to indicate that Jamal was guilty compared to Bradley. Results were similar for the sentence length outcome, but the 95% confidence interval at high racial resentment overlaps zero a bit.

---

The experiment did not detect sufficient evidence of racial bias among White participants as a whole. But what about Black participants? Results indicated a relatively large favoring of Jamal over Bradley among Black participants in unweighted data (N=41 per condition). For guilt, the bias was 29 percentage points in unweighted analyses and 33 percentage points in weighted analyses. For sentence length, the bias was 8.7 months in unweighted analyses and 9.4 months in weighted analyses, relative to an unweighted standard deviation of 16.1 months in sentence length among Black respondents.

Results for the guilty/not guilty outcome:

Results for the mean sentence length outcome:

The p-value was under p=0.05 for my unweighted tests of whether the size of the discrimination among Whites (about 7 percentage points for guilty, about 1.3 months for sentence length) differed from the size of the discrimination among Blacks (about 29 percentage points for guilty, about 8.7 months for sentence length); the inference is the same for weighted analyses. The evidence is even stronger considering that the point estimate of discrimination among Whites was in the pro-Jamal direction and not in the pro-ingroup direction.
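The kind of test described above, comparing the size of discrimination across two groups of respondents, can be sketched with a normal approximation. The point estimates below come from the text; the standard errors are hypothetical placeholders, since the actual values are in the linked Stata output.

```python
# Sketch of a test for whether the discrimination gap differs between
# White and Black respondents, using a normal approximation. The gaps are
# from the text; the standard errors are hypothetical placeholders.
import math
from statistics import NormalDist

gap_whites = -0.07   # ~7 points favoring Jamal among White respondents
gap_blacks = -0.29   # ~29 points favoring Jamal among Black respondents
se_whites, se_blacks = 0.04, 0.09   # hypothetical standard errors

z = (gap_whites - gap_blacks) / math.sqrt(se_whites**2 + se_blacks**2)
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
```

With these placeholder standard errors, the difference between the two gaps clears the conventional p<0.05 threshold; the actual inference of course depends on the estimated standard errors.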

---

NOTES

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. "Guilty" plot: data and R code. "Sentence length" plot: data and R code.

2. I plan to publish a follow-up post about evidence for validity of racial resentment from the Rice et al. 2021 results, plus a follow-up post about how well different measures predicted racial bias in the experiment.


Forthcoming in the Journal of Politics is Peyton and Huber 2021 "Racial Resentment, Prejudice, and Discrimination". Study 1 estimated discrimination among White MTurk workers playing a game with a White proposer or a Black proposer. The abstract indicated that:

Study 1 used the Ultimatum Game (UG) to obtain a behavioral measure of racial discrimination and found whites engaged in anti-Black discrimination. Explicit prejudice explained which whites discriminated whereas resentment did not.

I didn't see an indication in the paper of a test for whether explicit prejudice predicted discrimination against Blacks better than racial resentment did. I think that the data had 173 workers coded non-White and 20 workers with missing data on the race variable, but Peyton and Huber 2021 reported results for only White workers, so I'll stick with that and limit my analysis to reflect their analysis in Table S1.1, which is labeled in their code as "main analysis".

My analysis indicated that the discrimination against Black proposers was 2.4 percentage points among White workers coded as prejudiced (p=0.004) and 1.3 percentage points among White workers coded as high in racial resentment (p=0.104), with a p-value of p=0.102 for a test of whether these estimates differ from each other.

---

The Peyton and Huber 2021 sorting into a prejudiced group or a not-prejudiced group based on responses to the stereotype scales permits assessment of whether the stereotype scales sorted workers by discrimination propensities, but I was also interested in the extent to which the measure of prejudice detected discrimination because the non-prejudiced comparison category included Whites who reported more negative stereotypes of Whites relative to Blacks, on net. My analysis indicated that the point estimate for discrimination was:

* 2.4 percentage points against Blacks (p=0.001), among White workers who rated Blacks more negatively than Whites on net on the stereotype scales,

* 0.9 percentage points against Blacks (p=0.173), among White workers who rated Blacks equal to Whites on net on the stereotype scales, and

* 1.8 percentage points in favor of Blacks (p=0.147), among White workers who rated Blacks more positively than Whites on net on the stereotype scales.

The p-value for the difference between the 2.4 percentage point estimate and the 0.9 percentage point estimate is p=0.106, and the p-value for the difference between the 0.9 percentage point estimate and the -1.8 percentage point estimate is also p=0.106.

---

NOTES

1. I have blogged about measuring "prejudice". The Peyton and Huber 2021 definition of prejudice is not bad:

Prejudice is a negative evaluation of another person based on their group membership, whereas discrimination is a negative behavior toward that person (Dovidio and Gaertner, 1986).

But I don't think that this is how Peyton and Huber 2021 measured prejudice. I think that instead a worker was coded as prejudiced for reporting a more negative evaluation about Blacks relative to Whites, on net for the four traits that workers were asked about. That's a *relatively* more negative perception of a *group*, not a negative evaluation of an individual person based on their group.

2. Peyton and Huber 2021 used an interaction term to compare discrimination among White workers with high racial resentment to discrimination among residual White workers, and used an interaction term to compare discrimination among White workers explicitly prejudiced against Blacks relative to Whites to discrimination among residual White workers.

Line 77 of the Peyton and Huber code tests whether, in a model including both interaction terms for the "Table S1.1, main analysis" section, the estimated discrimination gap differed between the prejudice categories and the racial resentment categories. The p-value was p=0.0798 for that test.

3. Data. Stata code for my analysis. Stata output for my analysis.
