Some research includes measures of attitudes about certain groups but not about obvious comparison groups, such as research that includes attitudes about Blacks but not Whites or includes attitudes about women but not men. Feeling thermometers can help avoid this, which I'll illustrate with data from the Democracy Fund Voter Study Group's Views of the Electorate Research (VOTER) Survey.

---

The outcome is this item, from the 2018 wave of the VOTER survey:

Do you approve or disapprove of football players protesting by kneeling during the national anthem?

I coded responses 1 for "strongly approve" and "somewhat approve" and 0 for "somewhat disapprove", "strongly disapprove", "don't know", and skipped responses. The key predictor was measured in 2017 and is based on 0-to-100 feeling thermometer ratings about Blacks and Whites, coded into six categories (a coding sketch follows the list):

* Rated Whites equal to Blacks

---

* Rated Whites under 50 and Blacks at 50 or above

* Residual ratings of Whites lower than Blacks

---

* Rated Blacks under 50 and Whites at 50 or above

* Residual ratings of Blacks lower than Whites

---

* Did not rate Whites and/or Blacks
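As an illustration, here is a minimal Stata sketch of this coding, assuming hypothetical variable names (ft_white and ft_black for the 2017 thermometers, and anthem for the 2018 protest item coded 1 "strongly approve" through 4 "strongly disapprove"); the actual VOTER survey variable names differ:

```
* Binary outcome: 1 for strongly/somewhat approve, 0 for everything else
* (inlist() returns 0 for don't know or skipped, matching the coding described above)
gen approve = inlist(anthem, 1, 2)

* Six-category predictor from the two thermometers
gen ft_cat = .
replace ft_cat = 6 if missing(ft_white) | missing(ft_black)      // did not rate one or both
replace ft_cat = 1 if ft_white == ft_black           & missing(ft_cat)
replace ft_cat = 2 if ft_white < 50 & ft_black >= 50 & missing(ft_cat)
replace ft_cat = 3 if ft_white < ft_black            & missing(ft_cat)   // residual Whites lower
replace ft_cat = 4 if ft_black < 50 & ft_white >= 50 & missing(ft_cat)
replace ft_cat = 5 if ft_black < ft_white            & missing(ft_cat)   // residual Blacks lower
```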

The plot below reports estimates that control only for participant race measured in 2018, with error bars indicating 83.4% confidence intervals and with survey weights applied.

The plot suggests that attitudes about anthem protests are associated with negative attitudes about Blacks and with negative attitudes about Whites. These are presumably obvious results, but results from a measure such as racial resentment probably wouldn't be interpreted as suggesting both.
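For readers who want to produce this type of plot, here is a minimal Stata sketch of the weighted analysis, continuing the hypothetical names from the coding sketch above plus race_2018 and weight:

```
* Weighted logit with only 2018 participant race as a control, then
* predicted probabilities for each thermometer category with 83.4% confidence intervals
logit approve i.ft_cat i.race_2018 [pweight=weight]
margins ft_cat, level(83.4)
marginsplot
```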

---

NOTE

1. Stata code and output. The output also reports results with more extensive statistical controls.

---

Suppose that Bob at time 1 believes that Jewish people are better than every other group, but Bob at time 2 changes his belief to be that Jewish people are no better or worse than every other group, and Bob at time 3 changes his belief to be that Jewish people are worse than every other group.

Suppose also that these changes in Bob's belief about Jewish people have a causal effect on his vote choices. Bob at time 1 will vote 100% of the time for a Jewish candidate running against a non-Jewish candidate, no matter the relative qualifications of the candidates. At time 2, a candidate's Jewish identity is irrelevant to Bob's vote choice, so that, if given a choice between a Jewish candidate and an all-else-equal non-Jewish candidate, Bob will flip a coin and vote for the Jewish candidate only 50% of the time. Bob at time 3 will vote 0% of the time for a Jewish candidate running against a non-Jewish candidate, no matter the relative qualifications of the candidates.

Based on this setup, what is your estimate of the influence of antisemitism on Bob's voting decisions?

---

I think that the effect of antisemitism is properly understood as the effect of negative attitudes about Jewish people, so that the effect can be estimated in the above setup as the difference between Bob's voting decisions at time 2, when Bob is indifferent to a candidate's Jewish identity, and Bob's voting decisions at time 3, when Bob has negative attitudes about Jewish people. Thus, the effect of antisemitism on Bob's voting decisions is a 50 percentage point decrease, from 50% to 0%.

For the first decrease, from 100% to 50%, neither belief -- the belief that Jewish people are better than every other group, or the belief that Jewish people are no better or worse than every other group -- is antisemitic, so none of this decrease should be attributed to antisemitism. Generally, I think that this means that respondents who have positive attitudes about a group should not be used to estimate the effect of negative attitudes about that group.

---

So let's discuss the Race and Social Problems article: Sharrow et al 2021 "What's in a Name? Symbolic Racism, Public Opinion, and the Controversy over the NFL's Washington Football Team Name". The key predictor was a measure of resentment against Native Americans, built from responses to the statements below, measured on a 5-point scale from "strongly agree" to "strongly disagree":

Most Native Americans work hard to make a living just like everyone else.

Most Native Americans take unfair advantage of privileges given to them by the government.

My analysis indicates that 39% of the 1,500 participants (N=582) provided consistently positive responses about Native Americans on both items, agreeing or strongly agreeing with the first statement and disagreeing or strongly disagreeing with the second statement. I don't see why these 582 respondents should be included in an analysis that attempts to estimate the effect of negative attitudes about Native Americans, given that these participants do not fall along the indifferent-to-negative-attitudes continuum about Native Americans.

So let's check what happens after removing these respondents from the analysis.
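To make the exclusion concrete, here is a minimal Stata sketch for flagging these respondents, assuming hypothetical variable names (work_hard and unfair_adv for the two resentment items, each coded 1 "strongly agree" through 5 "strongly disagree"); the actual variable names and response codes in the Sharrow et al 2021 data might differ:

```
* Consistently positive responses: agree or strongly agree that most Native Americans
* work hard, and disagree or strongly disagree that they take unfair advantage
gen positive_both = (work_hard <= 2) & (unfair_adv >= 4) if !missing(work_hard, unfair_adv)
tab positive_both
```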

---

I first conducted an unweighted OLS regression using the full sample and controls to predict the summary Team Name Index outcome, which measured support for the Washington football team's name on a 0-to-1 scale. For this regression (N=1024), the measure of resentment against Native Americans ranged from 0 for respondents who selected the most positive responses to both resentment items to 1 for respondents who selected the most negative responses to both resentment items. In this regression, the coefficient for resentment against Native Americans was 0.26 (t=6.31).

I then removed respondents who provided positive responses about Native Americans for both resentment items. For this next unweighted OLS regression (N=572), the measure of resentment against Native Americans still had a value of 1 for respondents who provided the most negative responses to both resentment items; however, 0 now corresponded to participants who were neutral on one resentment item and provided the most positive response on the other, such as strongly agreeing that "Most Native Americans work hard to make a living just like everyone else" but neither agreeing nor disagreeing that "Most Native Americans take unfair advantage of privileges given to them by the government". In this regression, the coefficient for resentment against Native Americans was 0.12 (t=2.23).

The drop is similar when the regressions include only the measure of resentment against Native Americans and no other predictors: the coefficient is 0.44 for the full sample, but is 0.22 after dropping respondents who provided positive responses about Native Americans for both resentment items.
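Here is a minimal Stata sketch of that bivariate comparison, continuing the hypothetical names from the sketch above and using team_name_index as a hypothetical name for the 0-to-1 outcome:

```
* Resentment index: higher values = more negative responses on both items
gen resent_raw = work_hard + (6 - unfair_adv)     // 2 (most positive) to 10 (most negative)

* Rescale to 0-1 over the observed range, then run the full-sample regression
quietly summarize resent_raw
gen resent01 = (resent_raw - r(min)) / (r(max) - r(min))
reg team_name_index resent01

* Drop consistently positive respondents, rescale over the restricted range, rerun
drop if positive_both == 1
drop resent01
quietly summarize resent_raw
gen resent01 = (resent_raw - r(min)) / (r(max) - r(min))
reg team_name_index resent01
```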

---

So I think that Sharrow et al 2021 might report substantial overestimates of the effect of resentment against Native Americans, because its estimates of the effect of negative attitudes about Native Americans also include the effect of positive attitudes about Native Americans.

---

NOTES

1. About 20% of the Sharrow et al 2021 sample reported a negative attitude on at least one of the two measures of resentment against Native Americans. About 6% of the sample reported a negative attitude on both measures.

2. Sharrow et al 2021 indicated that "Our conclusions illustrate that symbolic racism toward Native Americans is central to interpreting the public's resistance toward changing the name, in sharp contrast to Snyder's claim that the name is about 'respect.'" (p. 111).

For what it's worth, the Sharrow et al 2021 data indicate that a nontrivial percentage of respondents with positive views of Native Americans somewhat or strongly disagreed with the claim that the Washington football team name is offensive (in an item that reported the name of the team at the time): 47% of respondents who provided positive responses about Native Americans for both resentment items, 47% of respondents who rated Native Americans at 100 on a 0-to-100 feeling thermometer, 40% of respondents who did both, and 32% of respondents who provided the most positive responses about Native Americans for both resentment items and rated Native Americans at 100 on the feeling thermometer (although this 32% was only 22% in unweighted analyses).
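A sketch of how percentages like these can be computed, using offensive_item as a hypothetical name for the offensiveness item (1 "strongly agree" through 5 "strongly disagree"), ft_native for the 0-to-100 thermometer, weight for the survey weight, and positive_both from the sketch above:

```
* Somewhat or strongly disagreed that the team name is offensive
gen disagree_offensive = inlist(offensive_item, 4, 5) if !missing(offensive_item)

* Weighted shares among subgroups with positive views of Native Americans
mean disagree_offensive [pweight=weight] if positive_both == 1
mean disagree_offensive [pweight=weight] if ft_native == 100
mean disagree_offensive [pweight=weight] if positive_both == 1 & ft_native == 100
```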

3. Sharrow et al 2021 indicated a module sample of 1,500, but the sample size fell to 1,024 in Model 3 of Table 1. My analysis indicates that this is largely due to missing values on the outcome variable (N=1,362), the NFL sophistication index (N=1,364), and the measure of resentment against Native Americans (N=1,329).

4. Data for my analysis. Stata code and output.

5. Social Science Quarterly recently published Levin et al 2022 "Validating and testing a measure of anti-semitism on support for QAnon and vote intention for Trump in 2020", which similarly estimates the effect of negative attitudes about a target group without excluding participants who favor the target group.

---

1.

In 2003, Melissa V. Harris-Lacewell wrote that (p. 222):

The defining works of White racial attitudes fail to grapple with the complexities of African American political thought and life. In these studies, Black people are a static object about which White people form opinions.

Researchers still sometimes make it difficult to analyze data from Black participants or don't report interesting data on Black participants. Helping to address this, Darren W. Davis and David C. Wilson have a new book Racial Resentment in the Political Mind (RRPM), with an entire chapter on African Americans' resentment toward Whites.

RRPM is a contribution to research on Black political attitudes, and its discussion of measurement of Whites' resentment toward Blacks is nice, especially for people who don't realize that standard measures of "racial resentment" aren't good measures of resentment. But let me discuss some elements of the book that I consider flawed.

---

2.

RRPM draws, at a high level, a parallel between Whites' resentment toward Blacks and Blacks' resentment toward Whites (p. 242):

In essence, the same model of a just world and appraisal of deservingness that guides Whites' racial resentment also guides African Americans' racial resentment.

That seems reasonable: the same model for resentment toward Whites and resentment toward Blacks. But RRPM proposes different items for a battery of resentment toward Blacks and for a battery of resentment toward Whites, and I think that different batteries for each type of resentment will undercut comparisons of the sizes of the effects of these two resentments, because one battery might capture true resentment better than the other.

Thus, especially for general surveys such as the ANES that presumably can't or won't devote space to resentment batteries tailored to each racial group, it might be better to measure resentment toward various groups with generalizable items, such as agreement or disagreement with statements such as "Whites have gotten more than they deserve" and "Blacks have gotten more than they deserve". That would presumably produce more valid comparisons of the estimated effects of resentment toward different groups than comparisons of batteries of different items would.

---

3.

RRPM suggests that the same resentment batteries not be given to all respondents (p. 241):

A clear outcome of this chapter is that African Americans should not be presented the same classic racial resentment survey items that Whites would answer (and perhaps vice versa)...

And from page 30:

African Americans and Whites have different reasons to be resentful toward each other, and each group requires a unique set of measurement items to capture resentment.

But not giving participants items measuring resentment of their own racial group doesn't seem like a good idea. A White participant could think that Whites have received more than they deserve on average, and a Black participant could think that Blacks have received more than they deserve on average. Omitting measures such as White resentment of Whites could therefore plausibly bias estimates of the effect of resentment, if resentment of one's own racial group influences a participant's attitudes about political phenomena.

---

RRPM discusses asking Blacks to respond to items measuring racial resentment toward Blacks: "No groups other than African Americans seem to be asked questions about self-hate" (p. 249). RRPM elsewhere qualifies this with "rarely": "That is, asking African Americans to answer questions about disaffection toward their own group is a task rarely asked of other groups" (p. 215).

The ANES 2016 pilot study did ask White participants about White guilt (e.g., "How guilty do you feel about the privileges and benefits you receive as a white American?") without asking any other racial groups about parallel guilt. Moreover, the CCES had (in 2016 and 2018 at least) an agree/disagree item asked of Whites and others that "White people in the U.S. have certain advantages because of the color of their skin", with no equivalent item about color-of-skin advantages for people who are not White.

But even if Black participants disproportionately receive resentment items directed at Blacks, the better way to address this inequality and to understand racial attitudes is to add resentment items directed at other groups.

---

4.

RRPM seems to suggest an asymmetry in that only Whites' resentment is normatively bad (p. 25):

In the end, African Americans' quest for civil rights and social justice is resented by Whites, and Whites' maintenance of their group dominance is resented by African Americans.

Davis and Wilson discussed RRPM in a video on the UC Public Policy Channel, with Davis suggesting that "a broader swath of citizens need to be held accountable for what they believe" (at 6:10) and that "...the important conversation we need to have is not about racists. Okay. We need to understand how ordinary American citizens approach race, approach values that place them in the same bucket as racists. They're not racists, but they support the same thing that racists support" (at 53:37).

But, from what I can tell, the ordinary American citizens in the same bucket as racists don't seem to be, say, people who support hiring preferences for Blacks for normatively good reasons and just happen to have the same policy preferences as people who support hiring preferences for Blacks because of racism against Whites. Instead, my sense is that the racism in question is limited to racism that causes racial inequality, as David C. Wilson suggested at 3:24 in the UC video:

And so, even if one is not racist, they can still exacerbate racial injustice and racial inequality by focusing on their values rather than the actual problem and any solutions that might be at bay to try and solve them.

---

Another apparent asymmetry is that RRPM mentions legitimizing racial myths throughout the book (vii, 3, 8, 21, 23, 28, 35, 47, 48, 50, 126, 129, 130, 190, 243, 244, 247, 261, 337, and 342), but legitimizing racial myths are not mentioned in the chapter on African Americans' resentment toward Whites (pp. 214-242). Figure 1.1 on page 8 of RRPM is a model of resentment with an arrow from legitimizing racial myths to resentment, but RRPM doesn't indicate what, if any, legitimizing racial myths inform resentment toward Whites.

Legitimizing myths are conceptualized on page 8 as follows:

Appraisals of deservingness are shaped by legitimizing racial myths, which are widely shared beliefs and stereotypes about African Americans and other minorities that justify their mistreatment and low status. Legitimizing myths are any coherent set of socially accepted attitudes, beliefs, values, and opinions that provide moral and intellectual legitimacy to the unequal distribution of social value (Sidanius, Devereux, and Pratto 1992).

But I don't see why legitimizing myths couldn't add legitimacy to unequal *treatment*. Presumably resentment flows from beliefs about the causes of inequality, so a belief that Whites are a main cause, or the only cause, of Black/White inequality could legitimize resentment toward Whites and, consequently, discrimination against Whites.

---

5.

The 1991 National Race and Politics Survey had a survey experiment, asking for agreement/disagreement to the item:

In the past, the Irish, the Italians, the Jews and many other minorities overcame prejudice and worked their way up.

Version 1: Blacks...
Version 2: New immigrants from Europe...

...should do the same without any special favors?

This experiment reflects the fact that responses to an item applying a general principle to a particular group might be influenced by the general principle and/or by the group.

Remarkably, the RRPM measurement of racial schadenfreude (Chapter 7) does not address this ambiguity, with items measuring participant feelings about only President Obama, such as the schadenfreude felt about "Barack Obama's being identified as one of the worst presidents in history". RRPM at least acknowledges this (p. 206):

Without a more elaborate research design, we cannot really determine whether the schadenfreude experienced by Republicans is due to his race or to some other issue.

---

6.

For an analysis of racial resentment in the political mind, RRPM remarkably doesn't substantively consider Asians, even if only as a target of resentment that could help test alternative explanations about the causes of resentment: like Whites, Asians on average have relatively positive outcomes in income and related measures, but Asians do not seem to be blamed for U.S. racial inequality as much as Whites are.

---

NOTES

1. From RRPM (p. 241):

When items designed on one race are automatically applied to another race under the assumption of equal meaning, it creates measurement invariance.

Maybe the intended meaning is something such as "When items designed on one race are automatically applied to another race, it assumes measurement invariance".

2. RRPM Figure 2.1 (p. 68) reports how resentment correlates with feeling thermometer ratings about Blacks and with feeling thermometer ratings about Whites, but not with the more intuitive measure of the *difference* in feeling thermometer ratings about Blacks and about Whites.
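A minimal Stata sketch of that comparison, assuming hypothetical variable names (resent01 for the resentment measure, ft_black and ft_white for the thermometers):

```
* Correlations with each thermometer and with the Black-minus-White difference
gen ft_diff = ft_black - ft_white      // positive values = rated Blacks above Whites
pwcorr resent01 ft_black ft_white ft_diff, sig
```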

---

The recent Rhodes et al 2022 Monkey Cage post indicated that:

...as [Martin Luther] King [Jr.] would have predicted, those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election and condone serious violations of democratic norms.

Regarding this inference about the legitimacy of a democratic election, Rhodes et al 2022 reported results for an item that measured perceptions about the legitimacy of Joe Biden's election as president in 2020. But a potential confound is that reported perceptions of the legitimacy of the 2020 U.S. presidential election may reflect who won that election rather than attitudes about elections per se. One way to address this confound is to use a measure of reported perceptions of the legitimacy of the U.S. presidential election *in 2016*, which Donald Trump won.

I checked data from the Democracy Fund Voter Study Group VOTER survey for responses to the items below, which can help address this confound:

[from 2016 and 2020] Over the past few years, Blacks have gotten less than they deserve.

[from 2016] How confident are you that the votes in the 2016 election across the country were accurately counted?

[from 2020] How confident are you that votes across the United States were counted as voters intended in the elections this November?

Results are below:

The dark columns are for respondents who strongly disagreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as denying the existence of unfair racial inequality. The light columns are for respondents who strongly agreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as most strongly asserting the existence of unfair racial inequality.

Comparison of the 2020 column for "strongly disagree" to the 2020 column for "strongly agree" suggests that, as expected based on Rhodes et al 2022, skepticism about votes in 2020 being counted accurately was more common among respondents who most strongly denied the existence of unfair racial inequality than among respondents who most strongly asserted the existence of unfair racial inequality.

But comparison of the 2016 column for "strongly disagree" to the 2016 column for "strongly agree" suggests that the general phrasing of "those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election" does not hold for every election, such as the presidential election immediately prior to the election that was the focus of the relevant item in Rhodes et al 2022.
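For reference, here is a minimal Stata sketch of the comparison, assuming hypothetical variable names for the VOTER survey panel items (deserve16 and deserve20 coded 1 "strongly agree" through 5 "strongly disagree", conf16 and conf20 coded 1 "very confident" through 4 "not at all confident", and weight16 and weight20 for the survey weights); the actual names and codes differ:

```
* Skepticism: not too confident or not at all confident that votes were counted accurately
gen skeptic16 = inlist(conf16, 3, 4) if !missing(conf16)
gen skeptic20 = inlist(conf20, 3, 4) if !missing(conf20)

* Weighted shares among those who strongly disagreed (5) or strongly agreed (1)
* that Blacks have gotten less than they deserve
tab deserve16 skeptic16 [aweight=weight16] if inlist(deserve16, 1, 5), row nofreq
tab deserve20 skeptic20 [aweight=weight20] if inlist(deserve20, 1, 5), row nofreq
```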

---

NOTE

1. Data source. Stata do file. Stata output. Code for the R plot.

---

Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, like in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this holds net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But using weights for the full samples compares respondents in 2004 and 2008, all of whom were interviewed face-to-face, with respondents in 2012, some of whom were interviewed face-to-face and some of whom completed the survey online.

Using weights only for the face-to-face mode, the p-value was not under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, net of controls and without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
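A minimal Stata sketch of these regressions, assuming hypothetical variable names for the ANES Time Series Cumulative Data File (resent01 for the 0-to-1 racial resentment index, white for an indicator of White respondents, ftf for an indicator of face-to-face mode, weight_full and weight_ftf for the full-sample and face-to-face weights, and a controls list standing in for the covariates named above); the actual ANES variable names differ:

```
local controls "gender age educ employed married classid income ideology"

* Full-sample weights: 2012 mixes face-to-face and internet respondents
reg resent01 ib2008.year `controls' [pweight=weight_full] ///
    if white == 1 & inlist(year, 2004, 2008, 2012)

* Face-to-face weights and respondents only, holding survey mode constant
reg resent01 ib2008.year `controls' [pweight=weight_ftf] ///
    if white == 1 & ftf == 1 & inlist(year, 2004, 2008, 2012)
```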

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to the survey mode difference of -2.610.

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.

---

Criminology recently published Schutten et al 2021 "Are guns the new dog whistle? Gun control, racial resentment, and vote choice".

---

I'll focus on experimental results from Schutten et al 2021 Figure 1. Estimates for respondents low in racial resentment indicated a higher probability of voting for a hypothetical candidate:

[1] when the candidate was described as a Democrat, compared to when the candidate was described as a Republican,

[2] when the candidate was described as supporting gun control, compared to when the candidate was described as having a policy stance on a different issue, and

[3] when the candidate was described as not being funded by the NRA, compared to when the candidate was described as being funded by the NRA.

Patterns were reversed for respondents high in racial resentment. The relevant 95% confidence intervals did not overlap for five of the six patterns, with the exception being for the NRA funding manipulation among respondents high in racial resentment; eyeballing, it doesn't look like the p-value is under p=0.05 for that estimated difference.

---

For the estimate that participants low in racial resentment were less likely to vote for a hypothetical candidate described as being funded by the NRA than for a hypothetical candidate described as not being funded by the NRA, Schutten et al 2021 suggested that this might reflect a backlash against "the use of gun rights rhetoric to court prejudiced voters" (p. 20). But, presuming that the content of the signal provided by the mention of NRA funding is largely or completely racial, the "backlash" pattern is also consistent with a backlash against support of a constitutional right that many participants low in racial resentment might perceive to be disproportionately used by Whites and/or rural Whites.

Schutten et al 2021 conceptualized participants low in racial resentment as "nonracists" (p. 3) and noted that "recent evidence suggests that those who score low on the racial resentment scale 'favor' Blacks (Agadjanian et al., 2021)" (p. 21), but I don't know why the quotation marks around "favor" are necessary, given that there is good reason to characterize a nontrivial percentage of participants low in racial resentment as biased against Whites. For example, my analysis of data from the ANES 2020 Time Series Study indicated that about 40% to 45% of Whites (and about 40% to 45% of the general population) who fell at least one standard deviation below the mean level of racial resentment rated Whites lower on the 0-to-100 feeling thermometers than they rated Blacks, Hispanics, and Asians/Asian-Americans. (This is not merely rating Whites lower on average than these groups; it is rating Whites lower than each of these three groups.)
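A minimal Stata sketch of that calculation, assuming hypothetical variable names for the ANES 2020 data (resent01 for racial resentment, ft_white, ft_black, ft_hisp, and ft_asian for the thermometers, white for an indicator of White respondents, and weight for the survey weight):

```
* Respondents at least one standard deviation below the mean level of racial resentment
quietly summarize resent01 [aweight=weight]
gen low_rr = resent01 < r(mean) - r(sd) if !missing(resent01)

* Rated Whites lower than each of Blacks, Hispanics, and Asians/Asian-Americans
gen rates_whites_lowest = ft_white < ft_black & ft_white < ft_hisp & ft_white < ft_asian ///
    if !missing(ft_white, ft_black, ft_hisp, ft_asian)

* Weighted share among low-resentment White respondents
mean rates_whites_lowest [pweight=weight] if low_rr == 1 & white == 1
```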

Schutten et al 2021 indicated that (p. 4):

Importantly, dog whistling is not an attempt to generate racial prejudice among the public but to arouse and harness latent resentments already present in many Americans (Mendelberg, 2001).

Presumably, this dog whistling can activate the racial prejudice against Whites that many participants low in racial resentment have been comfortable expressing on feeling thermometers.

---

NOTES

1. Schutten et al 2021 claimed that (p. 8):

If racial resentment is primarily principled conservatism, its effect on support for government spending should not depend on the race of the recipient.

But if racial resentment were, say, 70% principled ideology and 30% racial prejudice, racial resentment should still associate with racial discrimination due to the 30%.

And I think that it's worth considering whether racial resentment should also be described as being influenced by progressive ideology. If principled conservatism can cause participants to oppose special favors for Blacks, presumably a principled progressivism can cause participants to support special favors for Blacks. If so, it seems reasonable to also conceptualize racial resentment as the merging of principled progressivism and prejudice against Whites, given that both could presumably cause support for special favors for Blacks.

2. Schutten et al 2021 claimed that (p. 16):

The main concern about racial resentment is that it is a problematic measure of racial prejudice among conservatives but a suitable measure among nonconservatives (Feldman & Huddy, 2005).

But I think that major concerns about racial resentment are present even among nonconservatives. As I indicated in a prior blog post, I think that the best case against racial resentment has two parts. First, racial resentment captures racial attitudes in a way that is difficult if not impossible to disentangle from nonracial attitudes; that concern remains among nonconservatives, such as the possibility that a nonconservative would oppose special favors for Blacks because of a nonracial opposition to special favors.

Second, many persons at low racial resentment have a bias against Whites, and limiting the sample to nonconservatives if anything makes it more likely that the estimated effect of racial resentment is capturing the effect of bias against Whites.

3. Figure 1 would have provided stronger evidence about p<0.05 differences between estimates if it had plotted 83.4% confidence intervals.

4. [I deleted this comment because Justin Pickett (co-author on Schutten et al 2021) noted in review of a draft version of this post that this comment suggested an analysis that was reported in Schutten et al 2021, that an analysis be limited to participants low in racial resentment and an analysis be limited to participants high in racial resentment. Thanks to Justin for catching that.]

5. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

---

See here for a discussion of the Rice et al. 2021 mock juror experiment.

My reading of the codebook for the Rice et al. 2021 experiment is that, among other items, the pre-election survey included at least one experiment (UMA303_rand), then a battery of items measuring racism and sexism, and then at least one other experiment. Then, among other items, the post-election survey included the CCES Common Content racial resentment and FIRE items, and then the mock juror experiment.

The pre-election battery of items measuring racism and sexism included three racial resentment items, a sexism battery, three stereotype items about Blacks and Whites (lazy, intelligent, and violent), and 0-to-100 feeling thermometers about Whites and about Blacks. In this post, I'll report some analyses of how well these pre-election measures predicted discrimination in the Rice et al. 2021 mock juror experiment.

---

The first plot reports results among White participants who might be expected to have a pro-Black bias. For example, the first estimate is for White participants who had the lowest level of racial resentment. The dark error bars indicate 83.4% confidence intervals, to help compare estimates to each other. The lighter, longer error bars are 95% confidence intervals, which are more appropriate for comparing an estimate to a given number such as zero.

The plotted outcome is whether the participant indicated that the defendant was guilty or not guilty. The -29% for the top estimate indicates that, among White participants who had the lowest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 29 percentage points lower than the percentage that rated the White defendant guilty.

The plot below reports results among White participants who might be expected to have a pro-White bias. The 26% for the top estimate indicates that, among White participants who had the highest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 26 percentage points higher than the percentage that rated the White defendant guilty.
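A minimal Stata sketch of the type of estimate plotted above, assuming hypothetical variable names (guilty for the verdict outcome, black_defendant for the defendant-race manipulation, rr_lowest for an indicator of the lowest level of the racial resentment index, and white for White participants); the actual variable names in the Rice et al. 2021 data differ:

```
* The coefficient on black_defendant is the difference in the proportion rating the
* defendant guilty (Black-defendant condition minus White-defendant condition),
* among White participants at the lowest level of racial resentment;
* level(83.4) requests the 83.4% confidence interval shown by the darker error bars
reg guilty i.black_defendant if white == 1 & rr_lowest == 1, robust level(83.4)
```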

---

The Stata output reports additional results, for the sentence length outcome, and for other predictors: a four-item racial resentment index from the post-election survey, plus individual stereotype items (such as for White participants who rated Blacks higher than Whites on an intelligence scale). Results for the sentence length outcome are reported for all White respondents and, in later analyses, for only those White respondents who indicated that the defendant was guilty.

---

NOTE

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Pro-Black plot: dataset and code. Pro-White plot: dataset and code.
