Social Science Quarterly recently published Cooper et al. 2021 "Heritage Versus Hate: Assessing Opinions in the Debate over Confederate Monuments and Memorials". The conclusion of the article notes that:

...we uncover significant evidence that the debate over Confederate monuments can be resoundingly summarized as "hate" over "heritage"

---

In a prior post, I noted that:

...when comparing the estimated effect of predictors, inferences can depend on how well each predictor is measured, so such analyses should discuss the quality of the predictors.

Cooper et al. 2021 measured "heritage" with a dichotomous predictor and measured "hate" with a five-level predictor, and this difference in the precision of the measurements could have biased their research design toward a larger estimate for hate than for heritage. [See note 3 below for a discussion].

I'm not suggesting that the entire difference between their estimates for heritage and hate is due to the number of levels of the predictors, but I think that a better peer review would have helped eliminate that flaw in the research design, perhaps by requiring the measure of hate to be dichotomized as close as possible to the 70/30 split of the heritage measure.

---

Here is the lone measure of heritage used in Cooper et al. 2021:

"Do you consider yourself a Southerner, or not?"

Table 1 of the article indicates that 70% of respondents identified as a Southerner, so even if this were a face-valid measure of Southern heritage, the measure places persons at the 35th percentile of Southern heritage into its highest level.

Maybe there are more recent data that undercut this, but data from the Spring 2001 Southern Focus Poll indicated that only about 1 in 3 respondents who identified as a Southerner said that being a Southerner was "very important" to them. About 1 in 3 respondents who identified as a Southerner in that 2001 poll said that being a Southerner was "not at all important" or "not very important" to them, and, without other evidence, I can't think of a good reason why these participants belong in the highest level of a measure of Southern heritage.

---

Wright and Esses 2017 had a more precise measure for heritage and found sufficient evidence to conclude that (p. 232):

Positive attitudes toward the Confederate battle flag were more strongly associated with Southern pride than with racial attitudes when accounting for these covariates.

How does Cooper et al. 2021 address the Wright and Esses 2017 result, which conflicts with the result from Cooper et al. 2021 and which used a related outcome variable and a better measure of heritage? The Cooper et al. 2021 article doesn't even mention Wright and Esses 2017.

---

A better peer review might have caught the minimum age of zero years old in Table 1 and objected to the description of "White people are currently under attack in this country" as operationalizing "racial resentment toward blacks" (pp. 8-9), given that this item doesn't even mention or refer to Blacks. I suppose that respondents who hate White people would be reluctant to agree that White people are under attack regardless of whether that is true. But that's not the "hate" that is supposed to be measured.

Estimating the effect of "hate" for this type of research should involve comparing estimates net of controls for respondents who have a high degree of hate for Blacks to respondents who are indifferent to Blacks. Such estimates can be biased if the estimates instead include data from respondents who have more negative feelings about Whites than about Blacks. In a prior post, I discussed Carrington and Strother 2020, which measured hate with a Black/White feeling thermometer difference and thus permitted estimation of how much of the effect of hate is due to respondents rating Blacks higher than Whites on the feeling thermometers.
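
Here is a minimal sketch of how such a difference measure could be constructed, using hypothetical variable names and simulated thermometer ratings; a measure like this permits separating respondents who rate Blacks higher than Whites from respondents who rate the groups equally:

```r
# Minimal sketch, with hypothetical variable names and simulated data:
# construct a Black/White feeling thermometer difference and flag
# respondents who rate Blacks higher than Whites, so that their influence
# on the estimated effect of "hate" can be assessed separately.
set.seed(123)
d <- data.frame(
  therm_white = sample(0:100, 500, replace = TRUE),
  therm_black = sample(0:100, 500, replace = TRUE)
)

# Positive values: rates Whites above Blacks; zero: indifferent;
# negative values: rates Blacks above Whites.
d$therm_diff <- d$therm_white - d$therm_black
d$rates_blacks_higher <- d$therm_diff < 0

# Share of simulated respondents rating Blacks above Whites
prop.table(table(d$rates_blacks_higher))
```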

---

Did Cooper et al. have access to better measures of hate than the item "White people are currently under attack in this country"? The Winthrop Poll site didn't list the November 2017 survey on its archived poll page for 2017, but, from what I can tell, this Winthrop University post discusses the survey, which included a better measure of racial resentment toward blacks. I don't know what information the peer reviewers of Cooper et al. 2021 had access to, but, generally, a journal reform that I would like to see is for peer reviewers of manuscripts reporting on a survey to be given access to the full set of survey items.

---

In conclusion, for a study that compares the estimated effects of heritage and hate, I think that at least three things are needed: a good measure of heritage, a good measure of hate, and measures of heritage and hate that are of similar quality. I don't think that Cooper et al. 2021 has any of those things.

---

NOTES

1. The Spring 2001 Southern Focus Poll study was conducted by the Odum Institute for Research in Social Science of the University of North Carolina at Chapel Hill. Citation: Center for the Study of the American South, 2001, "Southern Focus Poll, Spring 2001", https://hdl.handle.net/1902.29/D-31552, UNC Dataverse, V1.

2. Stata output.

3. Suppose that mean support for leaving Confederate monuments as they are were 70% among the top 20 percent of respondents by Southern pride, 60% among the next 20 percent of respondents by Southern pride, 50% among the middle 20 percent, 40% among the next 20 percent, and 30% among the bottom 20 percent of respondents by Southern pride. And let's assume that these bottom 20 percent are indifferent about Southern pride and don't hate Southerners.

The effect of Southern pride could be estimated at 40 percentage points, which is the difference in support between the top 20 percent and the bottom 20 percent by Southern pride. However, if we grouped the top 60 percent together and the bottom 40 percent together, the mean percentage support would respectively be 60% and 35%, for an estimated effect of 25 percentage points. In this illustration, the estimated effect for the five-level predictor is larger than the estimate for the dichotomous predictor, even with the same data.
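
Here is a minimal R sketch of the arithmetic in this hypothetical:

```r
# Hypothetical quintile means from the illustration above: percent support
# for leaving Confederate monuments as they are, from the top 20 percent
# by Southern pride down to the bottom 20 percent.
quintile_means <- c(70, 60, 50, 40, 30)

# Five-level predictor: top quintile versus bottom quintile.
effect_five_level <- quintile_means[1] - quintile_means[5]                    # 40 points

# Dichotomous predictor: top 60 percent versus bottom 40 percent.
effect_dichotomous <- mean(quintile_means[1:3]) - mean(quintile_means[4:5])   # 60 - 35 = 25 points

c(five_level = effect_five_level, dichotomous = effect_dichotomous)
```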

Here is a visual illustration:

The above is a hypothetical to illustrate the potential bias in measuring one predictor with five levels and another predictor with two levels. I have no idea whether this had any effect on the results reported in Cooper et al. 2021. But, with a better peer review, readers would not need to worry about this type of bias in the Cooper et al. 2021 research design.

Tagged with: , , ,

The plot below reports the mean rating of Whites, Blacks, Hispanics, and Asians by White, Black, Hispanic, and Asian respondents, using data from the preliminary release of the 2020 ANES Time Series Study.

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

2. Stata code. Stata output. R code for the plots. Dataset for the R plot.

Tagged with: ,

The journal Politics, Groups, and Identities recently published Mangum and Block Jr. 2021 "Perceived racial discrimination, racial resentment, and support for affirmative action and preferential hiring and promotion: a multi-racial analysis".

---

The article notes that (p. 13):

Intriguingly, blame [of racial and ethnic minorities] tends to be positively associated with support for preferential hiring and promotion, and, in 2008, this positive relationship is statistically significant for Black and Asian respondents (Table A4; lower right graph in Figure 6). This finding is confounding...

But, from what I can tell, this finding might be because the preferential hiring and promotion outcome variable was coded in the reverse of the intended direction. Table 2 of the article indicates that a higher percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion, but Figures 1 and 2 indicate that a lower percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion.

My analysis of data from the 2004 National Politics Study indicated that the preferential hiring and promotion results in Table 2 are correct for this survey and that blame of racial and ethnic minorities is negatively associated with favoring preferential hiring and promotion.
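
A check along the lines of the sketch below, with hypothetical variable names and simulated data, could catch that type of reversal, by comparing the coded outcome's group means to the percentages in a descriptive table such as Table 2:

```r
# Minimal sketch, with hypothetical variable names and simulated data:
# if a descriptive table indicates that Black respondents favor
# preferential hiring at the highest rate, the group means of the coded
# 0/1 outcome should show the same ordering; a reversed coding would
# show the opposite ordering.
set.seed(42)
d <- data.frame(
  race = sample(c("White", "Black", "Hispanic", "Asian"), 1000, replace = TRUE),
  favor_pref_hiring = rbinom(1000, 1, 0.4)
)

# Mean of the coded outcome by group
tapply(d$favor_pref_hiring, d$race, mean)
```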

---

Other apparent errors in the article include:

Page 4:

Borrowing from the literature on racial resentment possessed (Feldman and Huddy 2005; Kinder and Sanders 1996; Kinder and Sears 1981)...

Figures 3, 4, 5, and 6:

...holding control variable constant

Page 15:

African Americans, Hispanics, and Asians support affirmative action more than are Whites.

Page 15:

Preferential hiring and promotion is about who deserves special treatment than affirmative action, which is based more on who needs it to overcome discrimination.

Note 2:

...we code the control variables to that they fit a 0-1 scale...

---

Moreover, the article indicates that "the Supreme Court ruled that affirmative action was constitutional in California v. Bakke in 1979", which is not the correct year (the Bakke decision was issued in 1978). And the article seems to make inconsistent claims about affirmative action: "affirmative action and preferential hiring and promotion do not benefit Whites" (p. 15), but "White women are the largest beneficiary group (Crosby et al. 2003)" (p. 13).

---

At least some of these flaws seem understandable. But I think that the number of flaws in this article is remarkably high, especially for a peer-reviewed journal with such a large editorial group: Politics, Groups, and Identities currently lists a 13-member editorial team, a 58-member editorial board, and a 9-member international advisory board.

---

NOTES

1. The article claims that (p. 15):

Regarding all races, most of the racial resentment indicators are significant statistically and in the hypothesized direction. These findings lead to the conclusion that preferential hiring and promotion foster racial thinking more than affirmative action. That is, discussions of preferential hiring and promotion lead Americans to consider their beliefs about minorities in general and African Americans in particular more than do discussions of affirmative action.

However, I'm not sure how the claim that "preferential hiring and promotion foster racial thinking more than affirmative action" is justified by the article's results regarding racial resentment.

Maybe this refers to the slopes being steeper for the preferential hiring and promotion outcome than for the affirmative action outcome, but it would be a lot easier to eyeball slopes across figures if the y-axes were consistent; instead, the y-axes run from .4 to .9 (Figure 3), .4 to 1 (Figure 4), .6 to 1 (Figure 5), and .2 to 1 (Figure 6).

Moreover, Figure 1 is a barplot with a y-axis that runs from .4 to .8, and Figure 2 is a barplot with a y-axis that runs from .5 to .9; neither barplot starts at zero. It might make sense for journals to have an editorial board member or another person devoted to reviewing figures, to eliminate errors and improve presentation.

For example, the article indicates that (p. 6):

Figures 1 and 2 display the distribution of responses for our re-coded versions of the dependent variables graphically, using bar graphs containing 95% confidence intervals. To interpret these graphs, readers simply check to see if the confidence intervals corresponding to any given bar overlap with those of another.

But if the intent is to use confidence interval overlap to assess whether there is sufficient evidence at p<0.05 of a difference between groups, then confidence intervals closer to 85% are more appropriate. I haven't always known this, but this does seem to be knowledge that journal editors should use to foster better figures.
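
As a minimal illustration of why, for two independent estimates with roughly equal standard errors, non-overlap of roughly 83% or 84% intervals corresponds to about p<0.05:

```r
# For two independent estimates with equal standard errors, confidence
# intervals just touch when the gap between the estimates is 2 * m * SE,
# where m is the interval's critical value. Setting the two-sample
# z statistic, 2 * m * SE / (SE * sqrt(2)), equal to 1.96 gives
# m = 1.96 / sqrt(2).
m <- qnorm(0.975) / sqrt(2)    # about 1.39

# Confidence level corresponding to that multiplier: about 83.4%
conf_level <- 2 * pnorm(m) - 1
round(conf_level, 3)
```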

2. Data citation:

James S. Jackson, Vincent L. Hutchings, Ronald Brown, and Cara Wong. National Politics Study, 2004. ICPSR24483-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2009-03-23. doi:10.3886/ICPSR24483.v1.

Tagged with: , ,

[UPDATE] The color scheme for the first two plots has been changed, based on a comment from John, below. Original plots had the red and blue reversed [1, 2].

---

Below are plots of 0-to-100 feeling thermometer responses from the 2020 ANES Social Media Study.

---

The first plot indicates that, compared to Blacks in the oldest age category, a higher percentage of Blacks in the youngest age category reported cold feelings (a rating under 50) toward the four included racial groups:

---

This second plot indicates that, among White respondents, the age pattern observed for Black respondents appears only in ratings of Whites:

---

I checked data in this third plot after reading the Lee and Huang 2021 post discussing recent anti-Asian violence, which indicated that:

A recent study finds that in fact, Christian nationalism is the strongest predictor of xenophobic views of COVID-19, and the effect of Christian nationalism is greater among white respondents, compared to Black respondents.

The 2020 Social Media Study didn't appear to have good items for measuring Christian nationalism, but in the plot below I used White born-again Christian Trump voters as a reasonably related group. A relatively low percentage of this group rated Asians under 50, compared to the percentage of Black respondents who rated Asians under 50.

---

And the fourth plot is for all White respondents compared to all Black respondents:

---

NOTES

[1] Data source: American National Election Studies. 2021. ANES 2020 Social Media Study: Pre-Election Data [dataset and documentation]. March 8, 2021 version. www.electionstudies.org.

[2] Stata code for the analysis and R code for the plots. Data for plots 1, 2, 3, and 4. Stata output.

Tagged with:

This plot reports disaggregated results from the American National Election Studies 2020 Time Series Study pre-election survey item:

On another topic: How much do you feel it is justified for people to use violence to pursue their political goals in this country?

Not shown is that 83% of White Democrats and 92% of White Republicans selected "Not at all" for this item.

Regression output controlling for party identification, gender, and race is in the Stata output file, along with uncertainty estimates for the plot percentages.

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Pre-Election Data [dataset and documentation]. February 11, 2021 version. www.electionstudies.org.

2. Stata code for the analysis and R code for the plot. Dataset for the R plot.

Tagged with: , , ,

The Chudy 2021 Journal of Politics article "Racial Sympathy and Its Political Consequences" concerns White racial sympathy for Blacks.

More than a decade ago, Hutchings 2009 reported evidence about White racial sympathy for Blacks. Below is a table from Hutchings 2009 indicating that, among White liberals and White conservatives, sympathy for Blacks predicted, at p<0.05, support for government policies explicitly intended to benefit Blacks, such as government aid to Blacks, controlling for factors such as anti-Black stereotypes:

Chudy 2021 thanked Vincent Hutchings in the acknowledgments, and Vincent Hutchings is listed as co-chair of Jennifer Chudy's "Racial Sympathy in American Politics" dissertation. But see whether you can find in the Chudy 2021 JOP article an indication that Hutchings 2009 had reported evidence that White racial sympathy for Blacks predicted support for government policies explicitly intended to benefit Blacks.

Here is a passage from Chudy 2021 referencing Hutchings 2009:

I start by examining white support for "government aid to blacks," a broad policy area that has appeared on the ANES since the 1970s. The question asks respondents to place themselves on a 7-point scale that ranges from "Blacks Should Help Themselves" to "Government Should Help Blacks." Previous research on this question has found that racial animus leads some whites to oppose government aid to African Americans (Hutchings 2009). This analysis examines whether racial sympathy leads some white Americans to offer support for this contentious policy area.

I think that the above passage can reasonably be read as suggesting the incorrect claim that the Hutchings 2009 "previous research on this question" did not examine "whether racial sympathy leads some white Americans to offer support for this contentious policy area [of government aid to African Americans]".

---

NOTES:

1. Chudy 2021 reported results from an experiment that varied the race of a target culprit and asked participants to recommend a punishment. Chudy 2021 Figure 2 plotted estimates of recommended punishments at different levels of racial sympathy.

The Chudy 2021 analysis used a linear regression, which produced an estimated difference by race on a 0-to-100 scale of -22 at the lowest level of racial sympathy and of 41 at the highest level of racial sympathy. These differences can be seen in my plot below to the left, with a racial sympathy index coded from 0 through 16.

However, a linear relationship might not be a correct presumption. The plot to the right reports estimates calculated at each level of the racial sympathy index, so that the estimate at the highest level of racial sympathy is not influenced by cases at other levels of racial sympathy.
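
Here is a minimal sketch of the two specifications, with hypothetical variable names and simulated data; the per-level specification lets the culprit-race difference at the highest sympathy level be estimated only from cases at that level:

```r
# Minimal sketch, with hypothetical variable names and simulated data:
# a linear-in-sympathy interaction model versus a model that estimates
# the culprit-race difference separately at each level of a racial
# sympathy index coded 0 through 16.
set.seed(7)
d <- data.frame(
  sympathy = sample(0:16, 2000, replace = TRUE),
  white_culprit = rbinom(2000, 1, 0.5)
)
d$punishment <- 50 + 2 * d$white_culprit * d$sympathy + rnorm(2000, 0, 15)

# Linear specification: the race gap is constrained to change linearly
# with the sympathy index.
fit_linear <- lm(punishment ~ white_culprit * sympathy, data = d)

# Per-level specification: the race gap at each sympathy level is
# estimated only from cases at that level.
fit_by_level <- lm(punishment ~ white_culprit * factor(sympathy), data = d)

summary(fit_linear)
summary(fit_by_level)
```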

2. Chudy 2021 Figure 2 plots results from Chudy 2021 Table 5, but using a reversed outcome variable for some reason.

3. Chudy 2021 used the term "predicted probability" to discuss the Figure 2 / Table 5 results, but these results are predicted levels of an outcome variable that had eight levels, from "0-10 hours" to "over 70 hours" (see the bottom of the final page in the Chudy 2021 supplemental web appendix).

4. The bias detected in this experiment across all levels of racial sympathy was 13 units on a 0-to-100 scale, disfavoring the White culprit relative to the Black culprit (p=0.01) [svy: reg commservice whiteblackculprit].

5. Code for my analyses.

Tagged with:

The plot below is from Strickler and Lawson 2020 "Racial conservatism, self-monitoring, and perceptions of police violence":

I thought that the plot might be improved:

---

Key differences between the plots:

1. The original plot has a legend, which requires readers to match colors in a legend to colors of estimates. The revised plot labels the estimates without using a legend.

2. The original plot reports treatment effects on a relative scale. The revised plot reports estimates on an absolute scale, so that readers can directly see the mean percentages that rated the shooting justified, for each group in each condition.

3. The revised plot uses 83% confidence intervals, so that readers can use non-overlaps in the confidence intervals to get a sense of whether p<0.05 for a given comparison (a minimal sketch of this approach appears after this list).

4. The revised plot reverses the axes and stacks the panels vertically, so that, for instance, it's easier to perceive that the percentage of non-White respondents in the control who rated the shooting as justified is lower than the percentage of White respondents in the control who rated the shooting as justified, at about p=0.05.
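
Here is a minimal sketch of that plotting approach, using made-up percentages and standard errors rather than the Strickler and Lawson 2020 data:

```r
# Minimal sketch with made-up numbers: absolute percentages, 83%
# confidence intervals, and vertically stacked panels with direct labels
# instead of a legend.
library(ggplot2)

d <- data.frame(
  group     = rep(c("White respondents", "non-White respondents"), 2),
  condition = rep(c("Control", "Treatment"), each = 2),
  pct       = c(55, 40, 60, 45),   # made-up mean percentages rating the shooting justified
  se        = c(3, 4, 3, 4)        # made-up standard errors
)
z83 <- qnorm(1 - (1 - 0.83) / 2)   # critical value for an 83% confidence interval

ggplot(d, aes(x = pct, y = group)) +
  geom_pointrange(aes(xmin = pct - z83 * se, xmax = pct + z83 * se)) +
  facet_wrap(~ condition, ncol = 1) +
  labs(x = "Percent rating the shooting justified", y = NULL) +
  theme_minimal()
```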

---

The plot below repeats the plot above (left) and adds the same plot but with x-axes for each panel (right):

---

NOTES

1. Thanks to Ryan Strickler for sending me data and code for the article.

2. Code for the paired plot. Data for the plots.

3. Prior discussion of Strickler and Lawson 2020.

4. Other plot improvement posts.

Tagged with: , ,