Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making".

---

See the prior post describing the mock juror experiment in Rice et al. 2021.

The Rice et al. 2021 team kindly cited my article questioning racial resentment as a valid measure of racial animus. But Rice et al. 2021 interpreted their results as evidence for the validity of racial resentment:

Our results also suggest that racial resentment is a valid measure of racial animus (Jardina and Piston 2019) as it performs exactly as expected in an experimental setting manipulating the race of the defendant.

However, my analyses of the Rice et al. 2021 data indicated that a measure of sexism sorted White participants by their propensity to discriminate for Bradley Schwartz or Jamal Gaines:

I don't think that the evidence in the above plot indicates that sexism is a valid measure of racial animus, so I'm not sure that racial resentment sorting White participants by their propensity to discriminate for Bradley or Jamal means that racial resentment is a valid measure of racial animus, either.

---

I think that the two best arguments against racial resentment as a measure of anti-Black animus are:

[1] On its face, racial resentment plausibly captures non-racial attitudes, and it is not clear that statistical control permits any residual association of racial resentment with an outcome to be interpreted as anti-Black animus, given that racial resentment net of statistical control often predicts outcomes that are not theoretically linked to racial attitudes.

[2] Persons at low levels of racial resentment often disfavor Whites relative to Blacks (as reported in this post and in the Rice et al. 2021 mock juror experiment), so the estimated effect of racial resentment cannot be interpreted as only the effect of anti-Black animus. In these cases, the low end of the racial resentment measure appears to contain a sufficient percentage of respondents who dislike Whites in absolute or at least relative terms that indifference to Whites might plausibly be better represented at some location between the ends of the measure. But the racial resentment measure does not have a clear indifference point, such as 50 on a 0-to-100 feeling thermometer rating, so, even if argument [1] is addressed and statistical control isolates the effect of racial attitudes, it's not clear how racial resentment could be used to accurately estimate the effect of only anti-Black animus.

---

NOTES

1. The sexism measure used responses to the items below, which loaded onto one factor among White participants in the data:

[UMA306bSB] We should do all we can to make sure that women have the same opportunities in society as men.

[UMA306c] We would have fewer problems if we treated men and women more equally.

[UMA306f] Many women are actually seeking special favors, such as hiring policies that favor them over men, under the guise of asking for "equality."

[UMA306g] Women are too easily offended.

[UMA306h] Men are better suited for politics than are women.

[CC18_422c] When women lose to men in a fair competition, they typically complain about being discriminated against.

[CC18_422d] Feminists are making entirely reasonable demands of men.

Responses to these items loaded onto a different factor:

[UMA306d] Women should be cherished and protected by men.

[UMA306e] Many women have a quality of purity that few men possess.
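
For readers who want to check the factor structure themselves, here is a minimal R sketch of the kind of analysis involved. The item codes are those listed above, but the data frame `cces`, the `white` indicator, and the reverse-coding choices are assumptions for illustration, not my actual analysis code (which is in the Stata files linked in note 2):

```r
# Exploratory factor analysis of the nine sexism items among White
# participants. Assumes `cces` holds the items as numeric columns and
# a 0/1 `white` indicator; both names are placeholders.
items <- c("UMA306bSB", "UMA306c", "UMA306d", "UMA306e", "UMA306f",
           "UMA306g", "UMA306h", "CC18_422c", "CC18_422d")
dat <- subset(cces, white == 1, select = items)

# Reverse-code the pro-equality items so that higher values indicate
# more sexism (which items need reversal depends on the codebook).
for (v in c("UMA306bSB", "UMA306c", "CC18_422d")) {
  dat[[v]] <- max(dat[[v]], na.rm = TRUE) + 1 - dat[[v]]
}

# A two-factor solution should load UMA306d and UMA306e on a separate
# factor from the other seven items.
fit <- factanal(na.omit(dat), factors = 2, rotation = "promax")
print(fit$loadings, cutoff = 0.3)
```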

2. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Data and code for the sexism plot.

3. I plan a follow-up post about how well different measures predicted racial bias in the experiment.


Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making". In contrast to the forthcoming Peyton and Huber 2021 article at the Journal of Politics that I recently blogged about, Rice et al. 2021 reported evidence that racial resentment predicted discrimination among Whites.

---

Rice et al. 2021 concerned a mock juror experiment regarding an 18-year-old starting point guard on his high school basketball team who was accused of criminal battery. Participants indicated whether the defendant was guilty or not guilty and suggested a prison sentence length from 0 to 60 months. The experimental manipulation was that the defendant was randomly assigned to be named Bradley Schwartz or Jamal Gaines.

Section 10 of the Rice et al. 2021 supplementary material has nice plots of the estimated discrimination at given levels of racial resentment, indicating, for the guilty outcome, that White participants at low racial resentment were less likely to indicate that Jamal was guilty compared to Bradley, but that White participants at high racial resentment were more likely to indicate that Jamal was guilty compared to Bradley. Results were similar for the sentence length outcome, but the 95% confidence interval at high racial resentment slightly overlapped zero.
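
For illustration, here is a minimal R sketch of the kind of interaction model that underlies such plots. The variable names (`guilty`, `jamal`, `resentment`) and the data frame `jurors` are placeholders, not the names in the replication data, and I'm assuming a 0-to-1 resentment scale:

```r
# Logit of the guilty verdict on the treatment indicator, racial
# resentment, and their interaction, fit among White participants.
fit <- glm(guilty ~ jamal * resentment, family = binomial, data = jurors)

# Estimated discrimination (Jamal minus Bradley) in the predicted
# probability of a guilty verdict, across the resentment scale.
rr <- seq(0, 1, by = 0.25)
p_jamal   <- predict(fit, newdata = data.frame(jamal = 1, resentment = rr),
                     type = "response")
p_bradley <- predict(fit, newdata = data.frame(jamal = 0, resentment = rr),
                     type = "response")
data.frame(resentment = rr, discrimination = p_jamal - p_bradley)
```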

---

The experiment did not detect sufficient evidence of racial bias among White participants as a whole. But what about Black participants? Results indicated a relatively large favoring of Jamal over Bradley among Black participants in unweighted data (N=41 per condition). For guilt, the bias was 29 percentage points in unweighted analyses and 33 percentage points in weighted analyses. For sentence length, the bias was 8.7 months in unweighted analyses and 9.4 months in weighted analyses, relative to an unweighted standard deviation of 16.1 months in sentence length among Black respondents.

Results for the guilty/not guilty outcome:

Results for the mean sentence length outcome:

The p-value was under 0.05 for my unweighted tests of whether the size of the discrimination among Whites (about 7 percentage points for guilty, about 1.3 months for sentence length) differed from the size of the discrimination among Blacks (about 29 percentage points for guilty, about 8.7 months for sentence length); the inference is the same for weighted analyses. The evidence is even stronger considering that the point estimate of discrimination among Whites was in the pro-Jamal direction and not in the pro-ingroup direction.
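
Here is a minimal R sketch of one way to run that type of test: a linear probability model in which the interaction coefficient is the Black-White difference in discrimination. Variable names (`guilty`, `sentence`, `jamal`, `black`, `white`, data frame `jurors`) are placeholders for the replication data:

```r
# Restrict to Black and White participants; the coefficient on
# jamal:black estimates how much larger the Jamal-vs-Bradley gap is
# among Black participants than among White participants.
bw <- subset(jurors, black == 1 | white == 1)

lpm <- lm(guilty ~ jamal * black, data = bw)
summary(lpm)$coefficients["jamal:black", ]   # estimate, SE, t, p

# Same logic for the sentence length outcome.
lpm2 <- lm(sentence ~ jamal * black, data = bw)
summary(lpm2)$coefficients["jamal:black", ]
```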

---

NOTES

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. "Guilty" plot: data and R code. "Sentence length" plot: data and R code.

2. I plan to publish a follow-up post about evidence for validity of racial resentment from the Rice et al. 2021 results, plus a follow-up post about how well different measures predicted racial bias in the experiment.


Forthcoming in the Journal of Politics is Peyton and Huber 2021 "Racial Resentment, Prejudice, and Discrimination". Study 1 estimated discrimination among White MTurk workers playing a game with a White proposer or a Black proposer. The abstract indicated that:

Study 1 used the Ultimatum Game (UG) to obtain a behavioral measure of racial discrimination and found whites engaged in anti-Black discrimination. Explicit prejudice explained which whites discriminated whereas resentment did not.

I didn't see an indication in the paper of a test for whether explicit prejudice predicted discrimination against Blacks better than racial resentment did. I think that the data had 173 workers coded non-White and 20 workers with missing data on the race variable, but Peyton and Huber 2021 reported results for only White workers, so I'll limit my analysis to reflect their analysis in Table S1.1, which is labeled in their code as "main analysis".

My analysis indicated that the discrimination against Black proposers was 2.4 percentage points among White workers coded as prejudiced (p=0.004) and 1.3 percentage points among White workers coded as high in racial resentment (p=0.104), with a p-value of p=0.102 for a test of whether these estimates differ from each other.
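
For readers interested in the mechanics, here is a minimal R sketch of how the two gaps can be compared in a single model. The variable names (`accept`, `black_proposer`, `prejudiced`, `high_rr`) and the data frame `ug` are placeholders, and the linear probability specification is an assumption for illustration rather than the Peyton and Huber 2021 specification:

```r
library(car)  # for linearHypothesis()

# Interact the Black-proposer indicator with both the prejudice
# indicator and the high-resentment indicator; each interaction
# coefficient is that group's extra discrimination gap.
fit <- lm(accept ~ black_proposer * prejudiced + black_proposer * high_rr,
          data = ug)

# Test whether the prejudice gap and the resentment gap differ.
linearHypothesis(fit, "black_proposer:prejudiced = black_proposer:high_rr")
```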

---

Sorting workers into a prejudiced group or a not-prejudiced group based on responses to the stereotype scales, as Peyton and Huber 2021 did, permits assessment of whether the stereotype scales sorted workers by discrimination propensities. But I was also interested in the extent to which the measure of prejudice detected discrimination because the non-prejudiced comparison category included Whites who, on net, reported more negative stereotypes of Whites than of Blacks. My analysis indicated that the point estimate for discrimination was:

* 2.4 percentage points against Blacks (p=0.001), among White workers who rated Blacks more negatively than Whites on net on the stereotype scales,

* 0.9 percentage points against Blacks (p=0.173), among White workers who rated Blacks equal to Whites on net on the stereotype scales, and

* 1.8 percentage points in favor of Blacks (p=0.147), among White workers who rated Blacks more positively than Whites on net on the stereotype scales.

The p-value for the difference between the 2.4 percentage point estimate and the 0.9 percentage point estimate is p=0.106, and the p-value for the difference between the 0.9 percentage point estimate and the -1.8 percentage point estimate is also p=0.106.
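
Here is a minimal R sketch of that three-way split. Here `stereo_net` is a placeholder for a net Black-minus-White stereotype rating, with the exact cut points an assumption for illustration:

```r
# Classify each White worker by the sign of the net stereotype rating:
# negative = rated Blacks more negatively than Whites on net.
ug$net_cat <- cut(ug$stereo_net, breaks = c(-Inf, -0.001, 0.001, Inf),
                  labels = c("Blacks rated worse", "rated equal",
                             "Blacks rated better"))

# Discrimination point estimate within each category.
by(ug, ug$net_cat, function(d)
  coef(lm(accept ~ black_proposer, data = d))["black_proposer"])
```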

---

NOTES

1. I have blogged about measuring "prejudice". The Peyton and Huber 2021 definition of prejudice is not bad:

Prejudice is a negative evaluation of another person based on their group membership, whereas discrimination is a negative behavior toward that person (Dovidio and Gaertner, 1986).

But I don't think that this is how Peyton and Huber 2021 measured prejudice. I think that instead a worker was coded as prejudiced for reporting a more negative evaluation about Blacks relative to Whites, on net for the four traits that workers were asked about. That's a *relatively* more negative perception of a *group*, not a negative evaluation of an individual person based on their group.

2. Peyton and Huber 2021 used one interaction term to compare discrimination among White workers with high racial resentment to discrimination among the residual White workers, and another interaction term to compare discrimination among White workers explicitly prejudiced against Blacks relative to Whites to discrimination among the residual White workers.

Line 77 of the Peyton and Huber code tests whether, in a model including both interaction terms for the "Table S1.1, main analysis" section, the estimated discrimination gap differed between the prejudice categories and the racial resentment categories. The p-value was p=0.0798 for that test.

3. Data. Stata code for my analysis. Stata output for my analysis.


Racial resentment (also known as symbolic racism) is a common measure of racial attitudes in social science. See this post for items commonly used for racial resentment measures. For this post, I'll report plots about racial resentment, using data from the American National Election Studies 2020 Time Series Study.

---

This first plot reports the percentage of respondents that rated Whites, Blacks, Hispanics, and Asians/Asian-Americans equally on 0-to-100 feeling thermometers, at each level of a 0-to-16 racial resentment index. Respondents at the lowest level of racial resentment had a lower chance of rating the included racial groups equally, compared to respondents at moderate levels of racial resentment or even compared to respondents at the highest level of racial resentment.
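
Here is a minimal R sketch of the computation behind this plot; the thermometer and index variable names are placeholders rather than the ANES 2020 codebook names:

```r
# Flag respondents who rated all four groups identically, then take
# the percentage flagged at each level of the 0-to-16 index.
anes$equal_ratings <- with(anes, ft_whites == ft_blacks &
                                 ft_blacks == ft_hispanics &
                                 ft_hispanics == ft_asians)
aggregate(equal_ratings ~ rr_index, data = anes,
          FUN = function(x) 100 * mean(x, na.rm = TRUE))
```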

---

This next plot reports the mean racial resentment for various groups. The top section is based on responses to 0-to-100 feeling thermometers about Whites, Blacks, Hispanics, and Asians/Asian-Americans. Respondents who rated all four included racial groups equally fell at about the middle of the racial resentment index, and respondents who reported isolated negative ratings about Whites (i.e., rated Whites under 50 but rated Blacks, Hispanics, and Asian/Asian-Americans at 50 or above) fell toward the low end of the racial resentment index.

The bottom two sections of the above plot report mean racial resentment based on responses to the "lazy" and "violent" stereotype items.

---

So I think that the above plots indicate that low levels of racial resentment aren't obviously normatively good.

---

Below is an update on how well racial resentment predicts attitudes about the environment and some other things that I don't expect to have a strong direct causal relationship with racial attitudes. The plot below reports OLS regression coefficients for racial resentment on a 0-to-1 scale, predicting the indicated outcomes on a 0-to-1 scale, with controls for gender, age group, education, marital status, income, partisanship, and ideology, each entered as a categorical predictor.

The estimated effects of racial resentment on attitudes about federal spending on welfare and federal spending on crime (attitudes presumably related to race) are of similar "small to moderately small" size as the estimated effects of racial resentment on attitudes about greenhouse regulations, climate change causing severe weather, and federal spending on the environment.
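
Here is a minimal R sketch of one such regression; the variable names are placeholders rather than the ANES 2020 codebook names:

```r
# Outcome and racial resentment rescaled 0-to-1; each control entered
# as a set of category indicators via factor().
fit <- lm(enviro_spending ~ rr + factor(gender) + factor(age_group) +
            factor(educ) + factor(marital) + factor(income) +
            factor(pid7) + factor(ideo7), data = anes)
coef(summary(fit))["rr", ]  # the coefficient plotted for this outcome
```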

Racial resentment also predicted attitudes about the environment net of controls in ANES data from 1986.

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

2. Stata and R code. Dataset for Plot 1. Dataset for Plot 2. Dataset for Plot 3.


Social Science Quarterly recently published Cooper et al. 2021 "Heritage Versus Hate: Assessing Opinions in the Debate over Confederate Monuments and Memorials". The conclusion of the article notes that:

...we uncover significant evidence that the debate over Confederate monuments can be resoundingly summarized as "hate" over "heritage"

---

In a prior post, I noted that:

...when comparing the estimated effect of predictors, inferences can depend on how well each predictor is measured, so such analyses should discuss the quality of the predictors.

Cooper et al. 2021 measured "heritage" with a dichotomous predictor and measured "hate" with a five-level predictor, and this difference in the precision of the measurements could have biased their research design toward a larger estimate for hate than for heritage. [See note 3 below for a discussion].

I'm not suggesting that the entire difference between their estimates for heritage and hate is due to the number of levels of the predictors, but I think that a better peer review would have helped eliminate that flaw in the research design, maybe by requiring the measure of hate to be dichotomized as close as possible to 70/30 like the measure of heritage was.

---

Here is the lone measure of heritage used in Cooper et al. 2021:

"Do you consider yourself a Southerner, or not?"

Table 1 of the article indicates that 70% identified as a Southerner, so, even if this were a face-valid measure of Southern heritage, the measure places into its highest level persons at only the 35th percentile of Southern heritage.

Maybe there is more recent data that undercuts this, but data from the Spring 2001 Southern Focus Poll indicated that only about 1 in 3 respondents who identified as a Southerner indicated that being a Southerner was "very important" to them. About 1 in 3 respondents who identified as a Southerner in that 2001 poll indicated that being a Southerner was "not at all important" or "not very important" to them, and I can't think of a good reason why, without other evidence, these participants belong in the highest level of a measure of Southern heritage.

---

Wright and Esses 2017 had a more precise measure for heritage and found sufficient evidence to conclude that (p. 232):

Positive attitudes toward the Confederate battle flag were more strongly associated with Southern pride than with racial attitudes when accounting for these covariates.

How does Cooper et al. 2021 address the Wright and Esses 2017 result, which conflicts with the result from Cooper et al. 2021 and which used a related outcome variable and a better measure of heritage? The Cooper et al. 2021 article doesn't even mention Wright and Esses 2017.

---

A better peer review might have caught the minimum age of zero years old in Table 1 and objected to the description of "White people are currently under attack in this country" as operationalizing "racial resentment toward blacks" (pp. 8-9), given that this item doesn't even mention or refer to Blacks. I suppose that respondents who hate White people would be reluctant to agree that White people are under attack regardless of whether that is true. But that's not the "hate" that is supposed to be measured.

Estimating the effect of "hate" for this type of research should involve comparing estimates net of controls for respondents who have a high degree of hate for Blacks to respondents who are indifferent to Blacks. Such estimates can be biased if the estimates instead include data from respondents who have more negative feelings about Whites than about Blacks. In a prior post, I discussed Carrington and Strother 2020, which measured hate with a Black/White feeling thermometer difference and thus permitted estimation of how much of the effect of hate is due to respondents rating Blacks higher than Whites on the feeling thermometers.

---

Did Cooper et al. have access to better measures of hate than the item "White people are currently under attack in this country"? The Winthrop Poll site didn't list the Nov 2017 survey on its archived poll page for 2017. But, from what I can tell, this Winthrop University post discusses the survey, which included a better measure of racial resentment toward blacks. I don't know what information the peer reviewers of Cooper et al. 2021 had access to, but, generally, a journal reform that I would like to see for manuscripts reporting on a survey is for peer reviewers to be given access to the entire set of items for a survey.

---

In conclusion, for a study that compares the estimated effects of heritage and hate, I think that at least three things are needed: a good measure of heritage, a good measure of hate, and the good measure of heritage being of similar quality to the good measure of hate. I don't think that Cooper et al. 2021 has any of those things.

---

NOTES

1. The Spring 2001 Southern Focus Poll study was conducted by the Odum Institute for Research in Social Science of the University of North Carolina at Chapel Hill. Citation: Center for the Study of the American South, 2001, "Southern Focus Poll, Spring 2001", https://hdl.handle.net/1902.29/D-31552, UNC Dataverse, V1.

2. Stata output.

3. Suppose that mean support for leaving Confederate monuments as they are were 70% among the top 20 percent of respondents by Southern pride, 60% among the next 20 percent of respondents by Southern pride, 50% among the middle 20 percent, 40% among the next 20 percent, and 30% among the bottom 20 percent of respondents by Southern pride. And let's assume that these bottom 20 percent are indifferent about Southern pride and don't hate Southerners.

The effect of Southern pride could be estimated at 40 percentage points, which is the difference in support among the top 20 percent and bottom 20 percent by Southern pride. However, if we grouped the top 60 percent together and the bottom 40 percent together, the mean percentage support would respectively be 60% and 35%, for an estimated effect of 25 percentage points. In this illustration, the estimated effect for the five-level predictor is larger than the estimate for the dichotomous predictor, even with the same data.
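
The arithmetic can be checked in a few lines of R:

```r
# Mean support in five equal-sized Southern-pride quintiles.
support <- c(top = 70, fourth = 60, middle = 50, second = 40, bottom = 30)

# Five-level contrast: top quintile minus bottom quintile.
unname(support["top"] - support["bottom"])   # 40 percentage points

# Dichotomized at 60/40: top three quintiles vs. bottom two.
mean(support[1:3]) - mean(support[4:5])      # 60 - 35 = 25 points
```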

Here is a visual illustration:

The above is a hypothetical to illustrate the potential bias in measuring one predictor with five levels and another predictor with two levels. I have no idea whether this had any effect on the results reported in Cooper et al. 2021. But, with a better peer review, readers would not need to worry about this type of bias in the Cooper et al. 2021 research design.


The journal Politics, Groups, and Identities recently published Mangum and Block Jr. 2021 "Perceived racial discrimination, racial resentment, and support for affirmative action and preferential hiring and promotion: a multi-racial analysis".

---

The article notes that (p. 13):

Intriguingly, blame [of racial and ethnic minorities] tends to be positively associated with support for preferential hiring and promotion, and, in 2008, this positive relationship is statistically significant for Black and Asian respondents (Table A4; lower right graph in Figure 6). This finding is confounding...

But, from what I can tell, this finding might be because the preferential hiring and promotion outcome variable was coded in the reverse of the intended direction. Table 2 of the article indicates that a higher percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion, but Figures 1 and 2 indicate that a lower percentage of Blacks than of Whites, Hispanics, and Asians favored preferential hiring and promotion.

My analysis of data for the 2004 National Politics Study indicated that the preferential hiring and promotion results in Table 2 are correct for this survey and that blame of racial and ethnic minorities negatively associates with favoring preferential hiring and promotion.

---

Other apparent errors in the article include:

Page 4:

Borrowing from the literature on racial resentment possessed (Feldman and Huddy 2005; Kinder and Sanders 1996; Kinder and Sears 1981)...

Figures 3, 4, 5, and 6:

...holding control variable constant

Page 15:

African Americans, Hispanics, and Asians support affirmative action more than are Whites.

Page 15:

Preferential hiring and promotion is about who deserves special treatment than affirmative action, which is based more on who needs it to overcome discrimination.

Note 2:

...we code the control variables to that they fit a 0-1 scale...

---

Moreover, the article indicates that "the Supreme Court ruled that affirmative action was constitutional in California v. Bakke in 1979", which is not the correct year: the Bakke decision was issued in 1978. And the article seems to make inconsistent claims about affirmative action: "affirmative action and preferential hiring and promotion do not benefit Whites" (p. 15), but "White women are the largest beneficiary group (Crosby et al. 2003)" (p. 13).

---

At least some of these flaws seem understandable. But I think that the number of flaws in this article is remarkably high, especially for a peer-reviewed journal with such a large editorial group: Politics, Groups, and Identities currently lists a 13-member editorial team, a 58-member editorial board, and a 9-member international advisory board.

---

NOTES

1. The article claims that (p. 15):

Regarding all races, most of the racial resentment indicators are significant statistically and in the hypothesized direction. These findings lead to the conclusion that preferential hiring and promotion foster racial thinking more than affirmative action. That is, discussions of preferential hiring and promotion lead Americans to consider their beliefs about minorities in general and African Americans in particular more than do discussions of affirmative action.

However, I'm not sure of how the claim that "preferential hiring and promotion foster racial thinking more than affirmative action" is justified by the article's results regarding racial resentment.

Maybe this refers to the slopes being steeper for the preferential hiring and promotion outcome than for the affirmative action outcome, but it would be a lot easier to eyeball slopes across figures if the y-axes were consistent across figures; instead, the y-axes run from .4 to .9 (Figure 3), .4 to 1 (Figure 4), .6 to 1 (Figure 5), and .2 to 1 (Figure 6).

Moreover, Figure 1 is a barplot that has a y-axis that runs from .4 to .8, and Figure 2 is a barplot that has a y-axis that runs from .5 to .9, with neither barplot starting at zero. It might make sense for journals to have an editorial board member or other person devoted to reviewing figures, to eliminate errors and improve presentation.

For example, the article indicates that (p. 6):

Figures 1 and 2 display the distribution of responses for our re-coded versions of the dependent variables graphically, using bar graphs containing 95% confidence intervals. To interpret these graphs, readers simply check to see if the confidence intervals corresponding to any given bar overlap with those of another.

But if the intent is to use confidence interval overlap to assess whether there is sufficient evidence at p<0.05 of a difference between groups, then confidence intervals closer to 85% are more appropriate. I haven't always known this, but this does seem to be knowledge that journal editors should use to foster better figures.
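
The reasoning can be demonstrated in a few lines of R. For two independent estimates with equal standard errors, the intervals just touch when the estimates are 2*z*SE apart, and the difference reaches p=0.05 when it exceeds 1.96*sqrt(2)*SE, so the confidence level whose non-overlap corresponds to p=0.05 is about 83%, not 95%:

```r
# Solve 2*z = 1.96*sqrt(2) for z, then convert z to a coverage level.
z <- 1.96 * sqrt(2) / 2   # about 1.386
2 * pnorm(z) - 1          # about 0.834: an ~83% confidence interval
```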

2. Data citation:

James S. Jackson, Vincent L. Hutchings, Ronald Brown, and Cara Wong. National Politics Study, 2004. ICPSR24483-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2009-03-23. doi:10.3886/ICPSR24483.v1.


The Journal of Race, Ethnicity, and Politics published Buyuker et al. 2020: "Race politics research and the American presidency: thinking about white attitudes, identities and vote choice in the Trump era and beyond".

Table 2 of Buyuker et al. 2020 reported regressions predicting Whites' projected and recalled vote for Donald Trump over Hillary Clinton in the 2016 U.S. presidential election, using predictors such as White identity, racial resentment, xenophobia, and sexism. Xenophobia placed into the top tier of predictors, with an estimated maximum effect of 88 percentage points going from the lowest to the highest value of the predictor, and racial resentment placed into the second tier, with an estimated maximum effect of 58 percentage points.

I was interested in whether this difference is at least partly due to how well each predictor was measured. Here are characteristics of the predictors among Whites, which indicate that xenophobia was measured at a much more granular level than racial resentment was:

RACIAL RESENTMENT
4 items
White participants fell into 22 unique levels
4% of Whites at the lowest level of racial resentment
9% of Whites at the highest level of racial resentment

XENOPHOBIA
10 items
White participants fell into 1,096 unique levels
1% of Whites at the lowest level of xenophobia
1% of Whites at the highest level of xenophobia

So it's at least plausible from the above results that xenophobia might have outperformed racial resentment merely because the measurement of xenophobia was better than the measurement of racial resentment.
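
Here is a minimal R sketch of the granularity comparison; the predictor names and the data frame `anes16` are placeholders for the replication data:

```r
# For each 0-to-1 predictor among Whites: the number of unique scale
# values taken, and the percentage of respondents at each endpoint.
granularity <- function(x) {
  c(levels  = length(unique(na.omit(x))),
    pct_min = 100 * mean(x == min(x, na.rm = TRUE), na.rm = TRUE),
    pct_max = 100 * mean(x == max(x, na.rm = TRUE), na.rm = TRUE))
}
sapply(anes16[, c("resentment", "xenophobia")], granularity)
```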

---

Racial resentment was measured with four items that each had five response options, so I created a reduced xenophobia predictor using the four xenophobia items that each had exactly five response options; these items were about desired immigration levels and agreement or disagreement with statements that "Immigrants are generally good for America's economy", "America's culture is generally harmed by immigrants", and "Immigrants increase crime rates in the United States".

I re-estimated the Buyuker et al. 2020 Table 2 model replacing the original xenophobia predictor with the reduced xenophobia predictor: the estimated maximum effect for xenophobia (66 percentage points) was then essentially the same as the estimated maximum effect for racial resentment (66 percentage points).

---

Among Whites, vote choice correlated between r=0.50 and r=0.58 with each of the four racial resentment items and between r=0.39 and r=0.56 with nine of the ten xenophobia items. The exception was the seven-point item that measured attitudes about building a wall on the U.S. border with Mexico, which correlated with vote choice at r=0.72.

Replacing the desired immigration levels item in the reduced xenophobia predictor with the border wall item produced a larger estimated maximum effect for xenophobia (85 percentage points) than for racial resentment (60 percentage points). Removing all predictors from the model except for xenophobia and racial resentment, the reduced xenophobia predictor with the border wall item still produced a larger estimated maximum effect than did racial resentment: 90 percentage points, compared to 74 percentage points.

But the larger effect for xenophobia is not completely attributable to the border wall item: using a predictor that combined the other nine xenophobia items produced a maximum effect for xenophobia (80 percentage points) that was larger than the maximum effect for racial resentment (63 percentage points).

---

I think that the main takeaway from this post is that, when comparing the estimated effect of predictors, inferences can depend on how well each predictor is measured, so such analyses should discuss the quality of the predictors. Imbalances in which participants fall into 22 levels for one predictor and 1,096 levels for another predictor seem to be biased in favor of the more granular predictor, all else equal.

Moreover, I think that, for predicting 2016 U.S. presidential vote choice, it's at least debatable whether a xenophobia predictor should include an item about a border wall with Mexico, because including that item means that, instead of xenophobia measuring attitudes about immigrants per se, the xenophobia predictor conflates these attitudes with attitudes about a policy proposal that is very closely connected with Donald Trump.

---

It's not ideal to use regression to predict maximum effects, so I estimated a model using only the racial resentment predictor and the reduced four-item xenophobia predictor with the border wall item, but with a separate indicator for each level of the predictors. That model predicted failure perfectly for some levels of the predictors, so I recoded the predictors until those errors were eliminated, which involved combining the three lowest racial resentment levels (so that racial resentment ran from 2 through 16) and combining the 21st and 22nd levels of the xenophobia predictor (so that xenophobia ran from 0 through 23). In a model with only those two recoded predictors, the estimated maximum effects were 81 percentage points for xenophobia and 76 percentage points for racial resentment. Using all Buyuker et al. 2020 predictors, the respective estimates were 65 and 63 percentage points.

---

I then predicted Trump/Clinton vote choice using only the 22-level racial resentment predictor and the full 1,096-level xenophobia predictor, but placing the values of the predictors into ten levels; the original scale for the predictors ran from 0 through 1, and, for the 10-level predictors, the first level for each predictor was from 0 to 0.1, the second level was from above 0.1 to 0.2, and so on up to a tenth level from above 0.9 to 1. Using these predictors as regular predictors without "factor" notation, the gap in maximum effects was about 24 percentage points, favoring xenophobia. But using these predictors with "factor" notation, the gap favoring xenophobia fell to about 9.5 percentage points.
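
Here is a minimal R sketch of the 10-level recode and the two specifications; the variable names (`trump`, `resentment`, `xenophobia`) and the data frame `anes16` are placeholders for the replication data:

```r
# Bin each 0-to-1 predictor into ten levels: [0, 0.1], (0.1, 0.2], ...
bins <- seq(0, 1, by = 0.1)
anes16$rr10   <- cut(anes16$resentment, breaks = bins, include.lowest = TRUE)
anes16$xeno10 <- cut(anes16$xenophobia, breaks = bins, include.lowest = TRUE)

# "Regular" specification: the binned predictors entered linearly.
fit_linear <- glm(trump ~ as.numeric(rr10) + as.numeric(xeno10),
                  family = binomial, data = anes16)

# "Factor" specification: one indicator per level of each predictor.
fit_factor <- glm(trump ~ rr10 + xeno10, family = binomial, data = anes16)
```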

Plots below illustrate the difference in predictions for xenophobia: the left panel uses a regular 10-level xenophobia predictor, and the right panel uses each of the 10 levels of that predictor as a separate predictor.

---

So I'm not sure that these data support the inference that xenophobia is in a higher tier than racial resentment for predicting Trump/Clinton vote in 2016. The above analyses seem to suggest that much or all of the advantage for xenophobia over racial resentment in the Buyuker et al. 2020 analyses was due to model assumptions and/or better measurement of xenophobia.

---

Another concern about Buyuker et al. 2020 involves the measurement of predictors such as xenophobia. The xenophobia predictor is more accurately described as something such as attitudes about immigrants. If some participants are more favorable toward immigrants than toward natives, and if these participants locate themselves at low levels of the xenophobia predictor, then the effect of xenophilia among these participants is possibly being added to the effect of xenophobia.

Concerns are similar for predictors such as racial resentment and sexism. See here and here for evidence that low levels of similar predictors associate with bias in the opposite direction.

---

NOTES

1. Thanks to Beyza Buyuker for sending me replication materials for Buyuker et al. 2020.

2. Stata code for my analyses. Stata output for my analyses.

3. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2012 Time Series Study. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2016-05-17. https://doi.org/10.3886/ICPSR35157.v1.

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.
