The scientific study of White people's biases

Social Science & Medicine published Skinner-Dorkenoo et al 2022 "Highlighting COVID-19 racial disparities can reduce support for safety precautions among White U.S. residents", with data for Study 1 fielded in September 2020. Stephens-Dougan had a similar Time-sharing Experiments for the Social Sciences study "Backlash effect? White Americans' response to the coronavirus pandemic", fielded starting in late April 2020 according to the TESS page for the study.

You can check tweets about Skinner-Dorkenoo et al 2022 to see what some tweeters said about White people. But you can't tell from the Skinner-Dorkenoo et al 2022 publication or the Stephens-Dougan 2022 APSR article whether any detected effect is distinctive to White people, because both studies limited their samples to White participants.

Limiting samples to Whites doesn't seem to be a good idea if the purpose is to understand racial bias. But it might be naive to think that all social science research is designed to understand.

---

There might be circumstances in which it's justified to limit a study of racial bias to White participants, but I don't think such circumstances include:

* The Kirgios et al 2022 audit study that experimentally manipulated the race and gender of an email requester, but for which "Participants were 2,476 White male city councillors serving in cities across the United States". In late April, I tweeted a question to the first author of Kirgios et al 2022 about why the city councilor sample was limited to White men, but I haven't yet gotten a reply.

* Studies that collect sufficient data on non-White participants but do not report results from these data in the eventual publications (examples here and here).

* Proposals for federally funded experiments that request that the sample be limited to White participants, such as in the Stephens-Dougan 2020 proposal: "I want to test whether White Americans may be more resistant to efforts to curb the virus and more supportive of protests to reopen states when the crisis is framed as disproportionately harming African Americans".

---

One benefit of not limiting the subject pool by race is to limit unfair criticism of entire racial groups. For example, according to the analysis below from Bracic et al 2022, White nationalism among non-Whites was at least as influential as White nationalism among Whites in predicting support for a family separation policy, net of controls:

[figure from Bracic et al 2022]

So, to the extent that White nationalism is responsible for support for the family separation policy, that applies to White respondents and to non-White respondents.

Of course, Bracic et al 2022 doesn't report how the association for White nationalism compares to the association for, say, Black nationalism or Hispanic nationalism, or how the association for the gendered nationalist belief that "the nation has gotten too soft and feminine" compares to the association for the gendered nationalist belief that, say, "the nation is too rough and masculine".

---

And consider this suggestion from Rice et al 2022 to use racial resentment items to screen Whites for jury service:

At the practical level, our research raises important empirical and normative questions related to the use of racial resentment items during jury selection in criminal trials. If racial resentment affects jurors' votes and reasoning, should racial resentment items be used to screen white potential jurors?

Given evidence suggesting that Black juror bias is on average at least as large as White juror bias, I don't perceive a good justification to limit this suggestion to White potential jurors, although the Rice et al decision not to report results for Black mock jurors makes it easier to limit the suggestion that way.

---

NOTES

1. I caught two flaws in Skinner-Dorkenoo et al 2022, which I discussed on Twitter: [1] For the three empathy items, more than 700 respondents selected "somewhat agree" and more than 500 selected "strongly agree", but no respondent selected "agree", suggesting that the data were miscoded. [2] The below-0.05 p-value for the empathy inference appears to be due to the analysis controlling for a post-treatment measure; see the second model referred to by the lead author in the Twitter thread. I didn't conduct a full check of the Skinner-Dorkenoo et al 2022 analysis. Stata code and output for my analyses of Skinner-Dorkenoo et al 2022, with data here. Note the end of the output, indicating that the post-treatment control was affected by the treatment; a minimal sketch of these checks appears after these notes.

2. I have a prior post about the Stephens-Dougan TESS survey experiment reported in the APSR, which had substantial deviations from the pre-analysis plan. On May 31, I contacted the APSR about that and about the error discussed in that post. I received an update in September, but the Stephens-Dougan 2022 APSR article hasn't been corrected as of Oct 2.
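For concreteness, here is a minimal Stata sketch of the two checks described in note 1. The variable names (empathy1, condition, posttreat) are hypothetical stand-ins, not the names used in the Skinner-Dorkenoo et al 2022 data or in my posted code.

```
* Minimal sketch with hypothetical variable names; not the posted analysis code.

* Flaw [1]: tabulate a Likert empathy item to see whether any respondents
* fall in the "agree" category (an empty category would suggest miscoded data).
tabulate empathy1

* Flaw [2]: check whether the post-treatment control differs by treatment
* condition; if it does, controlling for it can bias the treatment estimate.
regress posttreat i.condition
```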
